
I have a few basic questions about feature selection:

1. I saw several articles and examples of feature selection (wrapper and embedded methods) where the sample data is split into train and test sets.
I understand why we need to cross-validate (split the data into train and test sets) for **building and scoring the models (the actual prediction algorithm)**.
But I can't understand the motivation for doing the same split during feature selection.
What is the benefit?
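To make question 1 concrete, here is a toy sketch I put together myself (the data and numbers are made up, only the sklearn calls are standard): if features are selected on the full dataset *before* splitting, the test score on pure noise tends to look better than chance, which is what I understand the "leakage" argument to be.

```python
# My own toy sketch: 1000 pure-noise features and random labels,
# so the *true* achievable accuracy is about 0.5.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.randn(100, 1000)       # noise features
y = rng.randint(0, 2, 100)     # labels unrelated to X

# Leaky: select the 10 "best" features using ALL rows, then split.
# The test rows already influenced which features were kept.
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
Xtr, Xte, ytr, yte = train_test_split(X_sel, y, random_state=0)
leaky = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)

# Clean: split first, fit the selector on the training rows only.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
sel = SelectKBest(f_classif, k=10).fit(Xtr, ytr)
clean = (LogisticRegression(max_iter=1000)
         .fit(sel.transform(Xtr), ytr)
         .score(sel.transform(Xte), yte))

print(leaky, clean)  # leaky is typically well above chance on this noise
```

Is this the reason for splitting before feature selection, or is there more to it?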

2. In IEEE publications there are a lot of new algorithms and techniques for feature selection (e.g., FCBF - Fast Correlation Based Filter) which are **not implemented** in sklearn.
Can I assume that if those feature selection algorithms are not implemented in sklearn, they are not popular?
Do you recommend implementing some of them and testing them?
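For context on what implementing something like FCBF would involve: as far as I understand, its core measure is symmetrical uncertainty. Here is my own sketch of just that measure for already-discretized features (this is not FCBF itself, and the function names are mine):

```python
# Symmetrical uncertainty SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)),
# a normalized mutual information in [0, 1], used by FCBF to score
# feature-class and feature-feature relevance. Assumes discrete inputs.
import numpy as np

def entropy(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y):
    xy = np.column_stack((x, y))
    _, counts = np.unique(xy, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 1.0  # both variables are constant
    mi = hx + hy - joint_entropy(x, y)  # I(X; Y)
    return 2 * mi / (hx + hy)

a = np.array([0, 0, 1, 1])
print(symmetrical_uncertainty(a, a))                    # identical → 1.0
print(symmetrical_uncertainty(a, np.array([0, 1, 0, 1])))  # independent → 0.0
```

Writing this much was easy, but the full FCBF redundancy-removal loop is more work, hence my question about whether it is worth it.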

3. sklearn has several filter methods (SelectKBest, SelectPercentile).
All of these take K as an input parameter, which tells the method how many features (or what percentage) to select.
How can I know which K is best to choose? (It seems that I need to try several values of K and pick the best subset.) Am I right?
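For example, I imagine treating K as just another hyperparameter tuned by cross-validation, something like this sketch of mine (iris is only a stand-in dataset):

```python
# Tune SelectKBest's k jointly with the classifier via GridSearchCV,
# so each candidate k is scored by cross-validated accuracy.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = load_iris(return_X_y=True)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])
grid = GridSearchCV(pipe, {"select__k": [1, 2, 3, 4]}, cv=5)
grid.fit(X, y)
print(grid.best_params_["select__k"])  # the k with the best CV score
```

Is this the standard way to pick K, or is there a better rule of thumb?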


© 2020 Data Science Central ®