Exhaustive feature selection in scikit-learn?

Is there any built-in way of doing brute-force feature selection in scikit-learn? I.e., exhaustively evaluate all possible combinations of the input features and then find the best subset. I'm familiar with the "Recursive feature elimination" class, but I'm specifically interested in evaluating all possible combinations of the input features one after the other.

No, best subset selection is not implemented. The easiest way to do it is to write it yourself. This should get you started:

from itertools import chain, combinations

import numpy as np
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in old versions

def best_subset_cv(estimator, X, y, cv=3):
    n_features = X.shape[1]
    # every non-empty subset of the feature indices, in order of increasing size
    subsets = chain.from_iterable(combinations(range(n_features), k + 1)
                                  for k in range(n_features))

    best_score = -np.inf
    best_subset = None
    for subset in subsets:
        score = cross_val_score(estimator, X[:, subset], y, cv=cv).mean()
        if score > best_score:
            best_score, best_subset = score, subset

    return best_subset, best_score

This performs k-fold cross-validation inside the loop, so it will fit roughly k · 2^p estimators when given data with p features (one k-fold CV run per non-empty feature subset).
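As a usage sketch (the synthetic regression data and LinearRegression below are illustrative choices, kept small so the 2^p subsets stay tractable):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# Illustrative data: 6 features keeps the 2^6 - 1 = 63 subsets cheap to evaluate
X, y = make_regression(n_samples=100, n_features=6, n_informative=3,
                       noise=1.0, random_state=0)

subset, score = best_subset_cv(LinearRegression(), X, y, cv=3)
print("best subset:", subset)
print("mean CV score:", score)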

Combining the answer of Fred Foo and the comments of nopper, ihadanny and jimijazz, the following code gets the same results as the R function regsubsets() (part of the leaps library) for the first example in Lab 1 (6.5.1 Best Subset Selection) in the book "An Introduction to Statistical Learning with Applications in R".

from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in old versions

def best_subset(estimator, X, y, max_size=8, cv=5):
    '''Calculates the best model of up to max_size features of X.
       estimator must have fit and score methods.
       X must be a DataFrame.'''

    n_features = X.shape[1]
    subsets = (combinations(range(n_features), k + 1)
               for k in range(min(n_features, max_size)))

    best_size_subset = []
    for subsets_k in subsets:  # for each list of subsets of the same size
        best_score = -np.inf
        best_subset = None
        for subset in subsets_k: # for each subset
            estimator.fit(X.iloc[:, list(subset)], y)
            # get the subset with the best score among subsets of the same size
            score = estimator.score(X.iloc[:, list(subset)], y)
            if score > best_score:
                best_score, best_subset = score, subset
        # to compare subsets of different sizes we must use CV
        # first store the best subset of each size
        best_size_subset.append(best_subset)

    # compare best subsets of each size
    best_score = -np.inf
    best_subset = None
    list_scores = []
    for subset in best_size_subset:
        score = cross_val_score(estimator, X.iloc[:, list(subset)], y, cv=cv).mean()
        list_scores.append(score)
        if score > best_score:
            best_score, best_subset = score, subset

    return best_subset, best_score, best_size_subset, list_scores

See notebook at http://nbviewer.jupyter.org/github/pedvide/ISLR_Python/blob/master/Chapter6_Linear_Model_Selection_and_Regularization.ipynb#6.5.1-Best-Subset-Selection
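A minimal usage sketch (the DataFrame, column names, and LinearRegression below are illustrative assumptions, not taken from the notebook):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Illustrative data: X must be a DataFrame, as the docstring requires
rng = np.random.RandomState(0)
X = pd.DataFrame(rng.randn(100, 10),
                 columns=["x%d" % i for i in range(10)])
y = 2 * X["x0"] - 3 * X["x3"] + rng.randn(100)

best, best_score, best_per_size, cv_scores = best_subset(
    LinearRegression(), X, y, max_size=5, cv=5)
print("selected columns:", [X.columns[i] for i in best])
print("mean CV score:", best_score)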

You might want to take a look at MLxtend's Exhaustive Feature Selector. It is obviously not built into scikit-learn (yet?) but does support its classifier and regressor objects.
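A minimal sketch of how it is typically used (the estimator, dataset, and parameter values below are illustrative assumptions; check the MLxtend docs for the exact API of your version):

from mlxtend.feature_selection import ExhaustiveFeatureSelector as EFS
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Evaluate every feature subset of size 1 to 4 with 5-fold CV
efs = EFS(LogisticRegression(max_iter=1000),
          min_features=1,
          max_features=4,
          scoring='accuracy',
          cv=5)
efs = efs.fit(X, y)

print("best feature indices:", efs.best_idx_)
print("best CV accuracy:", efs.best_score_)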

Comments
  • @AbhishekThakur Thanks, but no, I want a "stupid" brute-force feature selection -- I can do it myself in a loop over all combinations, but I'd prefer a built-in method/pipeline if one exists.
  • Thanks for your answer !!
  • There's an error in the code. It should be combinations(xrange(n_features)).
  • Performance tip: when comparing different models of the same size k, it is unnecessary to perform CV -- it is enough to compare a train-set statistic such as R^2. CV is only necessary when comparing the best candidates of different sizes. See chapter 6 in this excellent book: www-bcf.usc.edu/~gareth/ISL
  • also, for sklearn 0.22 you must slice the input like this: X.iloc[:, list(subset)]