General Examples - Ex 2: Concatenating multiple feature extraction methods

小牛编辑 · 2023-12-01

http://scikit-learn.org/stable/auto_examples/feature_stacker.html

In many real-world applications there are many different ways to extract features from a dataset, and it is often useful to combine several of them to obtain a good feature set. This example shows how to use FeatureUnion to combine the features obtained from PCA with those obtained from univariate selection.

The main points of this example:

  1. Dataset: the iris dataset
  2. Features: iris flower measurements
  3. Prediction target: which species of iris each sample is
  4. Machine learning method: SVM (support vector machine)
  5. Focus: combining features
  6. Key function: sklearn.pipeline.FeatureUnion

(一) Loading and describing the data

  • First, load the iris dataset with from sklearn.datasets import load_iris
  • Prepare X (the feature data) and y (the target data)

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest

iris = load_iris()
X, y = iris.data, iris.target
```

Inspecting the data:

iris is a dict-like object with the following keys:

  • target_names, shape (3,): the three iris species, setosa, versicolor, and virginica
  • data, shape (150, 4): 150 samples, each with four features
  • target, shape (150,): the species label of each of the 150 samples
  • DESCR: a textual description of the dataset
  • feature_names: the meaning of the four features
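The dataset structure described above can be checked directly; this is a minimal sketch assuming scikit-learn is installed:

```python
from sklearn.datasets import load_iris

iris = load_iris()

# The three species names and the shapes listed above
print(iris.target_names)   # ['setosa' 'versicolor' 'virginica']
print(iris.data.shape)     # (150, 4)
print(iris.target.shape)   # (150,)
print(iris.feature_names)  # the meaning of the four columns of iris.data
```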

(二) PCA and SelectKBest

  • PCA(n_components=number of principal components): Principal Component Analysis (PCA) is a widely used dimensionality-reduction method. It finds new axes such that projecting the data onto them maximizes the variance; this reduces the number of dimensions while preserving as much of the structure of the original data as possible.

  • SelectKBest(score_func, k): score_func is the scoring function used to rank the features, and k is the number of features to keep.

```python
# This dataset is way too high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features were good, too?
selection = SelectKBest(k=1)
```
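To see what each transformer does on its own before combining them, the two can be fitted separately; a sketch, assuming the default score_func (f_classif) for SelectKBest:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest

iris = load_iris()
X, y = iris.data, iris.target

# PCA projects the four original features onto 2 principal components
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print(X_pca.shape)  # (150, 2)

# SelectKBest keeps only the single highest-scoring original feature
selection = SelectKBest(k=1)
X_sel = selection.fit_transform(X, y)
print(X_sel.shape)  # (150, 1)
```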

(三) FeatureUnion

  • Use sklearn.pipeline.FeatureUnion to combine principal component analysis (PCA) and univariate selection (SelectKBest).
  • The result is the combined feature matrix.

```python
# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
```
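A FeatureUnion concatenates the outputs of its transformers column-wise, so here the 2 PCA components and the 1 selected feature give a 3-column matrix. A self-contained sketch:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion

iris = load_iris()
X, y = iris.data, iris.target

combined_features = FeatureUnion([("pca", PCA(n_components=2)),
                                  ("univ_select", SelectKBest(k=1))])
X_features = combined_features.fit(X, y).transform(X)

# 2 PCA components + 1 selected feature = 3 columns
print(X_features.shape)  # (150, 3)
```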

(四) Finding the best result

  • scikit-learn's support vector machine classifier is created with SVC(); the resulting estimator's .fit() and .predict() methods are then used for training and prediction.

  • Use GridSearchCV cross-validation to compute a score for every point of the parameter grid, find the best-scoring point, and print the parameters it corresponds to.

```python
svm = SVC(kernel="linear")

# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])

param_grid = dict(features__pca__n_components=[1, 2, 3],
                  features__univ_select__k=[1, 2],
                  svm__C=[0.1, 1, 10])

grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
```

Output:

```
Fitting 3 folds for each of 18 candidates, totalling 54 fits
[CV] features__univ_select__k=1, features__pca__n_components=1, svm__C=0.1
[CV] features__univ_select__k=1, features__pca__n_components=1, svm__C=0.1, score=0.960784 - 0.0s
...
```
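Besides best_estimator_, the fitted GridSearchCV object also exposes best_params_ and best_score_, which summarise the winning grid point directly. A sketch using the current sklearn.model_selection API (verbose output omitted to keep the run quiet):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import SVC

iris = load_iris()
X, y = iris.data, iris.target

combined_features = FeatureUnion([("pca", PCA(n_components=2)),
                                  ("univ_select", SelectKBest(k=1))])
pipeline = Pipeline([("features", combined_features),
                     ("svm", SVC(kernel="linear"))])
param_grid = dict(features__pca__n_components=[1, 2, 3],
                  features__univ_select__k=[1, 2],
                  svm__C=[0.1, 1, 10])

grid_search = GridSearchCV(pipeline, param_grid=param_grid)
grid_search.fit(X, y)

# The winning parameter combination and its cross-validated accuracy
print(grid_search.best_params_)
print(grid_search.best_score_)
```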

(五) Complete source code

Python source code: feature_stacker.py
http://scikit-learn.org/stable/auto_examples/feature_stacker.html

```python
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
#
# License: BSD 3 clause
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest

iris = load_iris()
X, y = iris.data, iris.target

# This dataset is way too high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features were good, too?
selection = SelectKBest(k=1)

# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])

# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)

svm = SVC(kernel="linear")

# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])
param_grid = dict(features__pca__n_components=[1, 2, 3],
                  features__univ_select__k=[1, 2],
                  svm__C=[0.1, 1, 10])
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
```