Scikit-learn's PolynomialFeatures helps with polynomial feature generation.
Here is a simple example:
import numpy as np
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures
# Example data:
X = np.arange(6).reshape(3, 2)
# Works fine
poly = PolynomialFeatures(2)
pd.DataFrame(poly.fit_transform(X))
   0  1  2   3   4   5
0  1  0  1   0   0   1
1  1  2  3   4   6   9
2  1  4  5  16  20  25
Question: Is there a way to apply the polynomial transformation to only a specified list of features?
e.g.
# Use previous dataframe
X2 = pd.DataFrame(X)
# Categorical feature will be handled
# by a one hot encoder in another feature generation step
X2['animal'] = ['dog', 'dog', 'cat']
# Don't try to poly transform the animal column
poly2 = PolynomialFeatures(2, cols=[1,2]) # <-- ("cols" not an actual param)
# desired outcome:
pd.DataFrame(poly2.fit_transform(X2))
   0  1  2   3   4   5  animal
0  1  0  1   0   0   1   'dog'
1  1  2  3   4   6   9   'dog'
2  1  4  5  16  20  25   'cat'
This would be especially useful when using the Pipeline functionality to combine a long series of feature-generation and model-training steps.
One option is to roll your own transformer (great example by Michelle Fullwood), but I figured someone else would have stumbled across this use case before.
Answer 0 (score: 6)
PolynomialFeatures does not have a parameter that specifies which columns of the data to apply it to, so it is not straightforward to drop it into a pipeline and expect it to work.
A more general way to do this is to use FeatureUnion and specify a transformer for each feature in your dataframe, using separate pipelines.
A simple example might be:
import pandas as pd
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import PolynomialFeatures, OneHotEncoder, LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X = pd.DataFrame({'cat_var': ['a', 'b', 'c'], 'num_var': [1, 2, 3]})

class ColumnExtractor(object):
    def __init__(self, columns=None):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        X_cols = X[self.columns]
        return X_cols
pipeline = Pipeline([
    ('features', FeatureUnion([
        ('num_var', Pipeline([
            ('extract', ColumnExtractor(columns=['num_var'])),
            ('poly', PolynomialFeatures(degree=2))
        ])),
        ('cat_var', Pipeline([
            ('extract', ColumnExtractor(columns=['cat_var'])),
            ('le', LabelEncoder()),
            ('ohe', OneHotEncoder()),
        ]))
    ])),
    ('estimator', LogisticRegression())
])
Answer 1 (score: 3)
Yes, check out sklearn-pandas.
This should work (there should be a more elegant solution, but I can't test it right now):
from sklearn.preprocessing import PolynomialFeatures
from sklearn_pandas import DataFrameMapper
X2.columns = ['col0', 'col1', 'col2', 'col3', 'col4', 'col5', 'animal']
mapper = DataFrameMapper([
    ('col0', PolynomialFeatures(2)),
    ('col1', PolynomialFeatures(2)),
    ('col2', PolynomialFeatures(2)),
    ('col3', PolynomialFeatures(2)),
    ('col4', PolynomialFeatures(2)),
    ('col5', PolynomialFeatures(2)),
    ('animal', None)
])
X3 = mapper.fit_transform(X2)
Answer 2 (score: 3)
In response to Peng Jun Huang's answer - the approach is terrific, but the implementation has issues. (This should be a comment, but it's a bit long for that. Also, not enough cookies.)
I tried to use the code and ran into some problems. After fooling around a bit, I found the following answer to the original question. The main issue is that ColumnExtractor needs to inherit from BaseEstimator and TransformerMixin to turn it into an estimator that can be used with other sklearn tools.
My example data has two numerical variables and one categorical variable. I used pd.get_dummies for the one-hot encoding to keep the pipeline a bit simpler. Also, I left out the last stage of the pipeline (the estimator) because we have no y data to fit; the main point is to show selecting, processing separately, and joining.
Enjoy.
M.
import pandas as pd
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.base import BaseEstimator, TransformerMixin
X = pd.DataFrame({'cat': ['a', 'b', 'c'], 'n1': [1, 2, 3], 'n2':[5, 7, 9] })
  cat  n1  n2
0   a   1   5
1   b   2   7
2   c   3   9
# original version had class ColumnExtractor(object)
# estimators need to inherit from these classes to play nicely with others
class ColumnExtractor(BaseEstimator, TransformerMixin):
    def __init__(self, columns=None):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        X_cols = X[self.columns]
        return X_cols
# Using pandas get dummies to make pipeline a bit simpler by
# avoiding one-hot and label encoder.
# Build the pipeline from a FeatureUnion that processes
# numerical and one-hot encoded separately.
# FeatureUnion puts them back together when it's done.
pipe2nvars = Pipeline([
    ('features', FeatureUnion([
        ('num', Pipeline([
            ('extract', ColumnExtractor(columns=['n1', 'n2'])),
            ('poly', PolynomialFeatures())
        ])),
        ('cat_var', ColumnExtractor(columns=['cat_b', 'cat_c']))
    ]))
])
# now show it working...
for p in range(1, 4):
    pipe2nvars.set_params(features__num__poly__degree=p)
    res = pipe2nvars.fit_transform(pd.get_dummies(X, drop_first=True))
    print('polynomial degree: {}; shape: {}'.format(p, res.shape))
    print(res)
polynomial degree: 1; shape: (3, 5)
[[1. 1. 5. 0. 0.]
[1. 2. 7. 1. 0.]
[1. 3. 9. 0. 1.]]
polynomial degree: 2; shape: (3, 8)
[[ 1. 1. 5. 1. 5. 25. 0. 0.]
[ 1. 2. 7. 4. 14. 49. 1. 0.]
[ 1. 3. 9. 9. 27. 81. 0. 1.]]
polynomial degree: 3; shape: (3, 12)
[[ 1. 1. 5. 1. 5. 25. 1. 5. 25. 125. 0. 0.]
[ 1. 2. 7. 4. 14. 49. 8. 28. 98. 343. 1. 0.]
[ 1. 3. 9. 9. 27. 81. 27. 81. 243. 729. 0. 1.]]
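If you do want to attach the estimator stage that was left out above, a minimal sketch might look like the following (the labels in y are made up purely for illustration):

from sklearn.linear_model import LogisticRegression

# Hypothetical labels, just so there is something to fit against.
y = [0, 1, 0]

# Reuse the feature pipeline above as the first step of a full pipeline.
full_pipe = Pipeline([
    ('features', pipe2nvars),
    ('estimator', LogisticRegression())
])
full_pipe.fit(pd.get_dummies(X, drop_first=True), y)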
Answer 3 (score: 0)
An improvement on @plumbus_bouquet's code:
from sklearn.preprocessing import PolynomialFeatures
from sklearn_pandas import DataFrameMapper
X2.columns = ['col0', 'col1', 'col2', 'animal']
degree = 2
mapper = DataFrameMapper([
    (['col0', 'col1', 'col2'], PolynomialFeatures(degree))
])
X3 = mapper.fit_transform(X2)
Another way (which I prefer) is to use ColumnTransformer from sklearn.compose. I find it very easy to use in a pipeline.
There are several ways to select the columns; a couple of them appear in the sketch below.
See the example here.
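As a minimal sketch of that ColumnTransformer approach (the column names num1, num2 and animal below are made up for illustration; ColumnTransformer requires scikit-learn 0.20+ and make_column_selector 0.22+):

import pandas as pd
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.preprocessing import OneHotEncoder, PolynomialFeatures

# Data shaped like the question's X2: two numeric columns plus a categorical one.
X2 = pd.DataFrame({'num1': [0, 2, 4], 'num2': [1, 3, 5],
                   'animal': ['dog', 'dog', 'cat']})

# Select columns by name: expand only the numeric columns,
# one-hot encode the categorical one, and drop anything else.
ct = ColumnTransformer([
    ('poly', PolynomialFeatures(degree=2), ['num1', 'num2']),
    ('ohe', OneHotEncoder(), ['animal']),
])
print(ct.fit_transform(X2))

# Columns can also be selected by dtype instead of by name.
ct_by_dtype = ColumnTransformer([
    ('poly', PolynomialFeatures(degree=2),
     make_column_selector(dtype_include='number')),
    ('ohe', OneHotEncoder(),
     make_column_selector(dtype_include=object)),
])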