I have built a text model for multi-label classification: a OneVsRestClassifier around LinearSVC, assembled with sklearn's Pipeline and FeatureUnion.
The main input features are a text column named response and five topic probabilities, t1_prob through t5_prob, one per possible label (generated by an earlier LDA topic model). The pipeline also contains other feature-creation steps that build TF-IDF (TfidfVectorizer-style) features from the text.
I ended up calling ItemSelector on each topic probability column and running ArrayCaster on it, five separate times (see the code below for the definitions). Is there a better way to select multiple columns inside a FeatureUnion pipeline, so that I do not have to repeat this five times?
I would like to know whether the topic1_feature through topic5_feature code really needs to be duplicated, or whether there is a more concise way to select multiple columns.
The data I am feeding in is a pandas DataFrame:
id  response               label_1  label_2  label3  label_4  label_5  t1_prob  t2_prob  t3_prob  t4_prob  t5_prob
1   Text from response...      0.0      0.0     1.0      0.0      0.0   0.0625   0.0625   0.1875   0.0625   0.1250
2   Text to model with...      0.0      0.0     0.0      0.0      0.0   0.1333   0.1333   0.0667   0.0667   0.0667
3   Text to work with ...      0.0      0.0     0.0      0.0      0.0   0.1111   0.0938   0.0393   0.0198   0.2759
4   Free text comments ...     0.0      0.0     1.0      1.0      1.0   0.2162   0.1104   0.0341   0.0847   0.0559
x_train is the response column plus the five topic probability columns (t1_prob, t2_prob, t3_prob, t4_prob, t5_prob).
y_train is the five label columns (label_1, label_2, label3, label_4, label_5), on which I call .values to get the numpy representation of the DataFrame.
Example DataFrame:
import pandas as pd
column_headers = ["id", "response",
"label_1", "label_2", "label3", "label_4", "label_5",
"t1_prob", "t2_prob", "t3_prob", "t4_prob", "t5_prob"]
input_data = [
[1, "Text from response",0.0,0.0,1.0,0.0,0.0,0.0625,0.0625,0.1875,0.0625,0.1250],
[2, "Text to model with",0.0,0.0,0.0,0.0,0.0,0.1333,0.1333,0.0667,0.0667,0.0667],
[3, "Text to work with",0.0,0.0,0.0,0.0,0.0,0.1111,0.0938,0.0393,0.0198,0.2759],
[4, "Free text comments",0.0,0.0,1.0,1.0,1.0,0.2162,0.1104,0.0341,0.0847,0.0559]
]
df = pd.DataFrame(input_data, columns = column_headers)
df = df.set_index('id')
df
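For concreteness, given the example df above, the x_train and y_train described earlier can be built roughly like this (a minimal sketch using the column names from the post):

feature_cols = ['response', 't1_prob', 't2_prob', 't3_prob', 't4_prob', 't5_prob']
label_cols = ['label_1', 'label_2', 'label3', 'label_4', 'label_5']

# x_train keeps the raw text plus the five LDA topic probabilities
x_train = df[feature_cols]

# y_train is the numpy representation of the five label columns
y_train = df[label_cols].values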
I think my implementation is somewhat roundabout: FeatureUnion only deals with 2-D arrays when it combines its branches, so any other type, such as a raw DataFrame column, has been problematic for me. The example below works, though; I am just looking for ways to improve it and make it more DRY.
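To make the 2-D constraint concrete, here is what pandas reports for the example df above (shapes only):

# A single column is a 1-D Series, which FeatureUnion cannot hstack with other blocks
df['t1_prob'].shape    # (4,)

# Selecting with a list of column names keeps two dimensions
df[['t1_prob', 't2_prob', 't3_prob', 't4_prob', 't5_prob']].shape    # (4, 5)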
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin

class ItemSelector(BaseEstimator, TransformerMixin):
    """Select a single column from the input DataFrame."""
    def __init__(self, column):
        self.column = column

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        return X[self.column]

class ArrayCaster(BaseEstimator, TransformerMixin):
    """Cast a 1-D column into a 2-D column vector so FeatureUnion can stack it."""
    def fit(self, x, y=None):
        return self

    def transform(self, data):
        return np.transpose(np.matrix(data))
from sklearn import svm
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier

def basic_text_model(trainX, testX, trainY, testY, classLabels, plotPath):
    '''OneVsRestClassifier for multi-label prediction'''
    # random_state is assumed to be defined at module level
    pipeline = Pipeline([
        ('features', FeatureUnion([
            ('topic1_feature', Pipeline([
                ('selector', ItemSelector(column='t1_prob')),
                ('caster', ArrayCaster())
            ])),
            ('topic2_feature', Pipeline([
                ('selector', ItemSelector(column='t2_prob')),
                ('caster', ArrayCaster())
            ])),
            ('topic3_feature', Pipeline([
                ('selector', ItemSelector(column='t3_prob')),
                ('caster', ArrayCaster())
            ])),
            ('topic4_feature', Pipeline([
                ('selector', ItemSelector(column='t4_prob')),
                ('caster', ArrayCaster())
            ])),
            ('topic5_feature', Pipeline([
                ('selector', ItemSelector(column='t5_prob')),
                ('caster', ArrayCaster())
            ])),
            ('word_features', Pipeline([
                ('vect', CountVectorizer(analyzer="word", stop_words='english')),
                ('tfidf', TfidfTransformer(use_idf=True)),
            ])),
        ])),
        ('clf', OneVsRestClassifier(svm.LinearSVC(random_state=random_state)))
    ])

    # Fit the model and predict on the held-out set
    pipeline.fit(trainX, trainY)
    predicted = pipeline.predict(testX)
    return predicted
I incorporated ArrayCaster into the process based on this answer.
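As a quick check of what ArrayCaster accomplishes (shapes taken from the four-row example df above):

# ItemSelector hands FeatureUnion a 1-D Series ...
ItemSelector(column='t1_prob').fit_transform(df).shape    # (4,)

# ... and ArrayCaster reshapes it into the 2-D column vector FeatureUnion needs
ArrayCaster().fit_transform(df['t1_prob']).shape           # (4, 1)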
Answer (score: 1)
I ended up with a FunctionTransformer solution inspired by Marcus V's answer to this question. The modified pipeline is much more concise.
from sklearn.preprocessing import FunctionTransformer

# validate=False lets the DataFrame pass through untouched, so label-based
# column selection works inside the pipeline
get_numeric_data = FunctionTransformer(
    lambda x: x[['t1_prob', 't2_prob', 't3_prob', 't4_prob', 't5_prob']],
    validate=False)

pipeline = Pipeline([
    ('features', FeatureUnion([
        ('numeric_features', Pipeline([
            ('selector', get_numeric_data)
        ])),
        ('word_features', Pipeline([
            ('vect', CountVectorizer(analyzer="word", stop_words='english')),
            ('tfidf', TfidfTransformer(use_idf=True)),
        ])),
    ])),
    ('clf', OneVsRestClassifier(svm.LinearSVC(random_state=random_state)))
])
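A quick sanity check on the example df above shows why the five ItemSelector/ArrayCaster branches are no longer needed: selecting a list of columns in a single FunctionTransformer already yields a 2-D block that FeatureUnion can stack.

# All five topic probability columns come back together as one (n_samples, 5) block
get_numeric_data.fit_transform(df).shape   # (4, 5)

Because validate=False skips the conversion to a numpy array, the lambda receives the original DataFrame and can select columns by name, so one transformer replaces the five separate topic branches.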