SMOTE in ML classification

Date: 2018-07-12 07:04:24

Tags: machine-learning scikit-learn classification

I am running a classification algorithm with sklearn in Jupyter. I want to use SMOTE because one of my classes makes up only 35% of the size of the other two. I would therefore like to oversample that class (class 1), but I don't know how to integrate SMOTE into my pipeline. (Edit: I know the SMOTE call itself; what I want to know is where it fits into the script below.) Any help?

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

from sklearn.svm import SVC
clf = SVC(kernel='linear')
clf.fit(X_train, y_train.ravel())
y_pred = clf.predict(X_test)

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=clf, X=X_train, y=y_train, cv=10)
accuracies.mean()
accuracies.std()

from sklearn.model_selection import GridSearchCV
parameters = [{'C': [1, 10, 100], 'kernel': ['linear']},
              {'C': [1, 10, 100],
               'kernel': ['rbf'],
               'gamma': [0.05, 0.001, 0.005]}]
grid_search = GridSearchCV(estimator=clf, param_grid=parameters, scoring='accuracy', cv=10)
grid_search = grid_search.fit(X_train, y_train)
best_accuracy = grid_search.best_score_
print(best_accuracy)
best_parameters = grid_search.best_params_
print(best_parameters)

3 Answers:

Answer 0 (score: 2)

You can use SMOTE from the imbalanced-learn package like this:

from imblearn.over_sampling import SMOTE

sm = SMOTE(random_state=42)
X_balanced, y_balanced = sm.fit_resample(X, y)  # where X and y are your original features and labels

Then use X_balanced and y_balanced in place of your original X and y.

Answer 1 (score: 1)

You have to apply SMOTE to your dataset and then train your model on the resulting balanced dataset.

So you have to load your data (the loading code is not shown in the question) and then apply SMOTE to it.

In code, that looks like:

from imblearn.over_sampling import SMOTE

sm = SMOTE(random_state=42)
X_resampled, y_resampled = sm.fit_resample(X, y)

Answer 2 (score: 0)

You should split your data into training and test sets before applying SMOTE, to avoid an overly optimistic (overfit) evaluation. The correct approach is to oversample only the training data, so that no synthetic samples leak into the test set. https://beckernick.github.io/oversampling-modeling/