I am splitting some data into training and test sets based on a group value. How can I get balanced folds?
For a binary classification task I have 100 samples, each with a unique ID, a subject, and a label (1 or 0).
To keep the task from degenerating into person identification, the same subject must never appear in both the training and the test set.
There are fewer subjects (57) than samples; some subjects appear in only one sample, while others appear in several samples with the same or different labels.
I can do this easily with GroupKFold from sklearn, but I also want my data to stay balanced (or at least close to balanced).
I tried the following code:
from sklearn import preprocessing
from sklearn.model_selection import GroupKFold
from sklearn.utils import shuffle

n_shuffles = 2
group_k_fold = GroupKFold(n_splits=5)

for i in range(n_shuffles):
    X_shuffled, y_shuffled, groups_shuffled = shuffle(idx, labels, subjects, random_state=i)
    splits = group_k_fold.split(X_shuffled, y_shuffled, groups_shuffled)
    for train_idx, val_idx in splits:
        X = perezDataFrame.loc[perezDataFrame['ID'].isin(X_shuffled[train_idx]), AU_names].values
        X = preprocessing.normalize(X, norm='l2')
        y = perezDataFrame.loc[perezDataFrame['ID'].isin(X_shuffled[train_idx]), 'label'].values

        XTest = perezDataFrame.loc[perezDataFrame['ID'].isin(X_shuffled[val_idx]), AU_names].values
        XTest = preprocessing.normalize(XTest, norm='l2')
        yTest = perezDataFrame.loc[perezDataFrame['ID'].isin(X_shuffled[val_idx]), 'label'].values
where idx, subjects and labels are the lists of IDs, subjects and labels, respectively.
But the resulting folds are very unbalanced.
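To make the imbalance concrete, here is roughly how I check the positive rate per fold (a minimal sketch, assuming y_shuffled is the shuffled 0/1 label array from the code above):

import numpy as np

# Inspect the class balance that each GroupKFold split produces;
# y_shuffled is assumed to hold the shuffled 0/1 labels from above.
y_arr = np.asarray(y_shuffled)
for fold, (train_idx, val_idx) in enumerate(
        group_k_fold.split(X_shuffled, y_shuffled, groups_shuffled)):
    print(f"fold {fold}: train pos rate = {y_arr[train_idx].mean():.3f}, "
          f"val pos rate = {y_arr[val_idx].mean():.3f}")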
I also tried:
for i in range(5):
    GSP = GroupShuffleSplit(n_splits=10, test_size=0.20, train_size=0.80, random_state=i)
    splits = GSP.split(idx, labels, subjects)
    for train_idx, test_idx in splits:
        .....
But this is not a K-fold scheme, so I cannot guarantee that each sample stays in only one fold.
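For example, a quick check (a sketch using the same idx, labels and subjects lists as above) shows that the test sets of the different splits can overlap:

from collections import Counter

from sklearn.model_selection import GroupShuffleSplit

# With GroupShuffleSplit a sample may end up in several test sets, or in none.
gss = GroupShuffleSplit(n_splits=10, test_size=0.20, random_state=0)
test_counts = Counter()
for _, test_idx in gss.split(idx, labels, subjects):
    test_counts.update(test_idx.tolist())
print("samples never in a test set:", len(idx) - len(test_counts))
print("samples in more than one test set:", sum(c > 1 for c in test_counts.values()))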
Answer 0 (score: 1)
I don't think there is a built-in scikit-learn cross-validator that does exactly what you want, but it should be possible to create one.
My approach would be to loop over all subjects and greedily assign each one to a test fold, depending on how much the assignment improves the fold size and the positive-class rate within that fold.
I generated some sample data that resembles your problem:
import pandas as pd
import numpy as np

n_subjects = 50
n_observations = 100
n_positives = 15

positive_subjects = np.random.randint(0, n_subjects, n_positives)

data = pd.DataFrame({
    'subject': np.random.randint(0, n_subjects, n_observations)
}).assign(
    target=lambda d: d['subject'].isin(positive_subjects)
)
   subject  target
0       14   False
1       12    True
2       10   False
3       36   False
4       21   False
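For reference, the overall positive rate that the greedy assignment below tries to reproduce in each fold can be inspected directly (a small check, not part of the snippet above):

# Overall positive rate of the generated data; the greedy assignment below
# tries to keep every fold close to this value.
print(data['target'].mean())

# Observations and positive rate per subject, for reference.
print(data.groupby('subject')['target'].agg(['size', 'mean']).head())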
Then we can do the assignment with the following snippet:
def target_rate_improvements(data, subjects, extra):
    """Improvement in squared difference between the positive rate in a fold
    and the overall positive rate in the dataset, if `extra` were added."""
    target_rate = data['target'].mean()
    rate_without_extra = data.loc[lambda d: d['subject'].isin(subjects), 'target'].mean()
    rate_with_extra = data.loc[lambda d: d['subject'].isin(subjects + [extra]), 'target'].mean()
    rate_without_extra = 0 if np.isnan(rate_without_extra) else rate_without_extra
    return (rate_without_extra - target_rate) ** 2 - (rate_with_extra - target_rate) ** 2


def size_improvement(data, subjects, n_folds):
    """Squared difference between the current number of observations in each
    fold and the expected number of observations per fold."""
    target_obs_per_fold = len(data) / n_folds
    return [(target_obs_per_fold - len(data.loc[lambda d: d['subject'].isin(subject)])) ** 2
            for subject in subjects.values()]


n_folds = 5
test_subjects_per_fold = {fold: [] for fold in range(n_folds)}

for subject in data['subject'].unique():
    target_rate_improvement = np.array([
        target_rate_improvements(data, test_subjects_per_fold[fold], subject)
        for fold in range(n_folds)
    ])
    size_improvements = np.array(size_improvement(data, test_subjects_per_fold, n_folds)) * 0.001
    best_fold = np.argmax(target_rate_improvement + size_improvements)
    test_subjects_per_fold[best_fold] += [subject]
and verify that it works as we expect:
for fold, subjects in test_subjects_per_fold.items():
    print('-' * 80)
    print(f'for fold {fold}')

    test_data = data.loc[lambda d: d['subject'].isin(subjects)]
    train_data = data.loc[lambda d: ~d['subject'].isin(subjects)]

    print('train - pos rate:', train_data['target'].mean(), 'size:', len(train_data))
    print('test - pos rate:', test_data['target'].mean(), 'size:', len(test_data))
--------------------------------------------------------------------------------
for fold 0
train - pos rate: 0.3 size: 80
test - pos rate: 0.3 size: 20
--------------------------------------------------------------------------------
for fold 1
train - pos rate: 0.3037974683544304 size: 79
test - pos rate: 0.2857142857142857 size: 21
--------------------------------------------------------------------------------
for fold 2
train - pos rate: 0.2962962962962963 size: 81
test - pos rate: 0.3157894736842105 size: 19
--------------------------------------------------------------------------------
for fold 3
train - pos rate: 0.3 size: 80
test - pos rate: 0.3 size: 20
--------------------------------------------------------------------------------
for fold 4
train - pos rate: 0.3 size: 80
test - pos rate: 0.3 size: 20
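As an extra sanity check (not in the snippet above), you can also confirm that every subject ends up in exactly one test fold, which is what enforces the group constraint:

# Every subject should be assigned to exactly one test fold.
assigned = [s for subjects in test_subjects_per_fold.values() for s in subjects]
assert len(assigned) == len(set(assigned)) == data['subject'].nunique()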
The variable naming could be improved here and there, but overall I would say this approach will work for your problem.
Implementing it in a scikit-learn compatible cross-validator would look something like this, although it needs a bit more design work:
from sklearn.model_selection._split import _BaseKFold  # private scikit-learn module

class StratifiedGroupKFold(_BaseKFold):
    ...

    def _iter_test_indices(self, X, y, groups):
        test_subjects_per_fold = {fold: [] for fold in range(self.n_splits)}

        for subject in np.unique(groups):
            target_rate_improvement = np.array([
                self.target_rate_improvements(X, y, test_subjects_per_fold[fold], subject)
                for fold in range(self.n_splits)
            ])
            size_improvements = np.array(
                self.size_improvement(X, y, test_subjects_per_fold, self.n_splits)
            ) * 0.001
            best_fold = np.argmax(target_rate_improvement + size_improvements)
            test_subjects_per_fold[best_fold] += [subject]

        for subjects in test_subjects_per_fold.values():
            yield np.flatnonzero(np.isin(groups, subjects))
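Once the missing pieces are filled in, usage would look like any other cross-validator (a hypothetical sketch; X, y and subjects stand for your feature matrix, labels and subject ids):

# Hypothetical usage of the custom cross-validator sketched above
# (this is the class defined here, not a scikit-learn built-in).
cv = StratifiedGroupKFold(n_splits=5)
for train_idx, test_idx in cv.split(X, y, groups=subjects):
    ...  # fit on the train indices, evaluate on the test indices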
Answer 1 (score: 0)
I think you should use StratifiedKFold.
In the documentation you can see this illustrated (the part marked in red): https://scikit-learn.org/stable/auto_examples/model_selection/plot_cv_indices.html#sphx-glr-auto-examples-model-selection-plot-cv-indices-py
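A minimal usage sketch (note that StratifiedKFold stratifies on the labels only and does not look at the subject groups):

from sklearn.model_selection import StratifiedKFold

# Keeps the overall class ratio in every fold; it does not take
# the subject groups into account.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(idx, labels):
    ...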
Good luck!