How do I split train/validation/test sets by participant ID?

Asked: 2021-05-12 17:00:57

Tags: python machine-learning

I wrote the code below to split my dataset into train, validation, and test sets. The dataset consists of multiple audio files recorded from 39 participants. To prevent data leakage, I need to make sure that all of a given participant's audio files end up in the same train/validation/test set (i.e. that no participant's files are split across sets). How can I modify my code to do this?
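A sketch of the participant-level split being asked for, before the original code below for reference. The key change is to shuffle and slice the *participant IDs* rather than the clip indices, then let every clip follow its participant. The `participant_ids` array is a hypothetical name (the question's snippet doesn't show one), and the toy shapes are assumptions:

```python
import numpy as np

# hypothetical stand-ins for the question's data: 200 clips from 39 participants
rng = np.random.default_rng(69)
n_clips = 200
waveforms = rng.normal(size=(n_clips, 16))
diagnoses = rng.integers(0, 4, size=n_clips)
participant_ids = rng.integers(0, 39, size=n_clips)

# shuffle the participants, not the individual clips
unique_ids = rng.permutation(np.unique(participant_ids))
n = len(unique_ids)

# slice the participants 80/10/10
train_ids = set(unique_ids[:int(0.8 * n)])
valid_ids = set(unique_ids[int(0.8 * n):int(0.9 * n)])
test_ids = set(unique_ids[int(0.9 * n):])

# every clip goes to the split that holds its participant
train_mask = np.isin(participant_ids, list(train_ids))
valid_mask = np.isin(participant_ids, list(valid_ids))
test_mask = np.isin(participant_ids, list(test_ids))

X_train, y_train = waveforms[train_mask], diagnoses[train_mask]
X_valid, y_valid = waveforms[valid_mask], diagnoses[valid_mask]
X_test, y_test = waveforms[test_mask], diagnoses[test_mask]
```

scikit-learn's `GroupShuffleSplit` (or `StratifiedGroupKFold`, which also tries to keep the diagnosis classes balanced) implements the same idea without the manual slicing. Note that with group-wise splitting the 80/10/10 ratio applies to participants, so the clip counts per set will only be approximately 80/10/10.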

# create storage for train, validation, test sets and their indices
train_set,valid_set,test_set = [],[],[]
X_train,X_valid,X_test = [],[],[]
y_train,y_valid,y_test = [],[],[]

# convert waveforms to array for processing
waveforms = np.array(waveforms)

# process each diagnosis separately to make sure we build balanced train/valid/test sets
for diagnosis_num in range(len(diagnosis_dict)):
        
    # find all indices of a single unique diagnosis
    diagnosis_indices = [index for index, diagnosis in enumerate(diagnoses) if diagnosis==diagnosis_num]
    print(diagnosis_indices)

    # seed for reproducibility 
    np.random.seed(69)
    # shuffle indices
    diagnosis_indices = np.random.permutation(diagnosis_indices)

    # store dim (length) of the diagnosis list to make indices
    dim = len(diagnosis_indices)

    # store indices of training, validation and test sets in 80/10/10 proportion
    # train set is first 80%
    train_indices = diagnosis_indices[:int(0.8*dim)]
    # validation set is next 10% (between 80% and 90%)
    valid_indices = diagnosis_indices[int(0.8*dim):int(0.9*dim)]
    # test set is last 10% (between 90% - end/100%)
    test_indices = diagnosis_indices[int(0.9*dim):]

    # create train waveforms/labels sets
    X_train.append(waveforms[train_indices,:])
    y_train.append(np.array([diagnosis_num]*len(train_indices),dtype=np.int32))
    # create validation waveforms/labels sets
    X_valid.append(waveforms[valid_indices,:])
    y_valid.append(np.array([diagnosis_num]*len(valid_indices),dtype=np.int32))
    # create test waveforms/labels sets
    X_test.append(waveforms[test_indices,:])
    y_test.append(np.array([diagnosis_num]*len(test_indices),dtype=np.int32))

    # store indices for each diagnosis set to verify uniqueness between sets
    train_set.append(train_indices)
    valid_set.append(valid_indices)
    test_set.append(test_indices)

# concatenate, in order, all waveforms back into one array 
X_train = np.concatenate(X_train,axis=0)
X_valid = np.concatenate(X_valid,axis=0)
X_test = np.concatenate(X_test,axis=0)

# concatenate, in order, all diagnoses back into one array 
y_train = np.concatenate(y_train,axis=0)
y_valid = np.concatenate(y_valid,axis=0)
y_test = np.concatenate(y_test,axis=0)

# combine and store indices for all diagnoses train, validation, test sets to verify uniqueness of sets
train_set = np.concatenate(train_set,axis=0)
valid_set = np.concatenate(valid_set,axis=0)
test_set = np.concatenate(test_set,axis=0)

# check shape of each set
print(f'Training waveforms:{X_train.shape}, y_train:{y_train.shape}')
print(f'Validation waveforms:{X_valid.shape}, y_valid:{y_valid.shape}')
print(f'Test waveforms:{X_test.shape}, y_test:{y_test.shape}')

# make sure train, validation, test sets have no overlap/are unique
# get all unique indices across all sets and how many times each index appears (count)
uniques, count = np.unique(np.concatenate([train_set,test_set,valid_set],axis=0), return_counts=True)

# if each index appears exactly once and the number of unique indices equals the
# total number of samples (len(diagnoses)), then the sets are disjoint
if sum(count==1) == len(diagnoses):
    print(f'\nSets are unique: {sum(count==1)} samples out of {len(diagnoses)} are unique')
else:
    print(f'\nSets are NOT unique: {sum(count==1)} samples out of {len(diagnoses)} are unique')    
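One caveat about that final check: it only verifies that no clip *index* appears in two sets, so it cannot catch the leakage the question is about, where two different clips from the same participant land in different sets. Assuming a `participant_ids` array (hypothetical name) aligned with the waveforms, a group-level check might look like:

```python
import numpy as np

def check_participant_leakage(participant_ids, train_set, valid_set, test_set):
    """Return participant IDs whose clips appear in more than one split."""
    participant_ids = np.asarray(participant_ids)
    splits = {'train': train_set, 'valid': valid_set, 'test': test_set}
    membership = {}  # participant id -> set of split names it appears in
    for name, indices in splits.items():
        for pid in np.unique(participant_ids[np.asarray(indices, dtype=int)]):
            membership.setdefault(int(pid), set()).add(name)
    return {pid: names for pid, names in membership.items() if len(names) > 1}

# toy example: participant 2's clips leak into both train and test
ids = np.array([0, 0, 1, 1, 2, 2])
leaks = check_participant_leakage(ids, train_set=[0, 1, 4],
                                  valid_set=[2, 3], test_set=[5])
```

An empty result means every participant's clips stay within a single split; here participant 2 is flagged because clip 4 is in train while clip 5 is in test.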

0 Answers:

No answers