I am trying to use a 1D convolutional neural network to classify x, y, and z accelerometer and gyroscope data from the thigh and shank (6 features) as walking or running (and eventually other activities), and I am sweeping over different parameters.
When I train and evaluate a model on the data, I sometimes get 100% accuracy and other times around 60% (some parameter combinations land at ~99%). The models do not look like they are overfitting based on the training vs. validation loss curves, but I find it strange that one run gives 100.000% and another gives such a low value.
To see whether this was always the case, I trained and evaluated each model 15 times and took the mean and standard deviation. Most parameter combinations show this behaviour, while some do not (see the sketch after the results below).
For example (the value at the end is the accuracy on unseen data):
>Standardize=False Filter=16 Kernel=3 Batch=32: #1: 59.701
>Standardize=False Filter=16 Kernel=3 Batch=32: #2: 100.000
>Standardize=False Filter=16 Kernel=3 Batch=32: #3: 100.000
>Standardize=False Filter=16 Kernel=3 Batch=32: #4: 99.975
>Standardize=False Filter=16 Kernel=3 Batch=32: #5: 100.000
>Standardize=False Filter=16 Kernel=3 Batch=32: #6: 40.299
>Standardize=False Filter=16 Kernel=3 Batch=32: #7: 100.000
>Standardize=False Filter=16 Kernel=3 Batch=32: #8: 59.701
>Standardize=False Filter=16 Kernel=3 Batch=32: #9: 59.701
>Standardize=False Filter=16 Kernel=3 Batch=32: #10: 100.000
>Standardize=False Filter=16 Kernel=3 Batch=32: #11: 59.701
>Standardize=False Filter=16 Kernel=3 Batch=32: #12: 100.000
>Standardize=False Filter=16 Kernel=3 Batch=32: #13: 59.701
>Standardize=False Filter=16 Kernel=3 Batch=32: #14: 59.701
>Standardize=False Filter=16 Kernel=3 Batch=32: #15: 99.975
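For reference, a minimal sketch of how such a repeat-and-aggregate evaluation could be written (run_experiment is a hypothetical helper standing in for the training/evaluation code shown below; only numpy is assumed):

import numpy as np

def repeat_experiment(run_experiment, n_repeats=15, **params):
    # run the same configuration several times and summarise the spread
    scores = np.array([run_experiment(**params) for _ in range(n_repeats)])
    print(f"{params}: mean={scores.mean():.3f} std={scores.std():.3f}")
    return scores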
Here is the model I am using:
# assumed imports (not shown in the original snippet)
from tensorflow.keras import models, layers
from tensorflow.keras.layers import Conv1D, MaxPooling1D

# two 1D conv layers, dropout and pooling, then a dense classifier
model = models.Sequential()
model.add(Conv1D(filters=filt, kernel_size=kernel, activation='relu',
                 input_shape=(n_timesteps, n_features)))
model.add(Conv1D(filters=filt, kernel_size=kernel, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(layers.Flatten())
model.add(layers.Dense(100, activation='relu'))
model.add(layers.Dense(n_activities, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                    epochs=epochs, batch_size=batch_size, verbose=verbose)
# evaluate model
_, accuracy = model.evaluate(X_val, y_val, batch_size=batch_size, verbose=0)
I swept over filter sizes (16, 32), kernel sizes (3, 5), and batch sizes (16, 32), and I also checked both standardized and non-standardized data.
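As a rough illustration of that sweep (assuming the same hypothetical run_experiment wrapper as in the earlier sketch, and using sklearn's StandardScaler, fitted on the training windows only, for the standardized case):

from itertools import product
from sklearn.preprocessing import StandardScaler

def standardize_windows(X_train, X_val):
    # fit per-feature scaling on the training windows only, then apply to both sets
    scaler = StandardScaler().fit(X_train.reshape(-1, X_train.shape[-1]))
    scale = lambda X: scaler.transform(X.reshape(-1, X.shape[-1])).reshape(X.shape)
    return scale(X_train), scale(X_val)

for standardize, filt, kernel, batch_size in product([False, True], [16, 32], [3, 5], [16, 32]):
    Xtr, Xva = standardize_windows(X_train, X_val) if standardize else (X_train, X_val)
    # run_experiment is the hypothetical training/evaluation wrapper
    run_experiment(Xtr, Xva, filt=filt, kernel=kernel, batch_size=batch_size)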
I am also training on roughly 10,000+ data windows.
Does this indicate that something is wrong with my model / that it is not working properly? If so, is there any way to fix it?
Answer 0 (score: 1)