Validation accuracy is 0 in every epoch and training accuracy does not change

Date: 2019-04-27 02:56:49

Tags: machine-learning keras deep-learning data-analysis medical

I am working with a balanced CT-scan dataset. I built a custom network out of Keras layers, a SqueezeNet-style architecture.

I am training my model with this network, but the training accuracy stays flat and the validation accuracy is 0. I went through some related posts, where it was suggested to change relu to tanh, but that did not bring any improvement.

My predictions on new data are always the same class; a quick check of the predicted class distribution is sketched below.
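(A minimal sketch of that check: it compares the classes predicted by the model with the validation labels. It assumes numpy is available, that model_dw / val_images / val_labels are the objects defined further down, and that the labels are one-hot encoded; none of this is confirmed beyond what is shown in the code below.)

import numpy as np

# Compare predicted classes with the validation labels (assumes one-hot labels of shape (n, 2)).
preds = model_dw.predict(val_images, batch_size=128)
print('predicted class counts :', np.bincount(np.argmax(preds, axis=1), minlength=2))
print('validation class counts:', np.bincount(np.argmax(val_labels, axis=1), minlength=2))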

from keras.layers import (Input, Conv2D, MaxPooling2D, Dropout,
                          GlobalAveragePooling2D, Dense, concatenate)
from keras.models import Model
from keras.optimizers import Adam


def build_squeezenet(input_shape):

    input_layer = Input(shape=input_shape)  # (50, 50, 1)

    out = Conv2D(96, kernel_size=(3, 3), activation='relu')(input_layer)
    out = MaxPooling2D(pool_size=2, strides=None, padding='valid')(out)
    # only this one pooling stage, since the input images are small (50x50)

    out = fire_module(out, squeeze=16, expansion=64)
    out = fire_module(out, squeeze=16, expansion=64)

    out = fire_module(out, squeeze=32, expansion=128)
    out = fire_module(out, squeeze=32, expansion=128)

    out = fire_module(out, squeeze=48, expansion=192)
    out = fire_module(out, squeeze=48, expansion=192)

    out = fire_module(out, squeeze=64, expansion=256)
    out = fire_module(out, squeeze=64, expansion=256)

    out = Dropout(0.5)(out)

    out = Conv2D(2, kernel_size=(1, 1), padding='valid', activation='relu')(out)

    out = GlobalAveragePooling2D()(out)
    out = Dense(2, activation="softmax")(out)

    model = Model(input_layer, out, name='squeezenet')
    return model

def fire_module(input_layer, squeeze=16, expansion=32):

    fire_sq = Conv2D(squeeze, kernel_size=(1, 1), activation='relu')(input_layer)
    fire_exp1 = Conv2D(expansion, kernel_size=(1, 1), activation='relu', padding='valid')(fire_sq)
    fire_exp2 = Conv2D(expansion, kernel_size=(3, 3), activation='relu', padding='same')(fire_sq)
    out = concatenate([fire_exp1, fire_exp2], axis=3)

    return out
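
# Minimal shape check for fire_module (a sketch, not part of the original code; it assumes
# the Keras imports above). The two expansion branches are concatenated along the channel
# axis, so the output should carry 2 * expansion channels.
probe_in = Input(shape=(50, 50, 1))
probe_out = fire_module(probe_in, squeeze=16, expansion=64)
print(Model(probe_in, probe_out).output_shape)  # expected: (None, 50, 50, 128)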


## training of the model
# images, labels, val_images, val_labels are prepared elsewhere (CT-scan patches of shape (50, 50, 1))
model_dw = build_squeezenet(input_shape=(50, 50, 1))
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

model_dw.compile(
    optimizer=adam,
    loss='categorical_crossentropy',
    metrics=['accuracy'],  # needed so that acc / val_acc appear in the log below
)
model_dw.summary()
model_dw.fit(x=images,
             y=labels,
             batch_size=128,
             epochs=10,
             verbose=1,
             validation_split=0.2,           # ignored, because validation_data is also given
             validation_data=(val_images, val_labels),
             shuffle=True)


Train on 4156 samples, validate on 1040 samples
Epoch 1/10
4156/4156 [==============================] - 17s 4ms/step - loss: 0.6661 - acc: 0.6251 - val_loss: 0.9930 - val_acc: 0.0000e+00
Epoch 2/10
4156/4156 [==============================] - 14s 3ms/step - loss: 0.6620 - acc: 0.6251 - val_loss: 0.9439 - val_acc: 0.0000e+00
Epoch 3/10
4156/4156 [==============================] - 14s 3ms/step - loss: 0.6597 - acc: 0.6215 - val_loss: 0.9616 - val_acc: 0.0000e+00
Epoch 4/10
4156/4156 [==============================] - 14s 3ms/step - loss: 0.6443 - acc: 0.6251 - val_loss: 0.8945 - val_acc: 0.0000e+00
Epoch 5/10
4156/4156 [==============================] - 14s 3ms/step - loss: 0.6336 - acc: 0.6251 - val_loss: 0.9328 - val_acc: 0.0000e+00
Epoch 6/10
4156/4156 [==============================] - 14s 3ms/step - loss: 0.6384 - acc: 0.6251 - val_loss: 0.8628 - val_acc: 0.0000e+00
Epoch 7/10
4156/4156 [==============================] - 14s 3ms/step - loss: 0.6619 - acc: 0.6251 - val_loss: 0.9726 - val_acc: 0.0000e+00
Epoch 8/10
4156/4156 [==============================] - 14s 3ms/step - loss: 0.6608 - acc: 0.6251 - val_loss: 0.8752 - val_acc: 0.0000e+00
Epoch 9/10
4156/4156 [==============================] - 15s 3ms/step - loss: 0.6627 - acc: 0.6251 - val_loss: 1.0165 - val_acc: 0.0000e+00
Epoch 10/10
4156/4156 [==============================] - 15s 4ms/step - loss: 0.6622 - acc: 0.6251 - val_loss: 1.0553 - val_acc: 0.0000e+00

0 answers:

There are no answers yet.