I am working on a computer-vision problem: distinguishing forged signatures from genuine ones. The network takes two images as input.
To do this, I extract features from the 'block3_pool' layer of VGG-16 (output shape (None, 28, 28, 256)) and add a layer that computes, for each of the 256 filters, the square root of the summed absolute difference between the two images' feature maps.
However, the network does not learn: it reports the same training and validation loss at every epoch. Adjusting the learning rate does not help, and neither does changing the architecture. The two inputs are an anchor image and a data image, each of shape (224, 224, 3).
I trained it on a very small dataset expecting it to overfit, but the network fails to learn even after changing the dataset size.
The data is organised as 24 genuine and 24 forged signatures per user. The anchor array is built from one image chosen at random from the genuine set, and the data array is drawn at random from the mix of genuine and forged signatures. So for one user I have an anchor array (24 copies of the same sample) and a data array (24 images drawn from both genuine and forged samples); both have shape (24, 224, 224, 3).
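The sampling scheme described above can be sketched in NumPy as follows (a minimal sketch; the function name, RNG seeding, and label convention are my assumptions, not from the original code):

```python
import numpy as np

def build_pairs(genuine, forged, rng=None):
    """Sketch of the pairing scheme described above.

    genuine, forged: per-user arrays, e.g. shape (24, 224, 224, 3).
    Returns an anchor array (one randomly chosen genuine signature,
    repeated), a data array mixing genuine and forged signatures,
    and binary labels (1 = genuine pair, 0 = forged pair).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(genuine)  # 24 signatures per user
    idx = rng.integers(n)
    # repeat one genuine signature n times as the anchor
    anchor = np.repeat(genuine[idx:idx + 1], n, axis=0)
    # for each slot, randomly choose a genuine (1) or forged (0) sample
    labels = rng.integers(0, 2, size=n)
    mask = labels.astype(bool)[:, None, None, None]
    data = np.where(mask, genuine[rng.permutation(n)], forged[rng.permutation(n)])
    return anchor, data, labels.astype(np.float32)
```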
# Imports needed by the snippets below (Keras with the TF 1.x backend)
from keras import backend as K
from keras import optimizers
from keras.applications.vgg16 import VGG16
from keras.layers import Input, Dense, Lambda, concatenate
from keras.models import Model

# Model hyperparameters, using the SGD optimizer
epochs = 50
learning_rate = 0.1
decay = learning_rate / epochs
batch_size = 8
keep_prob = 0.8

# Function for the Lambda layer 'Layer_distance': per-filter square root
# of the absolute difference summed over the spatial grid
def root_diff(x):
    diff = K.sqrt(K.sum(K.abs(x[:, :, :, :, 0] - x[:, :, :, :, 1]), axis=(1, 2)))
    return diff

# Creates a frozen feature extractor from a pre-trained VGG-16,
# cut off at 'block3_pool' (output_dims is currently unused)
def base_model(input_dims=(224, 224, 3), output_dims=128):
    base_model = VGG16(include_top=False, weights='imagenet', input_shape=input_dims)
    for layer in base_model.layers:
        layer.trainable = False
    x = base_model.get_layer('block3_pool').output
    model = Model(inputs=base_model.input, outputs=x)
    return model
def siamese_model(anchor, data, label, anchor_valid, data_valid, label_valid,
                  input_shape=(224, 224, 3)):
    anchor_input = Input(input_shape)
    data_input = Input(input_shape)
    # ---------------------------- Model begins from here ----------------------------
    model_vgg = base_model(input_dims=input_shape)  # shared VGG-16 feature extractor
    encodings_anchor = model_vgg(anchor_input)
    encodings_data = model_vgg(data_input)
    layer_expand_dims = Lambda(lambda x: K.expand_dims(x, axis=4))
    anchor_expanded = layer_expand_dims(encodings_anchor)
    data_expanded = layer_expand_dims(encodings_data)
    encodings = concatenate([anchor_expanded, data_expanded], axis=4)  # shape (None, 28, 28, 256, 2)
    Layer_distance = Lambda(root_diff)(encodings)  # should give a vector of shape (256,)
    dense_1 = Dense(256, activation=None, kernel_initializer='glorot_uniform',
                    bias_initializer='zeros')(Layer_distance)
    prediction = Dense(1, activation='sigmoid', kernel_initializer='glorot_uniform')(dense_1)
    # Connect the inputs with the outputs
    siamese_net = Model(inputs=[anchor_input, data_input], outputs=prediction)
    print(siamese_net.summary())
    for layer in siamese_net.layers:
        print("Input shape: " + str(layer.input_shape) + ". Output shape: " + str(layer.output_shape))
    # note: uses a fixed decay of 1e-9 here, not the `decay` computed above
    sgd = optimizers.SGD(lr=learning_rate, decay=1e-9, momentum=0.9, nesterov=True)
    siamese_net.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
    history = siamese_net.fit(x=[anchor, data], y=label, batch_size=batch_size, epochs=epochs,
                              validation_data=([anchor_valid, data_valid], label_valid))
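For reference, here is a NumPy sketch of what the `root_diff` Lambda computes on the two stacked encodings. The `eps` term is my addition, not in the original code; without it the square root has an unbounded gradient wherever the two encodings coincide, which is one plausible suspect for the frozen loss:

```python
import numpy as np

def root_diff_np(anchor_feats, data_feats, eps=1e-8):
    """NumPy equivalent of the `root_diff` Lambda above: per-filter
    square root of the absolute difference summed over the 28x28
    spatial grid. Inputs have shape (batch, 28, 28, 256); the output
    has shape (batch, 256). The eps term is an assumption added here
    to keep the sqrt differentiable when the encodings are equal.
    """
    diff = np.abs(anchor_feats - data_feats)   # (batch, 28, 28, 256)
    summed = diff.sum(axis=(1, 2))             # (batch, 256)
    return np.sqrt(summed + eps)
```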
Model summary (siamese_net):
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
model_1 (Model) (None, 28, 28, 256) 1735488 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 28, 28, 256, 1) 0 model_1[1][0]
model_1[2][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 28, 28, 256, 2) 0 lambda_1[0][0]
lambda_1[1][0]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, 256) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 256) 65792 lambda_2[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 257 dense_1[0][0]
==================================================================================================
Total params: 1,801,537
Trainable params: 66,049
Non-trainable params: 1,735,488
__________________________________________________________________________________________________
Training results:
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 48 samples, validate on 48 samples
Epoch 1/50
2019-04-21 06:10:00.354542: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
48/48 [==============================] - 4s 90ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 2/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
[epochs 3-49 omitted: every epoch reports the identical loss and accuracy values]
Epoch 50/50
48/48 [==============================] - 1s 19ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Saved model to disk