I need to process some of the layers differently and perform an OR operation on them. I found a way to do it: I created a Lambda layer and processed the data with keras.backend.any. I'm also splitting the tensor first, because I need to apply the logical OR to two separate groups.
def logical_or_layer(x):
    """Apply a logical OR over the incoming tensor."""
    import keras.backend
    # softsign outputs lie in (-1, 1); sign() maps them to {-1, 0, 1}
    aux_array = keras.backend.sign(x)
    # relu() clamps the negatives, normalizing the values to {0, 1}
    aux_array = keras.backend.relu(aux_array)
    # OR operation -- note: with no axis argument, any() reduces over ALL
    # axes (including the batch axis) and returns a 0-dim scalar tensor
    aux_array = keras.backend.any(aux_array)
    # cast the boolean True/False back to 1.0/0.0
    aux_array = keras.backend.cast(aux_array, dtype='float32')
    return aux_array
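A quick check (my own illustration, assuming a TensorFlow backend) of what any() does here, since it turns out to be the crux of the error below:

import numpy as np
from keras import backend as K

x = K.constant(np.array([[1., 0., 0.],
                         [0., 0., 0.]]))              # shape (2, 3)
print(K.int_shape(K.any(x)))                          # () -- scalar, batch axis gone
print(K.int_shape(K.any(x, axis=-1, keepdims=True)))  # (2, 1) -- batch preserved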
Then I build the layers like this:
#this is the input tensor
inputs = Input(shape=(inputSize,))
#this is the Neurule layer
x = Dense(neurulesQt, activation='softsign')(inputs)
#after each neurule layer, the outputs need to be put into SIGNUM (-1 or 1)
x = Lambda(signumTransform, output_shape=lambda x:x, name='signumAfterNeurules')(x)
#separating into 2 (2 possible outputs)
layer_split0 = Lambda( lambda x: x[:, :end_output0], output_shape=(11, ), name='layer_split0')(x)
layer_split1 = Lambda( lambda x: x[:, start_output1:end_output1], output_shape=(9,), name='layer_split1')(x)
#this is the OR layer
y_0 = Lambda(logical_or_layer, output_shape=(1,), name='or0')(layer_split0)
y_1 = Lambda(logical_or_layer, output_shape=(1,), name='or1')(layer_split1)
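Note that signumTransform is not defined anywhere in the post; a plausible minimal sketch, assuming it should turn each softsign activation into strictly -1 or +1 (treating an exact 0 as +1 is my assumption):

def signumTransform(x):
    """Hypothetical helper: map each activation to -1 or +1."""
    import keras.backend
    # sign() yields -1, 0 or 1; shift any exact zeros up to +1
    signed = keras.backend.sign(x)
    return signed + keras.backend.cast(keras.backend.equal(signed, 0), 'float32')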
FYI: a neurule is a neuron built from an IF-THEN rule; this project works with neurules trained from truth tables, so they represent expert knowledge.
Now, when I try to join the split layers back together like this:
y = concatenate([y_0,y_1])
this error shows up:
ValueError: Can't concatenate scalars (use tf.stack instead) for 'concatenate_32/concat' (op: 'ConcatV2') with input shapes: [], [], [].
So, as the message suggests, I used tf.stack (through the Keras backend):
y = keras.backend.stack([y_0, y_1])
But then it can no longer be used as the output of the model:
model = Model(inputs=inputs, outputs=y)
This error appears:
ValueError: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found: Tensor("stack_14:0", shape=(2,), dtype=float32)
Checking with keras.backend.is_keras_tensor(y) returns False, whereas the output of every other layer returns True.
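A minimal sketch (my own illustration, not from the post) of why this check fails: a tensor produced by calling a backend op directly carries no Keras layer metadata, while the same op wrapped in a Lambda does:

from keras import backend as K
from keras.layers import Input, Lambda

a = Input(shape=(1,))
b = Input(shape=(1,))

raw = K.stack([a, b])                                      # plain backend op
wrapped = Lambda(lambda t: K.stack([t[0], t[1]]))([a, b])  # same op as a Layer

print(K.is_keras_tensor(raw))      # False -- no layer metadata, rejected by Model
print(K.is_keras_tensor(wrapped))  # True  -- valid as a Model output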
How do I join these layers correctly?
EDIT: Following @today's answer, I was able to create a new Lambda layer with stack wrapped inside it. But the output shape comes out modified: it should be (None, 2), and instead it is (2, None, 1). This is the output of model.summary():
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_90 (InputLayer)           (None, 24)           0
__________________________________________________________________________________________________
dense_90 (Dense)                (None, 20)           500         input_90[0][0]
__________________________________________________________________________________________________
signumAfterNeurules (Lambda)    (None, 20)           0           dense_90[0][0]
__________________________________________________________________________________________________
layer_split0 (Lambda)           (None, 11)           0           signumAfterNeurules[0][0]
__________________________________________________________________________________________________
layer_split1 (Lambda)           (None, 9)            0           signumAfterNeurules[0][0]
__________________________________________________________________________________________________
or0 (Lambda)                    (None, 1)            0           layer_split0[0][0]
__________________________________________________________________________________________________
or1 (Lambda)                    (None, 1)            0           layer_split1[0][0]
__________________________________________________________________________________________________
output (Lambda)                 (2, None, 1)         0           or0[0][0]
                                                                 or1[0][0]
==================================================================================================
Total params: 500
Trainable params: 0
Non-trainable params: 500
__________________________________________________________________________________________________
How should I define output_shape in these layers so that the batch dimension is preserved in the end?
EDIT2: Following @today's hint, I did the following:
#this is the input tensor
inputs = Input(shape=(inputSize,))
#this is the Neurule layer
x = Dense(neurulesQt, activation='softsign')(inputs)
#after each neuron layer, the outputs need to be put into SIGNUM (-1 or 1)
x = Lambda(signumTransform, output_shape=lambda x:x, name='signumAfterNeurules')(x)
#separating into 2 (2 possible outputs)
layer_split0 = Lambda( lambda x: x[:, :end_output0], output_shape=[11], name='layer_split0')(x)
layer_split1 = Lambda( lambda x: x[:, start_output1:end_output1], output_shape=[9], name='layer_split1')(x)
#this is the OR layer
y_0 = Lambda(logical_or_layer, output_shape=(1,), name='or0')(layer_split0)
y_1 = Lambda(logical_or_layer, output_shape=(1,), name='or1')(layer_split1)
y = Lambda(lambda x: K.stack([x[0], x[1]]),output_shape=(2,), name="output")([y_0, y_1])
Now model.summary() seems to work as expected:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 24)           0
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 20)           500         input_1[0][0]
__________________________________________________________________________________________________
signumAfterNeurules (Lambda)    (None, 20)           0           dense_1[0][0]
__________________________________________________________________________________________________
layer_split0 (Lambda)           (None, 11)           0           signumAfterNeurules[0][0]
__________________________________________________________________________________________________
layer_split1 (Lambda)           (None, 9)            0           signumAfterNeurules[0][0]
__________________________________________________________________________________________________
or0 (Lambda)                    (None, 1)            0           layer_split0[0][0]
__________________________________________________________________________________________________
or1 (Lambda)                    (None, 1)            0           layer_split1[0][0]
__________________________________________________________________________________________________
output (Lambda)                 (None, 2)            0           or0[0][0]
                                                                 or1[0][0]
==================================================================================================
Total params: 500
Trainable params: 0
Non-trainable params: 500
__________________________________________________________________________________________________
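One caveat worth noting (my observation, not from the original post): Lambda's output_shape argument only declares the static shape that summary() reports; it does not rearrange the actual tensor, and with the default axis=0, K.stack still places the stacking dimension first at runtime. A quick smoke test makes the real layout visible (inputSize, inputs and y come from the snippets above):

import numpy as np

model = Model(inputs=inputs, outputs=y)
pred = model.predict(np.random.rand(5, inputSize))
print(pred.shape)  # (5, 2) only if the batch axis genuinely comes first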
Answer (score: 2):
Wrap K.stack inside a Lambda layer, like this:
from keras import backend as K
y = Lambda(lambda x: K.stack([x[0], x[1]]))([y_0, y_1])
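If the stacked output also needs the batch dimension first at runtime, a variation of the same idea (my sketch, assuming stacking along axis=1 and squeezing away the trailing singleton; y_0 and y_1 are the (None, 1) tensors from the question) would be:

from keras import backend as K
from keras.layers import Lambda

# Stack the two (None, 1) tensors side by side and drop the trailing 1,
# so the runtime shape is (batch, 2) rather than (2, batch, 1).
y = Lambda(lambda t: K.squeeze(K.stack([t[0], t[1]], axis=1), axis=-1),
           output_shape=(2,), name='output')([y_0, y_1])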