This is the network architecture I am working with; the data is tabular and structured.
On the left we have features that are continuous, and on the right we can have 'N' modifiers. Each modifier has a modifier_type (which is categorical) and some statistics (which are continuous features).
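To make the layout concrete, a batch of this data might look like the following (a hypothetical sketch; the batch size and the number of modifiers are made up, the feature counts match the code below):

```python
import numpy as np

batch_size, n_modifiers = 4, 3  # hypothetical sizes

# Left side: continuous per-sample features, one row per sample
abilities = np.random.rand(batch_size, 9)  # 9 continuous features

# Right side: N modifiers per sample, each with a categorical type id
# (0..9) and one continuous statistic (duration)
modifier_type = np.random.randint(0, 10, size=(batch_size, n_modifiers))
statistics = np.random.rand(batch_size, n_modifiers, 1)

print(abilities.shape, modifier_type.shape, statistics.shape)
# (4, 9) (4, 3) (4, 3, 1)
```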
If there is only a single modifier, the following code works just fine:
import keras.backend as K
from keras.models import Model
from keras.layers import Input, Embedding, concatenate
from keras.layers import Dense, GlobalMaxPooling1D, Reshape
from keras.optimizers import Adam
K.clear_session()
# Using embeddings for categorical features
modifier_type_embedding_in=[]
modifier_type_embedding_out=[]
# sample categorical features
categorical_features = ['modifier_type']
modifier_input_ = Input(shape=(1,), name='modifier_type_in')
# Let's assume 10 unique type of modifiers and let's have embedding dimension as 6
modifier_output_ = Embedding(input_dim=10, output_dim=6, name='modifier_type')(modifier_input_)
modifier_output_ = Reshape(target_shape=(6,))(modifier_output_)
modifier_type_embedding_in.append(modifier_input_)
modifier_type_embedding_out.append(modifier_output_)
# sample continuous features
statistics = ['duration']
statistics_inputs =[Input(shape=(len(statistics),), name='statistics')] # Input(shape=(1,))
# sample continuous features
abilities = ['buyback_cost', 'cooldown', 'number_of_deaths', 'ability', 'teleport', 'team', 'level', 'max_mana', 'intelligence']
abilities_inputs=[Input(shape=(len(abilities),), name='abilities')] # Input(shape=(9,))
concat = concatenate(modifier_type_embedding_out + statistics_inputs)
FC_relu = Dense(128, activation='relu', name='fc_relu_1')(concat)
FC_relu = Dense(128, activation='relu', name='fc_relu_2')(FC_relu)
model = concatenate(abilities_inputs + [FC_relu])
model = Dense(64, activation='relu', name='fc_relu_3')(model)
model_out = Dense(1, activation='sigmoid', name='fc_sigmoid')(model)
model_in = abilities_inputs + modifier_type_embedding_in + statistics_inputs
model = Model(inputs=model_in, outputs=model_out)
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=2e-05, decay=1e-3), metrics=['accuracy'])
However, when compiling the model for 'N' modifiers, I get the error below. These are the changes I made to the code:
modifier_input_ = Input(shape=(None, 1,), name='modifier_type_in')
statistics_inputs =[Input(shape=(None, len(statistics),), name='statistics')] # Input(shape=(None, 1,))
FC_relu = Dense(128, activation='relu', name='fc_relu_2')(FC_relu)
max_pool = GlobalMaxPooling1D()(FC_relu)
model = concatenate(abilities_inputs + [max_pool])
This is what I get:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-7703088b1d24> in <module>
22 abilities_inputs=[Input(shape=(len(abilities),), name='abilities')] # Input(shape=(9,))
23
---> 24 concat = concatenate(modifier_type_embedding_out + statistics_inputs)
25 FC_relu = Dense(128, activation='relu', name='fc_relu_1')(concat)
26 FC_relu = Dense(128, activation='relu', name='fc_relu_2')(FC_relu)
e:\Miniconda3\lib\site-packages\keras\layers\merge.py in concatenate(inputs, axis, **kwargs)
647 A tensor, the concatenation of the inputs alongside axis `axis`.
648 """
--> 649 return Concatenate(axis=axis, **kwargs)(inputs)
650
651
e:\Miniconda3\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
423 'You can build it manually via: '
424 '`layer.build(batch_input_shape)`')
--> 425 self.build(unpack_singleton(input_shapes))
426 self.built = True
427
e:\Miniconda3\lib\site-packages\keras\layers\merge.py in build(self, input_shape)
360 'inputs with matching shapes '
361 'except for the concat axis. '
--> 362 'Got inputs shapes: %s' % (input_shape))
363
364 def _merge_function(self, inputs):
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 6), (None, None, 1)]
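The error comes down to tensor ranks: the embedding output is still reshaped to a rank-2 tensor of shape `(None, 6)`, while the new statistics input is rank-3 with shape `(None, None, 1)`, and `Concatenate` requires all inputs to have the same rank. The same constraint can be reproduced with plain NumPy (a hypothetical sketch with fixed sizes standing in for the `None` dimensions):

```python
import numpy as np

emb = np.zeros((4, 6))       # rank 2, like the Reshape'd embedding output
stats = np.zeros((4, 3, 1))  # rank 3, like Input(shape=(None, 1))

try:
    np.concatenate([emb, stats], axis=-1)
except ValueError as e:
    print("mismatch:", e)  # inputs must have the same number of dimensions

# Keeping the embedding output rank-3 (i.e. dropping the Reshape and
# using Input(shape=(None,))) aligns the ranks:
emb3 = np.zeros((4, 3, 6))  # (batch, n_modifiers, embedding_dim)
merged = np.concatenate([emb3, stats], axis=-1)
print(merged.shape)  # (4, 3, 7)
```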
How can I use embedding layers in a neural network that is meant to accept variable-length input features?
Answer 0 (score: 0)

The answer is:
import keras.backend as K
from keras.models import Model
from keras.layers import Input, Embedding, concatenate
from keras.layers import Dense, GlobalMaxPooling1D, Reshape
from keras.optimizers import Adam
K.clear_session()
# Using embeddings for categorical features
modifier_type_embedding_in=[]
modifier_type_embedding_out=[]
# sample categorical features
categorical_features = ['modifier_type']
modifier_input_ = Input(shape=(None,), name='modifier_type_in')
# Let's assume 10 unique type of modifiers and let's have embedding dimension as 6
modifier_output_ = Embedding(input_dim=10, output_dim=6, name='modifier_type')(modifier_input_)
modifier_type_embedding_in.append(modifier_input_)
modifier_type_embedding_out.append(modifier_output_)
# sample continuous features
statistics = ['duration']
statistics_inputs =[Input(shape=(None, len(statistics),), name='statistics')] # Input(shape=(None, 1))
# sample continuous features
abilities = ['buyback_cost', 'cooldown', 'number_of_deaths', 'ability', 'teleport', 'team', 'level', 'max_mana', 'intelligence']
abilities_inputs=[Input(shape=(len(abilities),), name='abilities')] # Input(shape=(9,))
concat = concatenate(modifier_type_embedding_out + statistics_inputs)
FC_relu = Dense(128, activation='relu', name='fc_relu_1')(concat)
FC_relu = Dense(128, activation='relu', name='fc_relu_2')(FC_relu)
max_pool = GlobalMaxPooling1D()(FC_relu)
model = concatenate(abilities_inputs + [max_pool])
model = Dense(64, activation='relu', name='fc_relu_3')(model)
model_out = Dense(1, activation='sigmoid', name='fc_sigmoid')(model)
model_in = abilities_inputs + modifier_type_embedding_in + statistics_inputs
model = Model(inputs=model_in, outputs=model_out)
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=2e-05, decay=1e-3), metrics=['accuracy'])
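The key piece is `GlobalMaxPooling1D`: it takes the maximum over the timestep (modifier) axis, collapsing `(batch, N, 128)` to `(batch, 128)` for any `N`, which is what lets the network accept a variable number of modifiers. A NumPy sketch of that behaviour (samples within one batch still need to be padded to a common length; the lengths 3 and 7 are made up):

```python
import numpy as np

def global_max_pool_1d(x):
    """Max over the timestep axis, like Keras GlobalMaxPooling1D."""
    return x.max(axis=1)

batch_a = np.random.rand(4, 3, 128)  # batch with 3 modifiers per sample
batch_b = np.random.rand(4, 7, 128)  # batch with 7 modifiers per sample

print(global_max_pool_1d(batch_a).shape)  # (4, 128)
print(global_max_pool_1d(batch_b).shape)  # (4, 128)
```

Because both batches end up with the same fixed-size representation, the downstream `Dense` layers never see the variable dimension.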