Consider an example dataset with 6 columns and 10 rows. 3 of the columns are numeric and the remaining 3 are categorical. The categorical columns are converted into a multi-hot encoded array of size 10x3. The target column I want to predict is also categorical and can take 3 possible values; that column is one-hot encoded.
Now I want to use this multi-hot encoded array as the input to an Embedding layer, and the Embedding layer should output 2 units. I then want to feed the 3 numeric columns of the dataset together with the 2 output units of the Embedding layer, 5 units in total, into a hidden layer.
This is where I am stuck. I don't know how to bridge the Embedding layer with the other feature columns in tensorflow keras, nor how to pass the inputs to the Embedding layer and to the other columns respectively.
I have googled and tried the code below, but I still get an error. I suspect there is no Merge layer in the tf.keras package.
Any help on this would be greatly appreciated.
import tensorflow as tf
from tensorflow import keras
import numpy as np
num_data = np.random.random(size=(10,3))
multi_hot_encode_data = np.random.randint(0,2, 30).reshape(10,3)
target = np.eye(3)[np.random.randint(0,3, 10)]
model = keras.Sequential()
model.add(keras.layers.Embedding(input_dim=multi_hot_encode_data.shape[1], output_dim=2))
model.add(keras.layers.Dense(3, activation=tf.nn.relu, input_shape=(num_data.shape[1],)))
model.add(keras.layers.Dense(3, activation=tf.nn.softmax))
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
              loss=keras.losses.categorical_crossentropy,
              metrics=[keras.metrics.categorical_accuracy])
#model.fit([multi_hot_encode_data, num_data], target) # I get error here
My network structure would be:
multi-hot-encode-input      num_data_input
          |                       |
          |                       |
          |                       |
   embedding_layer                |
          |                       |
          |                       |
           \                     /
            \                   /
           dense_hidden_layer
                    |
                    |
              output_layer
Answer (score: 3)
This "merge" pattern is not compatible with a Sequential model. I think it would be easier to use the functional Keras API with keras.Model instead of keras.Sequential (short explanation of main differences):
import tensorflow as tf
from tensorflow import keras
import numpy as np
num_data = np.random.random(size=(10,3))
multi_hot_encode_data = np.random.randint(0,2, 30).reshape(10,3)
target = np.eye(3)[np.random.randint(0,3, 10)]
# Use Input layers, specify input shape (dimensions except first)
inp_multi_hot = keras.layers.Input(shape=(multi_hot_encode_data.shape[1],))
inp_num_data = keras.layers.Input(shape=(num_data.shape[1],))
# Bind multi_hot to the embedding layer
emb = keras.layers.Embedding(input_dim=multi_hot_encode_data.shape[1], output_dim=2)(inp_multi_hot)
# You also need to flatten the embedded output of shape (?, 3, 2) to (?, 6) -
# otherwise it cannot be concatenated with inp_num_data
flatten = keras.layers.Flatten()(emb)
# Concatenate two layers
conc = keras.layers.Concatenate()([flatten, inp_num_data])
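# Hidden dense layer on top of the concatenated features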
dense1 = keras.layers.Dense(3, activation=tf.nn.relu)(conc)
# Creating output layer
out = keras.layers.Dense(3, activation=tf.nn.softmax)(dense1)
model = keras.Model(inputs=[inp_multi_hot, inp_num_data], outputs=out)
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
              loss=keras.losses.categorical_crossentropy,
              metrics=[keras.metrics.categorical_accuracy])
The output of model.summary():
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_5 (InputLayer) (None, 3) 0
__________________________________________________________________________________________________
embedding_2 (Embedding) (None, 3, 2) 6 input_5[0][0]
__________________________________________________________________________________________________
flatten (Flatten) (None, 6) 0 embedding_2[0][0]
__________________________________________________________________________________________________
input_6 (InputLayer) (None, 3) 0
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 9) 0 flatten[0][0]
input_6[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 3) 30 concatenate_2[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 3) 12 dense[0][0]
==================================================================================================
Total params: 48
Trainable params: 48
Non-trainable params: 0
__________________________________________________________________________________________________
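The parameter counts add up as expected: the Embedding layer has 3 x 2 = 6 weights, the hidden Dense layer has (9 + 1) x 3 = 30 (9 concatenated inputs plus a bias per unit), and the output Dense layer has (3 + 1) x 3 = 12, for 48 trainable parameters in total.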
Fitting works fine as well:
model.fit([multi_hot_encode_data, num_data], target)
Epoch 1/1
10/10 [==============================] - 0s 34ms/step - loss: 1.0623 - categorical_accuracy: 0.3000
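Once trained, predictions follow the same two-input pattern. A minimal sketch of one way to use the fitted model, continuing with the arrays defined above:
probs = model.predict([multi_hot_encode_data, num_data])  # class probabilities, shape (10, 3)
predicted_class = probs.argmax(axis=1)                    # most likely class index per row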