Programming a multi-input neural network architecture with Keras

Date: 2018-06-15 13:00:10

Tags: python tensorflow machine-learning neural-network keras

I want to program a neural network using the Keras library. A data set is split into a random number of subsets (1-100); unused subsets are set to zero. Each subset consists of 2 * 4 + 1 binary input values. The architecture should look like this (the weights of all subset networks should be shared):

.   InA1(4) InB1(4)   _
.       \     /        \
.     FCNA  FCNB       |
.         \ /          |
.      Concatenate     |
.          |           \ 100x (InA2, InB2, InC2, InA3, ...)
.         FCN          /
.InC(1)    |           |
.     \   /            |
.      \ /            _/
.  Concatenate
.       |
.      FCN
.       |
.     Out(1)

I have looked at many tutorials and examples, but I have not found the right way to implement this network. This is what I have tried so far:

from keras.layers import Input, Dense, Concatenate
from keras.models import Sequential, Model

# define arrays for training set input
InA = []
InB = []
InC = []
for i in range(100):
    InA.append( Input(shape=(4,), dtype='int32') )
    InB.append( Input(shape=(4,), dtype='int32') )
    InC.append( Input(shape=(1,), dtype='int32') )

NetA = Sequential()
NetA.add(Dense(4, input_shape=(4,), activation="relu"))
NetA.add(Dense(3, activation="relu"))

NetB = Sequential()
NetB.add(Dense(4, input_shape=(4,), activation="relu"))
NetB.add(Dense(3, activation="relu"))

NetMergeAB = Sequential()
NetMergeAB.add(Dense(1, input_shape=(3,2), activation="relu"))

# merging all subsample networks of InA, InB
MergeList = []
for i in range(100):
    NetConcat = Concatenate()( [NetA(InA[i]), NetB(InB[i])] )
    MergedNode = NetMergeAB(NetConcat)
    MergeList.append(MergedNode)
    MergeList.append(InC[i])

# merging also InC
FullConcat = Concatenate()(MergeList)

# put in fully connected net
ConcatNet = Sequential()
ConcatNet.add(Dense(10, input_shape=(2, 100), activation="relu"))
ConcatNet.add(Dense(6, activation="relu"))
ConcatNet.add(Dense(4, activation="relu"))
ConcatNet.add(Dense(1, activation="relu"))

Output = ConcatNet(FullConcat)

The problem is that either I get a "not a Tensor" error or it does not work at all. Does anyone know how to solve this properly?

3 Answers:

Answer 0 (score: 1)

You can easily implement that network architecture with the functional API, without using Sequential at all:

from keras.layers import Input, Dense, concatenate
from keras.models import Model

InA = Input(shape=(4,), dtype='float32')
InB = Input(shape=(4,), dtype='float32')
InC = Input(shape=(1,), dtype='float32')  # InC is a single value, not 4

netA = Dense(4, activation="relu")(InA)
netA = Dense(3, activation="relu")(netA)

netB = Dense(4, activation="relu")(InB)
netB = Dense(3, activation="relu")(netB)

netMergeAB = concatenate([netA, netB])
netMergeAB = Dense(1, activation="relu")(netMergeAB)

fullConcat = concatenate([netMergeAB, InC])

out = Dense(10, activation="relu")(fullConcat)
out = Dense(6, activation="relu")(out)
out = Dense(4, activation="relu")(out)
out = Dense(1, activation="relu")(out)

model = Model([InA, InB, InC], out)

You may need to tweak it a bit, but the overall idea should be clear.
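For completeness, a minimal sketch of how such a functional model could be compiled and trained on dummy data; the optimizer, loss, sigmoid output, and sample count are illustrative assumptions, not part of the answer above (shown here with the tensorflow.keras namespace):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

# Inputs mirroring the answer above; InC is a single binary flag
InA = Input(shape=(4,))
InB = Input(shape=(4,))
InC = Input(shape=(1,))

netA = Dense(4, activation="relu")(InA)
netA = Dense(3, activation="relu")(netA)

netB = Dense(4, activation="relu")(InB)
netB = Dense(3, activation="relu")(netB)

merged = Dense(1, activation="relu")(concatenate([netA, netB]))
full = concatenate([merged, InC])
out = Dense(1, activation="sigmoid")(full)  # sigmoid assumed for a binary target

model = Model([InA, InB, InC], out)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Random binary dummy data, 32 samples (for illustration only)
a = np.random.randint(0, 2, (32, 4)).astype("float32")
b = np.random.randint(0, 2, (32, 4)).astype("float32")
c = np.random.randint(0, 2, (32, 1)).astype("float32")
y = np.random.randint(0, 2, (32, 1)).astype("float32")
model.fit([a, b, c], y, epochs=1, verbose=0)
```

The binary inputs are cast to float32 here because Dense layers expect floating-point tensors; feeding dtype='int32' inputs straight into Dense layers is one common source of tensor-type errors.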

Answer 1 (score: 0)

I have changed my code; I hope it is clearer now:

from keras.layers import Input, Dense, concatenate

NetMergeABC = []
for i in range(100):
    ActInA = Input(shape=(4,), dtype='float32')
    ActInB = Input(shape=(4,), dtype='float32')
    ActInC = Input(shape=(1,), dtype='float32')

    NetA = Dense(4, activation="relu")(ActInA)
    NetA = Dense(3, activation="relu")(NetA)

    NetB = Dense(4, activation="relu")(ActInB)
    NetB = Dense(3, activation="relu")(NetB)

    NetAB = concatenate([NetA, NetB])
    NetAB = Dense(1, activation="relu")(NetAB)
    NetMergeABC.append(NetAB)
    NetMergeABC.append(ActInC)

NetABC = concatenate(NetMergeABC)
NetABC = Dense(10, activation="relu")(NetABC)
NetABC = Dense(6, activation="relu")(NetABC)
NetABC = Dense(4, activation="relu")(NetABC)
NetABC = Dense(1, activation="relu")(NetABC)

The problem now is that (I guess) the weights of NetA/B/C 1-100 are not shared.

Answer 2 (score: 0)

Using the code from the question author's answer:

from keras.layers import Input, Dense, concatenate
from keras.models import Model

ActInA = Input(shape=(4,), dtype='float32')
ActInB = Input(shape=(4,), dtype='float32')
ActInC = Input(shape=(1,), dtype='float32')

NetA = Dense(4, activation="relu")(ActInA)
NetA = Dense(3, activation="relu")(NetA)

NetB = Dense(4, activation="relu")(ActInB)
NetB = Dense(3, activation="relu")(NetB)

NetAB = concatenate([NetA, NetB])
NetAB = Dense(1, activation="relu")(NetAB)

Now we build a model for this subnetwork:

mymodel = Model([ActInA, ActInB], NetAB)

Now comes the important part from the Keras documentation:

> All models are callable, just like layers

This means you can simply do something like this:

NetMergeABC = []
for i in range(100):
    NetMergeABC.append(mymodel([ActInA_array[i], ActInB_array[i]]))

Because you reuse the layers, the weights are shared.
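Putting the pieces together, a sketch of the full shared-weight architecture from the question could look like the following (the subset count is reduced from 100 to 5 to keep the example small; the loop works identically for 100, and all input dtypes are left at the float32 default so the tensors can be concatenated):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

N_SUBSETS = 5  # the question uses 100; reduced here for brevity

# Shared subset model: (InA, InB) -> one scalar
ActInA = Input(shape=(4,))
ActInB = Input(shape=(4,))
netA = Dense(4, activation="relu")(ActInA)
netA = Dense(3, activation="relu")(netA)
netB = Dense(4, activation="relu")(ActInB)
netB = Dense(3, activation="relu")(netB)
netAB = Dense(1, activation="relu")(concatenate([netA, netB]))
subset_model = Model([ActInA, ActInB], netAB)

# Call the same model once per subset: each call reuses the same layers
inputs, merged = [], []
for _ in range(N_SUBSETS):
    inA = Input(shape=(4,))
    inB = Input(shape=(4,))
    inC = Input(shape=(1,))
    inputs += [inA, inB, inC]
    merged.append(subset_model([inA, inB]))
    merged.append(inC)

x = concatenate(merged)
x = Dense(10, activation="relu")(x)
x = Dense(6, activation="relu")(x)
x = Dense(4, activation="relu")(x)
out = Dense(1, activation="sigmoid")(x)  # sigmoid assumed for a binary target

full_model = Model(inputs, out)
```

Because subset_model is a single Model object called once per subset, every call routes through the same Dense layers, which is exactly the weight sharing the question asks for.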