Stacking vs. non-stacking architectures in deep learning

Asked: 2019-04-16 09:04:49

Tags: machine-learning keras deep-learning keras-layer tf.keras

In Keras (or deep learning in general), what is the difference between stacking and non-stacking architectures? Can anyone give a simple example of a non-stacking architecture?

There are many examples and tutorials showing how to stack layers in Keras, but essentially none showing the opposite of stacking.

1 Answer:

Answer 0 (score: 1)

Deep neural networks are, by definition, stacks of neural network layers. You can think of a network as a series of operations, like a flowchart. For example, in object detection networks, series of convolution layers (called feature pyramids) are used to extract regional features from images. Just like a flowchart, however, you can create branches and route the data however you prefer. Consider the following code snippets:

Just a stack.

from tensorflow.keras.layers import Input, Dense

input_layer = Input(shape=(256,256,3))
x = Dense(666)(input_layer)
x = Dense(666)(x)
# sigmoid, not softmax: softmax over a single unit always outputs 1
output_layer = Dense(1, activation='sigmoid')(x)

Something more interesting.

from tensorflow.keras.layers import Input, Dense, concatenate

input_layer = Input(shape=(256,256,3))
x = Dense(666)(input_layer)

x_left = Dense(666)(x)  # gets inputs from x
x_left = Dense(666)(x_left)

x_right = Dense(666)(x)  # also gets inputs from x
x_right = Dense(666)(x_right)

x = concatenate([x_left, x_right], axis=-1)  # merge the two branches

x = Dense(666)(x)
# sigmoid, not softmax: softmax over a single unit always outputs 1
output_layer = Dense(1, activation='sigmoid')(x)
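To make the branch-and-merge idea concrete, here is a minimal runnable sketch that wraps such a graph in a `Model` so you can inspect its shapes. It assumes `tensorflow.keras`; the input size and layer widths are shrunk to hypothetical small values just to keep it light.

```python
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

inputs = Input(shape=(4,))                 # hypothetical small input
x = Dense(8, activation='relu')(inputs)

x_left = Dense(8, activation='relu')(x)    # left branch reads from x
x_right = Dense(8, activation='relu')(x)   # right branch also reads from x

merged = concatenate([x_left, x_right], axis=-1)  # width 8 + 8 = 16
outputs = Dense(1, activation='sigmoid')(merged)

model = Model(inputs, outputs)
model.summary()  # the summary shows the two branches feeding Concatenate
```

Calling `model.summary()` makes the non-stacked topology visible: unlike a plain stack, the Concatenate layer lists two inbound layers.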

Does that answer your question at all?

Also, this graphic might help; it is a basic feature pyramid network layout I found on Google that does a decent job of depicting the idea: feature pyramids
