I searched Stack Overflow for this question, but none of the answers cleared up my confusion.
I'm doing univariate prediction, and I manually wrote a dense layer on top of the LSTM:
weight = tf.Variable(tf.truncated_normal([config.lstm_size, config.input_size]))
bias = tf.Variable(tf.constant(0.1, shape=[config.input_size]))
prediction = tf.matmul(last, weight) + bias
Then I tried adding an activation to the result:
weight = tf.Variable(tf.truncated_normal([config.lstm_size, config.input_size]))
bias = tf.Variable(tf.constant(0.1, shape=[config.input_size]))
prediction = tf.nn.tanh(tf.matmul(last, weight) + bias)
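For reference, a dense layer is nothing more than matmul + bias + activation, which is exactly what the snippet above computes. A minimal NumPy sketch of the same computation (the sizes are hypothetical stand-ins for config.lstm_size and config.input_size):

```python
import numpy as np

# Hypothetical sizes standing in for config.lstm_size and config.input_size
lstm_size, input_size = 4, 1
rng = np.random.default_rng(0)

last = rng.standard_normal((2, lstm_size))            # LSTM output for a batch of 2
weight = rng.standard_normal((lstm_size, input_size)) # like the tf.Variable above
bias = np.full((input_size,), 0.1)

# Same computation as tf.nn.tanh(tf.matmul(last, weight) + bias)
prediction = np.tanh(last @ weight + bias)
print(prediction.shape)  # (2, 1)
```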
Question: is this the same as adding tf.layers.dense() or tf.contrib.layers.fully_connected()?
hidden = tf.layers.dense(last, units=1, activation=tf.nn.relu)
or
tf.contrib.layers.fully_connected(last, num_outputs=1, activation_fn=tf.nn.relu)
Question: if I do this:
hidden = tf.layers.dense(last, units=1, activation=tf.nn.relu)
prediction = tf.contrib.layers.fully_connected(hidden, num_outputs=1, activation_fn=tf.nn.relu)
does that mean I have two dense layers?
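To make the stacking concrete, here is a NumPy sketch (hypothetical shapes) of what chaining two such layers computes, with each layer being its own matmul + bias + activation:

```python
import numpy as np

def dense(x, w, b, act):
    """One dense layer: matmul + bias + activation."""
    return act(x @ w + b)

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(1)

last = rng.standard_normal((2, 4))                    # LSTM output, batch of 2
w1, b1 = rng.standard_normal((4, 1)), np.zeros(1)     # first layer's parameters
w2, b2 = rng.standard_normal((1, 1)), np.zeros(1)     # second layer's parameters

hidden = dense(last, w1, b1, relu)        # first dense layer
prediction = dense(hidden, w2, b2, relu)  # second dense layer stacked on top
print(prediction.shape)  # (2, 1)
```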
Thanks in advance!