I am trying to train a network in TensorFlow that classifies network traffic into 2 classes based on 3 features.
This is my dataset:
[[[80, 0.0, 1],
[80, 10.00087022781372, 1],
[80, 20.00172996520996, 1],
[80, 30.002599954605103, 1],
[80, 40.003480195999146, 1],
[80, 50.00434994697571, 1]],
[[80, 0.0, 1],
[80, 10.00091004371643, 1],
[80, 20.00171995162964, 1],
[80, 30.00259017944336, 1],
[80, 40.00345993041992, 1],
[80, 50.00433015823364, 1]],
[[372, 0.0, 1],
[372, 10.000890016555786, 1],
[372, 24.0006902217865, 1],
[372, 45.0004301071167, 1]],...]
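Since the sequences have different lengths, I feed them one at a time with batch size 1, so each sample should end up with shape (1, seq_len, 3) to match the input placeholder below. A minimal sketch of that reshaping (with made-up values, not my real preprocessing):

import numpy as np

# Illustrative only: `dataset` stands for the nested list shown above.
dataset = [
    [[80, 0.0, 1], [80, 10.0, 1], [80, 20.0, 1]],
    [[372, 0.0, 1], [372, 10.0, 1], [372, 24.0, 1], [372, 45.0, 1]],
]

for sample in dataset:
    arr = np.asarray(sample, dtype=np.float32)          # shape: (seq_len, 3)
    batch = arr.reshape(1, arr.shape[0], arr.shape[1])  # shape: (1, seq_len, 3)
    print(batch.shape, batch.dtype)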
This is my network:
import numpy as np
import tensorflow as tf

# Parameters
units_in_cell = 128
n_epochs = 3
learning_rate = 0.001
batch_size = 1
n_features = 3
n_output = 1

# Placeholders
x = tf.placeholder(tf.float32, [None, None, n_features], name='inputs')
y = tf.placeholder(tf.float32, name='labels')

# Cell and initial state
cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicLSTMCell(num_units=units_in_cell, activation=tf.nn.relu),
    output_size=n_output)
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

# An additional dense layer before the final predictions
y_pred = tf.layers.dense(inputs=outputs, units=1, activation=None)
# Predictions (one dimensional)
y_pred = tf.squeeze(y_pred)

# Loss and optimizer
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()

# train_x, train_y, steps_per_epoch and saver are defined in earlier cells (not shown)
with tf.Session() as sess:
    output_pred = []
    sess.run(init)
    for epoch in range(n_epochs):
        print("Epoch: ", epoch + 1, end=' \n ')
        for step in range(steps_per_epoch):
            for (a, b) in zip(train_x, train_y):
                X_values = np.array(a).reshape(1, np.array(a).shape[0], np.array(a).shape[1])
                y_values = b
                sess.run(train, feed_dict={x: X_values, y: y_values})
            if step % 10 == 0:
                c = sess.run(loss, feed_dict={x: X_values, y: y_values})
                print("Step: ", step + 1, "loss = ", c)
    print("Optimization Finished")
    training_loss = sess.run(loss, feed_dict={x: X_values, y: y_values})
    print("Training loss= ", training_loss)
    save_path = saver.save(sess, "./models/tensorflow1.ckpt")
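For reference, the feed values go through a plain NumPy conversion inside sess.run, so an illustrative way to check them before training (names as in my loop above; this is just a sketch, not part of my script) would be:

# Try the conversion TensorFlow applies to feed_dict values and report the first sample that fails.
for i, (a, b) in enumerate(zip(train_x, train_y)):
    try:
        np.asarray(a, dtype=np.float32)   # inputs
        np.asarray(b, dtype=np.float32)   # label
    except TypeError as e:
        print("bad sample at index", i, ":", e)
        break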
After running the network, I get this error:
Epoch: 1
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-95172c547171> in <module>
15 #y_values = next(iter_train_y)
16
---> 17 sess.run(train, feed_dict={x: X_values, y: y_values})
18
19 if step % 10 == 0:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
898 try:
899 result = self._run(None, fetches, feed_dict, options_ptr,
--> 900 run_metadata_ptr)
901 if run_metadata:
902 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1102 feed_handles[subfeed_t] = subfeed_val
1103 else:
-> 1104 np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
1105
1106 if (not is_tensor_handle_feed and
C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\numeric.py in asarray(a, dtype, order)
499
500 """
--> 501 return array(a, dtype, copy=False, order=order)
502
503
TypeError: float() argument must be a string or a number, not 'set'
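As far as I can tell from the last frame, the failure happens when NumPy tries to turn a fed value into a float32 array. The same message can be reproduced with a Python set, so it looks like one of my y_values (or something nested inside the data) is a set rather than a number or list; a minimal sketch:

import numpy as np

np.asarray([1.0, 0.0], dtype=np.float32)   # works: plain list of numbers
np.asarray({1.0, 0.0}, dtype=np.float32)   # TypeError: float() argument must be a string or a number, not 'set'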
If I run the network with dummy data similar to my dataset, it works fine.
Can someone tell me what the problem might be?
Thanks in advance :)