I am new to TensorFlow. I looked for examples of implementing a multilayer perceptron with TensorFlow, but I only found examples using the MNIST image dataset. Apart from MNIST, can I build a neural network model with the same optimization and cost functions and train it on data in numeric format? In other words, can I train my own numeric dataset with TensorFlow?
Is there any example of training on a new dataset?
Answer 0: (score: 3)
Finally I figured it out: building, training, and minimizing the cost/loss of an artificial neural network using a single-layer perceptron with the tensorflow, numpy, and matplotlib packages. The data is used in the form of arrays instead of MNIST. Here is the code.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.0008
training_epochs = 2000
display_step = 50
# training data as NumPy arrays; they are fed into the graph through the placeholders below
inputX = np.array([[2, 3],
                   [1, 3]])
inputY = np.array([[2, 3],
                   [1, 3]])
x = tf.placeholder(tf.float32, [None, 2])
y_ = tf.placeholder(tf.float32, [None, 2])
W = tf.Variable([[0.0,0.0],[0.0,0.0]])
b = tf.Variable([0.0,0.0])
layer1 = tf.add(tf.matmul(x, W), b)
y = tf.nn.softmax(layer1)
cost = tf.reduce_sum(tf.pow(y_-y,2))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
avg_set = []
epoch_set = []
for i in range(training_epochs):
    sess.run(optimizer, feed_dict = {x: inputX, y_: inputY})
    # log training
    if i % display_step == 0:
        cc = sess.run(cost, feed_dict = {x: inputX, y_: inputY})
        # check what it thinks when you give it the input data
        print(sess.run(y, feed_dict = {x: inputX}))
        print("Training step:", '%04d' % (i), "cost=", "{:.9f}".format(cc))
        avg_set.append(cc)
        epoch_set.append(i + 1)
print("Optimization Finished!")
training_cost = sess.run(cost, feed_dict = {x: inputX, y_: inputY})
print("Training cost = ", training_cost, "\nW=", sess.run(W),
"\nb=", sess.run(b))
plt.plot(epoch_set,avg_set,'o',label = 'SLP Training phase')
plt.ylabel('cost')
plt.xlabel('epochs')
plt.legend()
plt.show()
Later, by adding a hidden layer, it can also be implemented as a Multi Layer Perceptron, as sketched below.
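As a rough illustration of that last step, here is a minimal sketch of how a hidden layer could replace the layer1 / y definitions in the code above; the hidden-layer size (4 units) and the ReLU activation are arbitrary choices for illustration, not part of the original answer:

W1 = tf.Variable(tf.random_normal([2, 4]))   # input (2 features) -> hidden (4 units)
b1 = tf.Variable(tf.zeros([4]))
hidden = tf.nn.relu(tf.add(tf.matmul(x, W1), b1))
W2 = tf.Variable(tf.random_normal([4, 2]))   # hidden (4 units) -> output (2 classes)
b2 = tf.Variable(tf.zeros([2]))
y = tf.nn.softmax(tf.add(tf.matmul(hidden, W2), b2))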
Answer 1: (score: 0)
Modifying the TensorFlow multilayer perceptron example is straightforward. You need to specify the input and output sizes here:
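In that example these sizes are set by a few parameters near the top; a minimal sketch, assuming the usual n_input / n_classes / n_hidden_* layout (the values shown are the MNIST defaults and need to be replaced with your own dataset's dimensions):

n_hidden_1 = 256   # units in the first hidden layer
n_hidden_2 = 256   # units in the second hidden layer
n_input = 784      # set this to the number of features in your dataset
n_classes = 10     # set this to the number of classes in your dataset

# placeholders sized by the parameters above
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])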
Finally, you have to change the training loop code to use your dataset. I can't help you any further with that, because I know nothing about your dataset.
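For illustration only, a generic training loop over a hypothetical NumPy dataset might look like the sketch below; my_X (shape [num_samples, n_input]) and my_Y (one-hot labels, shape [num_samples, n_classes]) stand in for whatever your data actually is, and the names x, y, optimizer, sess, and training_epochs are assumed to come from the example being modified:

# my_X: [num_samples, n_input] features, my_Y: [num_samples, n_classes] one-hot labels
batch_size = 32   # arbitrary choice for illustration
for epoch in range(training_epochs):
    # iterate over the dataset in mini-batches
    for start in range(0, len(my_X), batch_size):
        batch_x = my_X[start:start + batch_size]
        batch_y = my_Y[start:start + batch_size]
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})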