I have been trying to build a simple 2-layer neural network. I went through the TensorFlow API and the official tutorials and put together a layered model, but I am running into trouble with the neural network. Here is the part of the code that causes the error:
with graph.as_default():
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.int32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)
    weights0 = tf.Variable(tf.truncated_normal([image_size**2, num_labels]))
    biases0 = tf.Variable(tf.zeros([num_labels]))
    hidden1 = tf.nn.relu(tf.matmul(tf_test_dataset, weights0) + biases0)
    weights1 = tf.Variable(tf.truncated_normal([num_labels, image_size * image_size]))
    biases1 = tf.Variable(tf.zeros([image_size**2]))
    hidden2 = tf.nn.relu(tf.matmul(hidden1, weights1) + biases1)
    logits = tf.matmul(hidden2, weights0) + biases0
    labels = tf.expand_dims(tf_train_labels, 1)
    indices = tf.expand_dims(tf.range(0, batch_size), 1)
    concated = tf.concat(1, [indices, tf.cast(labels, tf.int32)])
    onehot_labels = tf.sparse_to_dense(concated, tf.pack([batch_size, num_labels]), 1.0, 0.0)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, onehot_labels))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights0) + biases0), weights1) + biases1), weights0) + biases0)
    test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights0) + biases0), weights1) + biases1), weights0) + biases0)
Here is the full code: http://pastebin.com/sX7RqbAf

I am using TensorFlow with Python 2.7. I am new to both neural networks and machine learning, so please forgive any mistakes. Thanks in advance.
Answer 0 (score: 2)
In your example:

tf_train_labels has shape [batch_size, num_labels]
labels has shape [batch_size, 1, num_labels]
indices has shape [batch_size, 1]

So when you write:

concated = tf.concat(1, [indices, tf.cast(labels, tf.int32)])

it raises an error because labels and indices do not have the same rank: labels has a third dimension of size num_labels (presumably 10), while indices has no third dimension at all.