Unable to build a binary classifier

Asked: 2019-07-19 10:42:16

Tags: python tensorflow machine-learning neural-network tensor

I don't know if this is really the right place, but I have been stuck for 2 days, so I'm giving it a try. I followed the MNIST feed-forward neural network model and tried to adapt it to my problem (binary classification).

Two days ago I had a problem with the dtype of my tensor Y, and I asked about it here on Stack Overflow: binary classification, xentropy mismatch, invalid argument (Received a label value of 1 which is outside the valid range of [0, 1)). I got no response, so I came up with my own "solution" by converting the tensor Y with tf.to_float(y).
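
Concretely, that workaround just casts the label placeholder to float so that its dtype matches the logits in the cross-entropy (it is the same line that appears in the full code below):

# cast the int32 label placeholder to float32 so its dtype matches the logits
xentropy = tf.keras.backend.binary_crossentropy(tf.to_float(y), logits)
loss = tf.reduce_mean(xentropy)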

Now I am running into this error:

InvalidArgumentError                      Traceback (most recent call last)
~\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1333     try:
-> 1334       return fn(*args)
   1335     except errors.OpError as e:

~\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1318       return self._call_tf_sessionrun(
-> 1319           options, feed_dict, fetch_list, target_list, run_metadata)
   1320 

~\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1406         self._session, options, feed_dict, fetch_list, target_list,
-> 1407         run_metadata)
   1408 

InvalidArgumentError: targets[0] is out of range
     [[{{node in_top_k_2/InTopKV2}}]]

targets is the second argument of this function: tf.nn.in_top_k(logits, y, 1)
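
As far as I understand the documentation, tf.nn.in_top_k expects logits of shape (batch_size, num_classes) and integer class indices in [0, num_classes); since my network has a single output column, a label equal to 1 is already outside that range. A minimal standalone sketch (TF 1.x, with made-up constants) that seems to reproduce the same error:

import tensorflow as tf

logits = tf.constant([[0.3], [0.7]])            # shape (2, 1): only one "class" column
labels = tf.constant([1, 0], dtype=tf.int32)    # label 1 is >= num_classes (which is 1 here)

correct = tf.nn.in_top_k(logits, labels, 1)
with tf.Session() as sess:
    sess.run(correct)   # raises InvalidArgumentError: targets[0] is out of range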

Here is my complete code:

It would be great if someone could help me and tell me where my mistake is and what I am doing wrong, because I am close to giving up..

import tensorflow as tf
import numpy as np   # needed below for np.sqrt and np.random.permutation

n_inputs = 28
n_hidden1 = 15
n_hidden2 = 5
n_outputs = 1

def reset_graph(seed=42):
    # helper from the MNIST example this is based on (assumed): clear the graph and fix the seeds
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

reset_graph()

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")  # placeholder that gets its values through feed_dict
y = tf.placeholder(tf.int32, shape=(None), name="y")              # None => any batch size

def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name):
        n_inputs = int(X.shape[1])
        stddev = 2 / np.sqrt(n_inputs)
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)  # n_inputs x n_neurons matrix of values close to 0
        W = tf.Variable(init, name="kernel")   # random weights
        b = tf.Variable(tf.zeros([n_neurons]), name="bias")
        Z = tf.matmul(X, W) + b
        tf.cast(Z, tf.int32)   # note: the result of this cast is not assigned, so Z stays float32
        if activation is not None:
            return activation(Z)
        else:
            return Z

hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
                           activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
                           activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")

xentropy = tf.keras.backend.binary_crossentropy(tf.to_float(y), logits)  # labels cast to float to match the logits dtype
loss = tf.reduce_mean(xentropy)
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)   # this is the op that raises the InvalidArgumentError
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))


init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50

def shuffle_batch(X, y, batch_size):
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch

# up to here, no errors ...
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, Y_train, batch_size):
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        acc_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
        print(epoch, "Batch accuracy:", acc_batch, "Val accuracy:", acc_val)

    save_path = saver.save(sess, "./my_model_final.ckpt")
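
In case it helps to see what I am aiming for: with a single sigmoid output I would expect the accuracy to be computed by thresholding the prediction rather than with in_top_k, roughly like the untested sketch below (this is not the code I am currently running, and it assumes the single logit is passed through a sigmoid):

# threshold the sigmoid output at 0.5 and compare with the float labels
prediction = tf.cast(tf.nn.sigmoid(logits) > 0.5, tf.float32)          # shape (None, 1)
correct = tf.equal(prediction, tf.reshape(tf.to_float(y), [-1, 1]))    # element-wise match
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))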

I really need some help... thank you!

0 Answers:

No answers yet