Accuracy of multiple outputs in TensorFlow

Asked: 2016-09-25 02:50:29

Tags: python tensorflow

I have a system that can have 65 different attributes at any given moment, and I want to predict them with a DNN. My input is the system's properties (79 binary inputs), and the output is 65 discrete states, each of which can be 0, 1, or 2. So for each input I may get an output vector like [0,0,1,2,...,2,1,0,1,1]. To use a DNN, I want 65 softmax outputs, each over three classes, so the output y of the DNN is a vector of shape [None, 65*3].
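(A minimal sketch of this label encoding, assuming a hypothetical integer label vector with one state in {0, 1, 2} per attribute, flattened into a one-hot target with a 3-wide block per attribute; the names labels and target are illustrative:)

import numpy as np

# hypothetical integer labels, one state per attribute (length 65 in the real problem)
labels = np.array([0, 0, 1, 2, 1])

# one-hot encode each attribute into its own 3-wide block
n = len(labels)
target = np.zeros(n * 3)
target[np.arange(n) * 3 + labels] = 1.0   # -> [1,0,0, 1,0,0, 0,1,0, 0,0,1, 0,1,0]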

I implemented this in TensorFlow with a fully connected network. However, I have a problem computing the accuracy of each output. For each sample and a given y_, the accuracy can be obtained as follows:

correct_predct = tf.reduce_sum(tf.cast([tf.equal( tf.argmax(y_[:,i*3:(i+1)*3],0) , tf.argmax(y[:,i*3:(i+1)*3],0)) for i in range(65)],tf.float32))

accuracy = tf.reduce_mean(tf.scalar_mul(1/65.0,correct_predct))

However, it does not work, because of the way y_ and y are defined.
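(For reference, a minimal reshape-based sketch of the same per-attribute accuracy, assuming y and y_ both have shape [None, 195]; note that the argmax has to run over the class axis, whereas axis 0 in the snippet above runs over the batch:)

# hypothetical reshape-based version; y3 has shape [batch, attribute, class]
y3  = tf.reshape(y,  [-1, 65, 3])
y3_ = tf.reshape(y_, [-1, 65, 3])
correct = tf.equal(tf.argmax(y3, 2), tf.argmax(y3_, 2))   # [batch, 65] booleans
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))   # mean over batch and attributes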

Here is my code:

import tensorflow as tf
import numpy as np
import scipy.io as sio
from tensorflow.python.training import queue_runner
tf.logging.set_verbosity(tf.logging.FATAL)

sess = tf.InteractiveSession()

maxiter = 50000
display = 100
decay_rate = 0.9
starter_learning_rate = 0.001
power = 0.75
l2lambda = .01
init_momentum = 0.9
decay_step = 5000

nnodes1 = 350
nnodes2 = 100
batch_size = 50
var = 2.0/(67+195)

print decay_rate,starter_learning_rate,power,l2lambda,init_momentum,decay_step,nnodes1,nnodes2,batch_size

result_mat = sio.loadmat('binarySysFuncProf.mat')
feature_mat = sio.loadmat('binaryDurSamples.mat')

result = result_mat['binarySysFuncProf']
feature = feature_mat['binaryDurSamples']

train_size=750000
test_size=250000
train_feature = feature[0:train_size,:]
train_output = result[0:train_size,:]

test_feature = feature[train_size + 1 : train_size + test_size , :]
test_output = result[train_size + 1 : train_size + test_size , :]

# import the data
#from tensorflow.examples.tutorials.mnist import input_data
# placeholders, which are the training data
x = tf.placeholder(tf.float64, shape=[None,79])
y_ = tf.placeholder(tf.float64, shape=[None,195])
learning_rate = tf.placeholder(tf.float64, shape=[])

# define the variables
W1 = tf.Variable(np.random.normal(0,var,(79,nnodes1)))
b1 = tf.Variable(np.random.normal(0,var,nnodes1))

W2 = tf.Variable(np.random.normal(0,var,(nnodes1,nnodes2)))
b2 = tf.Variable(np.random.normal(0,var,nnodes2))

W3 = tf.Variable(np.random.normal(0,var,(nnodes2,1)))
b3 = tf.Variable(np.random.normal(0,var,1))

# Passing global_step to minimize() will increment it at each step.
global_step = tf.Variable(0, trainable=False)
momentum = tf.Variable(init_momentum, trainable=False)

# forward pass: two sigmoid hidden layers followed by a softmax output
layer1 = tf.nn.sigmoid(tf.matmul(x,W1) + b1)
layer2 = tf.nn.sigmoid(tf.matmul(layer1,W2) + b2)
y = tf.nn.softmax(tf.matmul(layer2,W3) + b3)

cost_function = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y)))
#cost_function = tf.sum([tf.reduce_mean((-tf.reduce_sum(y_[:,i*3:(i+1)*3])*tf.log(y[:,i*3:(i+1)*3]))) for i in range(65)])
correct_predct = tf.reduce_sum(tf.cast([tf.equal( tf.argmax(y_[:,i*3:(i+1)*3],0) , tf.argmax(y[:,i*3:(i+1)*3],0)) for i in range(65)],tf.float32))
accuracy = tf.reduce_mean(tf.scalar_mul(1/65.0,correct_predct))

l2regularization = tf.reduce_sum(tf.square(W1)) + tf.reduce_sum(tf.square(b1)) + tf.reduce_sum(tf.square(W2)) + tf.reduce_sum(tf.square(b2)) + tf.reduce_sum(tf.square(W3)) + tf.reduce_sum(tf.square(b3))

loss = (cost_function) + l2lambda*l2regularization

# define the learning_rate and its decaying procedure.
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,decay_step, decay_rate, staircase=True)

# define the optimizer and the training step
#train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
train_step = tf.train.MomentumOptimizer(learning_rate,0.9).minimize(loss, global_step=global_step)

# initialize the variables
sess.run(tf.initialize_all_variables())

# Train the model for maxiter iterations; sampling a random mini-batch each step makes this SGD
for i in range(maxiter):
  batch = np.random.randint(0,train_size,size=batch_size)
  train_step.run(feed_dict={x:train_feature[batch,:], y_:train_output[batch,:]})  
  if np.mod(i,display) == 0:
    train_loss = cost_function.eval(feed_dict={x: train_feature[0:train_size,:], y_: train_output[0:train_size,:]})
    test_loss = cost_function.eval(feed_dict={x: test_feature, y_: test_output})
    train_acc = 0#accuracy.eval(feed_dict={x: train_feature[0:train_size,:], y_: train_output[0:train_size,:]})
    test_acc = 0#accuracy.eval(feed_dict={x: test_feature, y_: test_output})
    print "Iter" , i, "lr" , learning_rate.eval() , "| Train loss" , train_loss , "| Test loss", test_loss  , "| Train Accu", train_acc , "| Test Accu", test_acc  ,"||W||",l2regularization.eval() , "lmbd*||W||", l2lambda*l2regularization.eval()

Since I defined y_ and y as matrices with 195 columns, I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 555, in eval
    return _eval_using_default_session(self, feed_dict, self.graph, session)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3498, in _eval_using_default_session
    return session.run(tensors, feed_dict)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 625, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (750000, 195) for Tensor u'Placeholder_7:0', which has shape '(?, 3)'

Any comment or help with computing the accuracy is appreciated.

Afshin

1 Answer:

Answer 0 (score: 1)

Do you mean you have 65 classes that are not mutually exclusive?

If so, you are doing multi-label classification; see the wiki: https://en.wikipedia.org/wiki/Multi-label_classification

To train a multi-label classifier, you need to one-hot encode the classes. Let's assume your output tensor has shape (-1, 65), where a value of 1 indicates that the class should be predicted.
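(A minimal sketch of such a target, using hypothetical class indices:)

import numpy as np

# hypothetical multi-label target over 65 classes:
# this sample belongs to classes 3 and 7 simultaneously
target = np.zeros(65)
target[[3, 7]] = 1.0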

Your output layer should use a "sigmoid" activation, and you should use something like "binary_crossentropy" as the loss function.
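(A minimal sketch of that setup, assuming logits of shape [None, 65] and the TF 1.x keyword signature of sigmoid_cross_entropy_with_logits; the op applies the sigmoid internally, so it takes the raw logits:)

logits = tf.matmul(layer2, W3) + b3      # W3 would need shape [nnodes2, 65]
y = tf.nn.sigmoid(logits)                # independent per-class probabilities
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=logits))

Each output unit then acts as an independent binary classifier, which is exactly what multi-label classification requires.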