Tensor shape error: Must be rank 2 but is rank 3

Time: 2017-08-01 12:17:23

Tags: python machine-learning tensorflow neural-network deep-learning

I could not find any documentation, research, or blog posts that could help me build a classifier for text sequences (features). The text sequences that I have contain network logs.

I am building a GRU model using TensorFlow, with an SVM as the classification function. I am having trouble with the tensor shapes: it says "Shape must be rank 2 but is rank 3". The data I am using to train the neural network comes from generating the parquet file.

The goal of my project is to use this GRU-SVM model for intrusion detection on Kyoto University's honeypot system intrusion detection dataset. The dataset has 23 features and one label (whether or not there is an intrusion in the network). Here is a sample.
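To make the shapes concrete, here is a minimal sketch with made-up values (these are not actual dataset rows; the batch size matches the code further down):

import numpy as np

BATCH_SIZE = 200
seqlen = 23  # the 23 features are fed to the GRU as a sequence of length 23

# Stand-ins for one training batch (random, for shape illustration only):
example_batch = np.random.rand(BATCH_SIZE, seqlen).astype(np.float32)
label_batch = np.random.randint(0, 2, size=BATCH_SIZE)  # 1 = intrusion, 0 = none

# dynamic_rnn expects rank-3 input [batch, time, depth], hence the extra axis:
rnn_input = example_batch[..., np.newaxis]  # shape: (200, 23, 1)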

Note: The reason I built a MultiRNNCell (the code snippet isolated below) is that I was getting an error similar to this one:

network = []
for index in range(NLAYERS):
    network.append(tf.contrib.rnn.GRUCell(CELLSIZE))
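If it helps, my guess (not stated above) is that the referenced error is TF 1.x's complaint when one cell object is reused for every layer ("Attempt to reuse RNNCell ... with a different variable scope"); building a fresh GRUCell per layer, as in the snippet, is the usual fix. A minimal sketch of the two patterns:

import tensorflow as tf

CELLSIZE = 512
NLAYERS = 3

# Problematic pattern (guessing at the referenced error): repeating one cell
# object, e.g. tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.GRUCell(CELLSIZE)] * NLAYERS),
# makes TF 1.x try to reuse a single variable scope across layers and fail.

# Fresh cell per layer, then stacked; state_is_tuple=False concatenates the
# per-layer states into one [batch, CELLSIZE * NLAYERS] tensor (matching Hin):
mcell = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.GRUCell(CELLSIZE) for _ in range(NLAYERS)],
    state_is_tuple=False)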


Thanks in advance for your replies!

Update 08/01/2017: Improved the source code based on @jdehesa's suggestions:

import data
import numpy as np
import os
import tensorflow as tf

BATCH_SIZE = 200
CELLSIZE = 512
NLAYERS = 3
SVMC = 1
learning_rate = 0.01

TRAIN_PATH = '/home/darth/GitHub Projects/gru_svm/dataset/train/6'

def main():
    # Read one epoch of (features, label, key) batches from the parquet-derived files
    examples, labels, keys = data.input_pipeline(path=TRAIN_PATH, batch_size=BATCH_SIZE, num_epochs=1)

    seqlen = examples.shape[1]

    x = tf.placeholder(shape=[None, seqlen, 1], dtype=tf.float32, name='x')
    y_input = tf.placeholder(shape=[None], dtype=tf.int32, name='y_input')
    y = tf.one_hot(y_input, 2, dtype=tf.float32, name='y')
    Hin = tf.placeholder(shape=[None, CELLSIZE * NLAYERS], dtype=tf.float32, name='Hin')

    # Stack NLAYERS fresh GRU cells into one multi-layer RNN cell
    network = []
    for index in range(NLAYERS):
        network.append(tf.contrib.rnn.GRUCell(CELLSIZE))
    mcell = tf.contrib.rnn.MultiRNNCell(network, state_is_tuple=False)

    Hr, H = tf.nn.dynamic_rnn(mcell, x, initial_state=Hin, dtype=tf.float32)

    # Take the RNN output at the last time step, shape [batch, CELLSIZE]
    Hf = tf.transpose(Hr, [1, 0, 2])
    last = tf.gather(Hf, int(Hf.get_shape()[0]) - 1)

    weight = tf.Variable(tf.truncated_normal([CELLSIZE, 2], stddev=0.01), tf.float32, name='weights')
    bias = tf.Variable(tf.constant(0.1, shape=[2]), name='bias')
    logits = tf.matmul(last, weight) + bias

    # SVM objective: L2 regularization plus hinge loss
    regularization_loss = 0.5 * tf.reduce_sum(tf.square(weight))
    hinge_loss = tf.reduce_sum(tf.maximum(tf.zeros([BATCH_SIZE, 1]), 1 - y * logits))
    loss = regularization_loss + SVMC * hinge_loss

    train_step = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)

    init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())

    with tf.Session() as sess:
        sess.run(init_op)
        train_loss = 0
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        try:
            for index in range(100):
                example_batch, label_batch, key_batch = sess.run([examples, labels, keys])
                _, train_loss_ = sess.run([train_step, loss],
                                          feed_dict={x: example_batch[..., np.newaxis],
                                                     y_input: label_batch,
                                                     Hin: np.zeros([BATCH_SIZE, CELLSIZE * NLAYERS])})
                train_loss += train_loss_
                print('[{}] loss : {}'.format(index, (train_loss / 1000)))
                print('Weights : {}'.format(sess.run(weight)))
                print('Biases : {}'.format(sess.run(bias)))
                train_loss = 0
        except tf.errors.OutOfRangeError:
            print('EOF reached.')
        except KeyboardInterrupt:
            print('Interrupted by user at {}'.format(index))
        finally:
            coord.request_stop()
            coord.join(threads)

main()
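For reference, the loss being minimized above is the usual L2-regularized hinge loss of a linear SVM, loss = 0.5 * ||weight||^2 + SVMC * sum(max(0, 1 - y * logits)), with SVMC playing the role of the SVM's C parameter; this is what makes the final layer act as the SVM classifier on top of the GRU features.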

My next step is to verify whether the results I am getting are correct.

1 Answer:

Answer 0 (score: 1):

The problem is at this line:

logits = tf.matmul(x, weight) + bias

I think you meant:

logits = tf.matmul(last, weight) + bias
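To see why, here is a minimal standalone sketch (assuming TF 1.x as in the question; the shapes are taken from the code above). The MatMul op only accepts rank-2 inputs, and x is the rank-3 RNN input, while last is the rank-2 output of the final time step:

import tensorflow as tf

# Shapes as in the question's code:
x = tf.placeholder(shape=[None, 23, 1], dtype=tf.float32)         # rank 3: [batch, time, depth]
weight = tf.Variable(tf.truncated_normal([512, 2], stddev=0.01))  # rank 2: [CELLSIZE, 2]

# The failing line: mixing a rank-3 and a rank-2 argument raises
# "ValueError: Shape must be rank 2 but is rank 3 for 'MatMul' ...":
# logits = tf.matmul(x, weight)

# `last` (the RNN output at the final time step) has shape [batch, 512], rank 2,
# so multiplying by the [512, 2] weight matrix is well-formed:
last = tf.placeholder(shape=[None, 512], dtype=tf.float32)
logits = tf.matmul(last, weight)  # shape: [batch, 2]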