Shape mismatch in TensorFlow

Asked: 2018-02-21 22:44:36

Tags: python-3.x tensorflow nltk tensor

I am relatively new to TensorFlow and am trying to put my first model into production. The model trains and tests well, but getting this algorithm into production has proved very challenging. Can anyone tell me why my evaluation line raises the following error?

ValueError: Cannot feed value of shape (1, 1095277) for Tensor 'input:0', which has shape '(?, 2912)'

Which tensor has a length of 1 x 1095277? The code I have implemented is below (I have tried several different ways of doing this):

import re
import pickle

import numpy as np
import pandas as pd
import tensorflow as tf
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

def use_neural_network(input_data, lexicon, stopWords):
    x = tf.placeholder('float', shape=[None, 2912], name='input')
    y = tf.placeholder('float', name='output')

    #x = tf.Variable('float', [None, 2912], name='input')
    #y = tf.Variable('float', name='output')

    hidden_1_layer = {'weights':tf.Variable(tf.random_normal([2912, 1])),'biases':tf.Variable(tf.random_normal([1]))}
    output_layer = {'weights':tf.Variable(tf.random_normal([1, 2])),'biases':tf.Variable(tf.random_normal([2])),}
    def neural_network_model(data):
        l1 = tf.add(tf.matmul(data,hidden_1_layer['weights']), hidden_1_layer['biases'])
        l1 = tf.nn.relu(l1)
        output = tf.matmul(l1,output_layer['weights']) + output_layer['biases']
        return output

    prediction = neural_network_model(x) 
    saver=tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess,"model.ckpt")   
        lemmatizer = WordNetLemmatizer()
        current_words = word_tokenize(input_data.lower())
        current_words = [re.sub(r"[^a-zA-Z]", " ", i) for i in current_words]
        current_words = [re.sub(r"\s{1,10}", " ", i) for i in current_words]
        current_words = [i for i in current_words if i not in stopWords]   
        current_words = [lemmatizer.lemmatize(i) for i in current_words]
        features = np.zeros(len(lexicon))
        for word in current_words:
            if word.lower() in lexicon:
                index_value = lexicon.index(word.lower())
                features[index_value] += 1
                print(pd.Series(features).sum())
        features = np.array(list(features))
        # This is the evaluation line that raises the ValueError above.
        result = (sess.run(tf.argmax(prediction.eval(feed_dict={x:[features]}),1)))
        if result[0] == 0:
            print('No:', input_data)
        elif result[0] == 1:
            print('Yes:', input_data)

with open('lexicon_1.pickle','rb') as f:
    lexicon = pickle.load(f)
stopWords = set(stopwords.words('english'))
use_neural_network('I do not understand the problem', lexicon, stopWords)
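To narrow down which tensor is 1 x 1095277, a minimal check (a sketch only, using the variable names from the function above, placed just before the sess.run call) is to print the shapes involved; the feature vector takes its length from the lexicon, not from the placeholder:

print('placeholder shape  :', x.get_shape())               # (?, 2912), from the placeholder definition
print('lexicon length     :', len(lexicon))                # if this prints 1095277, the lexicon is the culprit
print('features feed shape:', np.array([features]).shape)  # (1, len(lexicon))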

1 Answer:

Answer 0 (score: 1):

Your network expects an input whose size is fixed by hidden_1_layer: its weight matrix has shape [2912, 1], so every example you feed must have 2912 features:
hidden_1_layer = {'weights':tf.Variable(tf.random_normal([2912, 1])), ...

But when you call the prediction, you are not feeding it an input with 2912 features; you are feeding one whose length equals the length of the lexicon, which (presumably) contains 1095277 entries:

features = np.zeros(len(lexicon))
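A minimal sanity check, assuming the lexicon built during training had exactly 2912 entries and was pickled on its own (the file name training_lexicon.pickle below is hypothetical), is to load that same lexicon at inference time and assert that its length matches the input placeholder:

import pickle

# Hypothetical file: the lexicon saved at training time, not a larger, freshly built one.
with open('training_lexicon.pickle', 'rb') as f:
    lexicon = pickle.load(f)

# The feature vector is built with np.zeros(len(lexicon)), so this length
# must equal the second dimension of the 'input' placeholder.
assert len(lexicon) == 2912, 'lexicon has %d entries, but the model expects 2912' % len(lexicon)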

Also, I suspect you are wrapping the features array twice, first with features = np.array(list(features)) and then again with x:[features]. I am not fully sure about your data, but this does not feel right.
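As a small illustration (a sketch, assuming features is a NumPy array that already has length 2912): wrapping the vector in a list is enough to make a batch of one, so the extra np.array(list(features)) step is redundant:

feed = {x: [features]}                # shape (1, 2912): a batch containing one example
feed = {x: features.reshape(1, -1)}   # equivalent, with an explicit reshape
result = sess.run(tf.argmax(prediction, 1), feed_dict=feed)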

Personally, I find it easiest to learn by copying a tutorial and modifying lines one at a time, rather than trying to write everything from scratch.