How to implement the remaining 'for loop' in this neural network in TensorFlow

Asked: 2018-04-30 13:36:51

Tags: python for-loop tensorflow neural-network gradient-descent

I am trying to build a neural network in TensorFlow. The dataset is just the lengths and widths of petals, and the output is 1/0 depending on the type.

So far, my code looks like this:

Define the variables:

x = [[3,1.5],
     [2,1],
     [4,1.5],
     [3,1],
     [3.5,0.5],
     [2,0.5],
     [5.5,1],
     [1,1]]

y = [1,
     0,
     1,
     0,
     1,
     0,
     1,
     0]

My question is: how do I arrange my 'for' loop so that it takes in the entire dataset at once and compares it against the actual output? The MNIST examples for TensorFlow use softmax cross entropy, where you can specify the actual and the predicted outputs in the function's arguments. With this simple dataset, how can I replicate the same thing in the remaining for loop, so that the code grabs all of the data for prediction and compares it against the actual output? Also, please point out if there is anything wrong with the shapes of my variables. Thanks.
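For reference, the softmax cross-entropy pattern from the TensorFlow MNIST examples mentioned above looks roughly like the sketch below. The names (x_, y_, logits) and the 784/10 shapes are just the MNIST conventions, not part of this question; the point is only that the loss op takes the actual and predicted outputs as arguments and is evaluated over however many rows are fed in at once:

import tensorflow as tf

# Hypothetical MNIST shapes: 784 input pixels, 10 classes.
x_ = tf.placeholder("float", [None, 784])
y_ = tf.placeholder("float", [None, 10])   # actual outputs (one-hot labels)

W_ = tf.Variable(tf.zeros([784, 10]))
b_ = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x_, W_) + b_            # predicted outputs (pre-softmax)

# The loss takes both tensors and averages over the whole batch in one call.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))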

2 Answers:

Answer 0 (score: 0)

You know, you could just use tflearn. It saves a lot of time and frustration =)

import tflearn
from tflearn.layers.core import fully_connected, input_data
from tflearn.layers.estimator import regression

# Each sample in the question has 2 features, so the input shape is [None, 2].
model = input_data(shape=[None, 2])
model = fully_connected(model, 1, activation='sigmoid')
# binary_crossentropy matches the single sigmoid output unit.
model = regression(model, loss='binary_crossentropy')
model = tflearn.DNN(model)
# trainX/trainY and testX/testY are assumed train/test splits of the data above.
model.fit(X_inputs=trainX, Y_targets=trainY, n_epoch=20,
          validation_set=(testX, testY), show_metric=True)
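Note that model.fit iterates over the whole dataset internally on every epoch, so with this approach the explicit training for loop from the question disappears entirely.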

Answer 1 (score: 0)

Managed to achieve my goal:

import tensorflow as tf
import numpy as np

# The full dataset from the question, as numpy arrays.
train_X = np.asarray([[3,1.5],[2,1],[4,1.5],[3,1],[3.5,0.5],[2,0.5],[5.5,1],[1,1]])
train_Y = np.asarray([[1],[0],[1],[0],[1],[0],[1],[0]])

# Placeholders with a flexible batch dimension: [None, 2] inputs, [None, 1] outputs.
x = tf.placeholder("float", [None, 2])
y = tf.placeholder("float", [None, 1])

# One weight per input feature, plus a bias.
W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1, 1]))

activation = tf.nn.sigmoid(tf.matmul(x, W) + b)   # predicted outputs
cost = tf.reduce_mean(tf.square(activation - y))  # mean squared error over the batch
optimizer = tf.train.GradientDescentOptimizer(.2).minimize(cost)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    # Each step feeds the entire dataset at once; the loop only repeats the update.
    for i in range(50000):
        sess.run(optimizer, feed_dict={x: train_X, y: train_Y})

    # Predictions for the whole dataset, to compare against train_Y.
    result = sess.run(activation, feed_dict={x: train_X})
    print(result)
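To connect this back to the cross-entropy pattern asked about in the question: for a single sigmoid output unit, the binary analogue of the MNIST softmax loss is tf.nn.sigmoid_cross_entropy_with_logits, applied to the pre-sigmoid logits. A minimal sketch of that swap, reusing the same x, y, W, and b as above:

# Pre-activation logits; the sigmoid is folded into the loss op for stability.
logits = tf.matmul(x, W) + b
activation = tf.nn.sigmoid(logits)  # still available for reading out predictions

# Like the softmax version, the loss takes the actual outputs (y) and the
# predicted logits as arguments and is averaged over the whole batch fed in.
cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))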