Linear regression using a neural network in TensorFlow, and normalization

Asked: 2016-12-01 04:29:09

Tags: regression normalization linear

I have been following this tutorial: https://blog.altoros.com/using-linear-regression-in-tensorflow.html

I know there are better ways to do linear regression, but I'm using this as a base for multivariable regression and multivariable nonlinear regression, to try to understand TensorFlow.

  1. Without normalizing my data at all, I get 'nan' from the GradientDescentOptimizer. I'm curious why that is. Why is normalization so important that the model won't run at all? How does subtracting the mean and dividing by the standard deviation suddenly make it work so well? (See the small numpy sketch I've added after the code below.)

  2. After normalizing the data, I want to recover the original values. (For reference, the denormalize() helper in the code below is what I mean.)

  3. Each data set seems to get normalized separately, with its own stddev and mean parameters: training data X, training data Y, test data X, and test data Y.

    However, when I run the network on new data, I assume that when predicting new values I have to normalize the input again. In that case, how do I make sense of the predicted Y? Should I un-normalize it using the training data's standard deviation and mean, or the new data's standard deviation and mean? When I feed the model normalized training data, I'm confused about what it actually fits and how to interpret W and b. I originally wanted to fit Y = mx + b, and I want to know what m and b really are. (My attempt at converting the fit back to original units is the sanity-check block near the end of the session code below.)

    Since I trained on the training data, I assumed I would need to store training_data's pre-normalization standard deviation and mean, and use those values to un-normalize any result from the network. But in fact, when I un-normalize using the new data's standard deviation and mean, I get more reasonable values. I don't think posting that code is worthwhile, because I just have a basic misunderstanding of what I need to do, but here is the basic code I'm using.

    import tensorflow as tf
    import numpy
    import matplotlib.pyplot as plt
    
    # Train a data set
    
    
    # X: size data
    size_data = [ 2104,  1600,  2400,  1416,  3000,  1985,  1534,  1427,
      1380,  1494,  1940,  2000,  1890,  4478,  1268,  2300,
      1320,  1236,  2609,  3031,  1767,  1888,  1604,  1962,
      3890,  1100,  1458,  2526,  2200,  2637,  1839,  1000,
      2040,  3137,  1811,  1437,  1239,  2132,  4215,  2162,
      1664,  2238,  2567,  1200,   852,  1852,  1203 ]
    
    # Y: price data (set to 5x + 30)
    price_data = [5*c + 30 for c in size_data]
    
    
    size_data = numpy.asarray(size_data)
    price_data = numpy.asarray(price_data)
    # Test a data set
    
    size_data_test = [ 1600, 1494, 1236, 1100, 3137, 2238 ]
    price_data_test = [5*c + 30 for c in size_data_test]
    size_data_test = numpy.asarray(size_data_test)
    price_data_test = numpy.asarray(price_data_test)
    
    def normalize(array):
        std = array.std()
        mean = array.mean()
        return (array - mean) / std, std, mean
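
    # For reference (question 2): the inverse of normalize(), to recover the
    # original values from normalized ones
    def denormalize(array_n, std, mean):
        return array_n * std + mean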
    
    # Normalize a data set
    
    size_data_n, size_data_n_std, size_data_n_mean = normalize(size_data)
    price_data_n, price_data_n_std, price_data_n_mean = normalize(price_data)
    
    size_data_test_n, size_data_test_n_std, size_data_test_n_mean = normalize(size_data_test)
    price_data_test_n, price_data_test_n_std, price_data_test_n_mean = normalize(price_data_test)
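    # NOTE: the test set is normalized with its OWN std/mean here -- this is
    # the part I'm unsure about (should it reuse the training stats instead?)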
    
    # Display a plot
    #plt.plot(size_data, price_data, 'ro', label='Samples data')
    #plt.legend()
    #plt.draw()
    
    samples_number = price_data_n.size
    
    # TF graph input
    X = tf.placeholder("float")
    Y = tf.placeholder("float")
    
    # Create a model
    
    # Set model weights
    W = tf.Variable(numpy.random.randn(), name="weight")
    b = tf.Variable(numpy.random.randn(), name="bias")
    
    # Set parameters
    learning_rate = 0.05
    training_iteration = 200
    
    # Construct a linear model
    model = tf.add(tf.mul(X, W), b)
    
    # Minimize squared errors
    cost_function = tf.reduce_sum(tf.pow(model - Y, 2))/(2 * samples_number) #L2 loss
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function) #Gradient descent
    #optimizer = tf.train.AdagradOptimizer(learning_rate).minimize(cost_function)
    
    # Initialize variables
    init = tf.initialize_all_variables()
    
    # Launch a graph
    with tf.Session() as sess:
        sess.run(init)
    
        display_step = 20
        # Fit all training data
        for iteration in range(training_iteration):
            for (x, y) in zip(size_data_n, price_data_n):
                sess.run(optimizer, feed_dict={X: x, Y: y})
    
            # Display logs per iteration step
            if iteration % display_step == 0:
                print("Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(sess.run(cost_function, feed_dict={X:size_data_n, Y:price_data_n})),\
                "W=", sess.run(W), "b=", sess.run(b))
    
        tuning_cost = sess.run(cost_function, feed_dict={X: size_data_n, Y: price_data_n})
    
        print("Tuning completed:", "cost=", "{:.9f}".format(tuning_cost), "W=", sess.run(W), "b=", sess.run(b))
    
        # Validate a tuning model
    
        testing_cost = sess.run(cost_function, feed_dict={X: size_data_test_n, Y: price_data_test_n})
    
        print("Testing data cost:" , testing_cost)
    
        Y_predicted = sess.run(model, feed_dict={X: size_data_test_n, Y: price_data_test_n})
    
        print("%-20s%-20s%-20s%-20s" % ("Test X", "Actual", "Target", "Error(%)"))
    
        print('Normalized')
    
        for i in range(len(size_data_test_n)):
            err = 100.0 * abs(Y_predicted[i] - price_data_test_n[i]) / abs(price_data_test_n[i])
            print("%-20f%-20f%-20f%-20f" % (size_data_test_n[i], Y_predicted[i], price_data_test_n[i], err))
    
        print('Unnormalized')
    
        for i in range(len(size_data_test_n)):
            orig_size_data_test_i = size_data_test_n[i] * size_data_test_n_std + size_data_test_n_mean
            orig_price_data_test_i = price_data_test_n[i] * price_data_test_n_std + price_data_test_n_mean
    
            # ??? which one is correct for getting unnormalized predicted Y?
    
            #orig_Y_predicted_i = Y_predicted[i] * price_data_n_std + price_data_n_mean
            orig_Y_predicted_i = Y_predicted[i] * price_data_test_n_std + price_data_test_n_mean
    
            orig_err = 100.0 * abs(orig_Y_predicted_i - orig_price_data_test_i) / abs(orig_price_data_test_i)
            print("%-20f%-20f%-20f%-20f" % (orig_size_data_test_i, orig_Y_predicted_i, orig_price_data_test_i, orig_err))
    
        # Display a plot
        plt.figure()
    
        plt.plot(size_data, price_data, 'ro', label='Samples')
        plt.plot(size_data_test, price_data_test, 'go', label='Testing samples')
    
        plt.plot(size_data_test, (sess.run(W) * size_data_test_n + sess.run(b)) * price_data_n_std + price_data_n_mean, label='Fitted test line')
    
        plt.legend()
    
        plt.show()
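
To illustrate question 1, here is a small numpy-only sketch of mine (not from the tutorial) running plain gradient descent by hand on the same y = 5x + 30 relationship. As far as I can tell, the gradient with respect to W scales with mean(x^2), which is a few million for the raw sizes, so with learning_rate = 0.05 every update overshoots and the parameters overflow to nan; the identical loop on normalized data converges:

    import numpy

    # Hand-rolled gradient descent on y = W*x + b with squared-error loss
    def descend(x, y, learning_rate=0.05, steps=100):
        W, b = 0.0, 0.0
        for _ in range(steps):
            err = W * x + b - y
            W = W - learning_rate * (err * x).mean()  # this term grows with mean(x^2)
            b = b - learning_rate * err.mean()
        return W, b

    x = numpy.asarray([2104.0, 1600.0, 2400.0, 1416.0, 3000.0])
    y = 5 * x + 30

    print(descend(x, y))      # raw data: overflows and prints (nan, nan)

    x_n = (x - x.mean()) / x.std()
    y_n = (y - y.mean()) / y.std()
    print(descend(x_n, y_n))  # normalized data: W -> ~1, b -> ~0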
    

0 Answers