Testing a trained neural network with TensorFlow in Python

Date: 2018-01-01 22:06:33

Tags: python tensorflow neural-network

I have an Excel file containing the following columns:

Disp          force        Set-1        Set-2
0             0            0            0
0.000100011   10.85980847  10.79430294  10.89428425
0.000200021   21.71961695  21.58860588  21.7885685
0.000350037   38.00932966  37.780056    38.12999725

To model the above data with a neural network (taking the first two columns as inputs and the next two columns as outputs), I tried to write a simple feed-forward neural network in Python:

import tensorflow as tf
import numpy as np
import pandas as pd

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
#################################################
# In[180]:

# Parameters
learning_rate = 0.01
training_epochs = 20
display_step = 1

# Read data from CSV
a = r'C:\Downloads\international-financial-statistics\DataUpdated.csv'
df = pd.read_csv(a,encoding = "ISO-8859-1")

# Separating out dependent & independent variables

train_x = df[['Disp','force']]
train_y = df[['Set-1','Set-2']]
############################################# added by me
# Keep the fitted scalers: at test time the inputs must be transformed with
# the same statistics, and the network's output rescaled back to the
# original units.
scaler_x = StandardScaler()
scaler_y = StandardScaler()
trainx = scaler_x.fit_transform(train_x)
trainy = scaler_y.fit_transform(train_y)

n_input = 2
n_classes = 2
n_hidden_1 = 40
n_hidden_2 = 40
n_samples = 2100

# tf Graph Input
#Inserts a placeholder for a tensor that will be always fed.
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Set model weights
W_h1 = tf.Variable(tf.random_normal([n_input, n_hidden_1]))
W_h2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]))
W_out = tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
b_h1 = tf.Variable(tf.zeros([n_hidden_1]))
b_h2 = tf.Variable(tf.zeros([n_hidden_2]))
b_out = tf.Variable(tf.zeros([n_classes]))


# Construct a feed-forward model with two hidden layers
layer_1 = tf.add(tf.matmul(x, W_h1), b_h1)
layer_1 = tf.nn.relu(layer_1)
layer_2 = tf.add(tf.matmul(layer_1, W_h2), b_h2)
layer_2 = tf.nn.relu(layer_2)
out_layer = tf.matmul(layer_2, W_out) + b_out

# Mean squared error (reduce_mean already averages over the batch)
cost = tf.reduce_mean(tf.square(out_layer - y))
# Adam optimizer
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        _, c = sess.run([optimizer, cost], feed_dict={x: trainx,y: trainy})

        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c))

    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={x: trainx,y: trainy})
    print(training_cost)

    best = sess.run([out_layer], feed_dict={x: np.array([[0.0001,10.85981]])})
    print(best)

I would like to know the correct way to test the accuracy of the neural network. For example: I want to feed in the inputs 0.000100011; 10.85980847 and retrieve the two associated outputs. I tried to write this, but it gives me bad results (see my code above, in particular the last two lines).

Thanks in advance.

1 Answer:

Answer 0 (score: 0):

Since your output values are continuous, this is a regression problem. You can therefore use root mean squared error (RMSE) as the metric for the error rate, and use cross-validation to evaluate the model so that it is tested on unseen data. A sample example is shown here: https://github.com/naveenkambham/MachineLearningModels/blob/master/NeuralNetwork.py
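
As a concrete illustration (a minimal sketch, not code from the linked example), the last two lines of the question could be fixed like this: hold out a test split before training, transform the test inputs with the scalers fitted on the training data, rescale the predictions back to the original units, and compute RMSE there. The sketch assumes the names from the question's code (df, x, out_layer, sess, and the fitted scaler_x / scaler_y) and would run inside the same tf.Session after training:

from sklearn.metrics import mean_squared_error

# Hold out a test set before training, instead of evaluating on training data
train_x, test_x, train_y, test_y = train_test_split(
    df[['Disp', 'force']], df[['Set-1', 'Set-2']],
    test_size=0.2, random_state=42)

# Transform the test inputs with the statistics learned from the training set
testx = scaler_x.transform(test_x)

# Predict in scaled space, then rescale back to the original units
pred = scaler_y.inverse_transform(sess.run(out_layer, feed_dict={x: testx}))

# RMSE in the original units of Set-1 / Set-2
rmse = np.sqrt(mean_squared_error(test_y, pred))
print("Test RMSE:", rmse)

# Single-point query: scale the raw input the same way before feeding it
raw_point = np.array([[0.000100011, 10.85980847]])
print(scaler_y.inverse_transform(
    sess.run(out_layer, feed_dict={x: scaler_x.transform(raw_point)})))

For cross-validation, the same pattern extends to sklearn.model_selection.KFold: refit the scalers and retrain the network on each fold's training portion, compute the RMSE on the held-out portion, and average the per-fold scores.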