I'm trying to write my first simple neural network on my own data, based on various tutorials and other information I've found. I'm stuck at the point where I think the model is ready and I try to run it, but when I print the cost function for each epoch, it returns NaN.
My code is:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
df = pd.read_excel("mydataset.xlsx")
# Preparing the dataset, doing some stuff here
df2 = df.dropna(subset=['wl'])
df2 = df2.sample(frac=1)
df2_X = df2[['param1','param2','param3','param4','param5','param6','param7']]
df2_y = df2[['numerical_result_param']]
# Splitting the dataset...
train_X, test_X, train_y, test_y = df2_X[:210], df2_X[210:], df2_y[:210], df2_y[210:]
# Creating model:
X = tf.placeholder("float", shape=[None, train_X.shape[1]])
y = tf.placeholder("float", shape=[None, train_y.shape[1]])
hl_size = 256 # Number of neurons in hidden layer
weights = {
    'hl': tf.Variable(tf.random_normal([train_X.shape[1], hl_size])),
    'out': tf.Variable(tf.random_normal([hl_size, train_y.shape[1]]))
}
biases = {
    'hl': tf.Variable(tf.random_normal([hl_size])),
    'out': tf.Variable(tf.random_normal([train_y.shape[1]]))
}
def multilayer_perceptron(x):
    hl_layer = tf.add(tf.matmul(x, weights['hl']), biases['hl'])
    hl_layer = tf.nn.relu(hl_layer)
    out_layer = tf.matmul(hl_layer, weights['out']) + biases['out']
    return out_layer
logits = multilayer_perceptron(X)
hm_epochs = 100 # Number of epochs
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
optimizer = tf.train.AdamOptimizer(0.01).minimize(cost) # Training optimizer
# Running the session
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    for epoch in range(hm_epochs):
        epoch_loss = 0
        _, c = sess.run([optimizer, cost], feed_dict={X: train_X, y: train_y})
        epoch_loss += c
        print('Epoch', epoch, 'out of', hm_epochs, 'loss:', epoch_loss)
It returns:
Epoch 0 out of 100 loss: nan
Epoch 1 out of 100 loss: nan
etc.
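For what it's worth, here is a quick sanity check I can run on the inputs before training (just a minimal sketch reusing the DataFrames above; it assumes all selected columns are numeric and is not part of the model itself):

# Sanity check (assumption: the feature/target columns are all numeric).
# A NaN loss is often caused by NaN or infinite values in the inputs,
# so count missing values per column and check for non-finite entries.
print(train_X.isna().sum())
print(train_y.isna().sum())
print(np.isfinite(train_X.values).all(), np.isfinite(train_y.values).all())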
I'd appreciate any help and ideas about what I'm doing wrong!