Neural network for the California housing data

Asked: 2018-12-07 15:19:04

Tags: python machine-learning keras neural-network

I am trying to write a neural network that trains on the California housing dataset, which I got from Aurélien Géron's GitHub. However, when I run the code, the network does not train and the loss = nan. Can someone explain what I am doing wrong? Best regards, Robin

Link to the csv file: https://github.com/ageron/handson-ml/tree/master/datasets/housing

My code:

import numpy
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense


# load dataset
df = pd.read_csv("housing.csv", delimiter=",", header=0)
# split into input (X) and output (Y) variables
Y = df["median_house_value"].values
X = df.drop("median_house_value", axis=1)
# Inland / Not Inland -> True / False = 1 / 0
X["ocean_proximity"] = X["ocean_proximity"] == "INLAND"
X = X.values


X = X.astype(float)
Y = Y.astype(float)

model = Sequential()
model.add(Dense(100, activation="relu", input_dim=9))
model.add(Dense(1, activation="linear"))
# Compile model
model.compile(loss="mean_squared_error", optimizer="adam")


model.fit(X, Y, epochs=50, batch_size=1000, verbose=1)

3 Answers:

Answer 0 (score: 1)

I found the error: there are missing values in the "total_bedrooms" column.
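A minimal sketch of how to confirm and fix this (assuming housing.csv is loaded as in the question; the answer only identifies the cause, so dropping the affected rows is just one possible fix):

import pandas as pd

df = pd.read_csv("housing.csv", delimiter=",", header=0)

# count missing values per column; only total_bedrooms should be affected
print(df.isnull().sum())

# one possible fix: drop the rows with a missing total_bedrooms value
df = df.dropna(subset=["total_bedrooms"])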

Answer 1 (score: 1)

You need to remove the NaN values from your data.

After a quick look at the data, you also need to normalize it (as you should whenever you work with neural nets, to help convergence).

For that you can use a Standard Scaler, Min-Max Scaler, etc.
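A sketch of what that could look like with scikit-learn (assuming X and Y are the float arrays built in the question's code; StandardScaler is just one of the options the answer mentions):

from sklearn.preprocessing import StandardScaler

# standardize the features to zero mean and unit variance
X = StandardScaler().fit_transform(X)

# scaling the target as well keeps the MSE loss in a small, stable range
Y = StandardScaler().fit_transform(Y.reshape(-1, 1)).ravel()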

Answer 2 (score: 1)

The nan values in your DataFrame are causing this. Drop the rows that contain nan values and normalize your data:

df = df[~df.isnull().any(axis=1)]  # drop every row that contains a nan value
# min-max scale all columns except the last (ocean_proximity); note that this
# also scales the target median_house_value into [0, 1]
df.iloc[:, :-1] = (df.iloc[:, :-1] - df.iloc[:, :-1].min()) / (df.iloc[:, :-1].max() - df.iloc[:, :-1].min())

You will get:

Epoch 1/50
 1000/20433 [>.............................] - ETA: 3s - loss: 0.1732
20433/20433 [==============================] - 0s 11us/step - loss: 0.1001
Epoch 2/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0527
20433/20433 [==============================] - 0s 3us/step - loss: 0.0430
Epoch 3/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0388
20433/20433 [==============================] - 0s 2us/step - loss: 0.0338
Epoch 4/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0301
20433/20433 [==============================] - 0s 2us/step - loss: 0.0288
Epoch 5/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0300
20433/20433 [==============================] - 0s 2us/step - loss: 0.0259
Epoch 6/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0235
20433/20433 [==============================] - 0s 3us/step - loss: 0.0238
Epoch 7/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0242
20433/20433 [==============================] - 0s 2us/step - loss: 0.0225
Epoch 8/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0213
20433/20433 [==============================] - 0s 2us/step - loss: 0.0218
Epoch 9/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0228
20433/20433 [==============================] - 0s 2us/step - loss: 0.0214
Epoch 10/50
 1000/20433 [>.............................] - ETA: 0s - loss: 0.0206
20433/20433 [==============================] - 0s 2us/step - loss: 0.0211