Neural network loss becomes NaN during training

Asked: 2020-06-26 22:45:02

Tags: python pandas keras neural-network regression

I am training a Keras neural network in Python. While the model is training, the loss is NaN, and I don't know why this is happening; there are no NaN values in the input. Here is the code:

    def train_model(self, epochs, batch_size, verbose=1, layer_sizes=[], activation_function='relu',
                    loss='mean_squared_error', optimizer='adam'):
        layer_sizes = list(layer_sizes)
        model = Sequential()
        model.add(Dense(self.features.shape[1], input_dim=self.features.shape[1], kernel_initializer='normal',
                        activation=activation_function))
        for i in range(len(layer_sizes)):
            model.add(Dense(layer_sizes[i], kernel_initializer='normal', activation=activation_function))
        model.add(Dense(self.targets.shape[1], kernel_initializer='normal', activation=activation_function))
        model.compile(loss=loss, optimizer=optimizer)
        model.fit(self.X_train, self.Y_train, epochs=epochs, verbose=verbose, batch_size=batch_size)
        self.model = model
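
For reference, a hypothetical call to this method might look like the following (the instance name and argument values are illustrative only, not taken from the original post):

    # Hypothetical usage of train_model; epochs, batch_size and layer_sizes are example values
    model_instance.train_model(epochs=10, batch_size=128, layer_sizes=[64, 32],
                               activation_function='relu', loss='mean_squared_error',
                               optimizer='adam')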

Training produces the following output:

   128/857336 [..............................] - ETA: 58:15 - loss: nan
   384/857336 [..............................] - ETA: 21:36 - loss: nan
   640/857336 [..............................] - ETA: 14:12 - loss: nan
   896/857336 [..............................] - ETA: 11:01 - loss: nan

and it continues like this.

Testing for NaNs here:

    print(df.isnull().values.any())

False
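
As a side note, `isnull()` does not catch infinite values, which can also drive the loss to NaN. A minimal extra check on the numeric columns (a sketch, assuming `numpy` is available as `np`) could be:

    import numpy as np

    # isnull() misses +/-inf; check the numeric columns explicitly
    numeric = df.select_dtypes(include=[np.number])
    print(np.isinf(numeric.values).any())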

Here is a link to a CSV with sample data:

https://drive.google.com/file/d/1FJqcEmTQ24WebelyLRkGOuPFlSUJt92c/view?usp=sharing

Here is the constructor code:

        if data_file == '':
            self.engine = create_engine(
                'postgresql://{}:{}@{}:{}/{}'.format(Model.user, Model.password, Model.host, Model.port,
                                                     Model.database))
            data = [chunk for chunk in
                    pd.read_sql('select * from "{}"'.format(Model.table), self.engine, chunksize=200000)]
            df = pd.DataFrame(columns=data[0].columns)
            for datum in data:
                df = pd.concat([df, datum])
            df.to_hdf('Cleaned_Data.h5', key='df', mode='w')
        else:
            df = pd.read_hdf(data_file)
        df = df.fillna(0)
        df = df.head(1000)
        df.to_csv('Minimum_sample.csv')
        print(df.isnull().values.any())
        columns = list(df.columns)
        misc_data, self.targets, self.features = columns[0:5], columns[6:9], columns[5:6]
        misc_data.extend(columns[9:10])
        misc_data.extend(columns[12:13])
        misc_data.extend(columns[15:16])
        self.targets.extend(columns[10:12])
        self.targets.extend(columns[13:15])
        self.targets.extend(columns[16:26])
        self.features.extend(columns[73:470])
        df = df[misc_data + self.targets + self.features]
        self.targets = df[self.targets].values
        self.features = df[self.features].values
        self.X_train, self.X_test, self.Y_train, self.Y_test = train_test_split(self.features, self.targets,
                                                                                test_size=test_split_size)

Any help would be greatly appreciated!

1 Answer:

Answer 0 (score: 0)

You need to standardize your input in some way. Try this:

    from sklearn import preprocessing

    # Fit the scaler on the training features only, then apply it to train and test
    scalerx = preprocessing.StandardScaler().fit(self.X_train)
    self.X_train = scalerx.transform(self.X_train)
    self.X_test = scalerx.transform(self.X_test)
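
Since this is a regression problem with multiple continuous targets, the same kind of scaling can also be applied to the targets; a sketch under that assumption (using `inverse_transform` to map predictions back to the original units):

    # Scale the targets with a separate scaler fitted on the training targets
    scalery = preprocessing.StandardScaler().fit(self.Y_train)
    self.Y_train = scalery.transform(self.Y_train)
    self.Y_test = scalery.transform(self.Y_test)

    # At prediction time, undo the target scaling:
    # predictions = scalery.inverse_transform(self.model.predict(self.X_test))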