Hello, programmers of the world. I'm having a problem feeding data into a machine learning model.
I read a CSV file into Python with pandas, split it into training and test sets, and then scaled the result with StandardScaler. For some reason, by the time I get to the feeding step, my training data contains NaNs. PS: I'm fairly sure it's not because I have missing data, but because I have infinite values somewhere.
Here is the code I have...
# Importing and organizing required packages and libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Reading in all of the CSV files created from preprocessing.py
dataframe2 = pd.read_csv('dataframe2.csv')
dataframe3 = pd.read_csv('dataframe3.csv')
dataframe4 = pd.read_csv('dataframe4.csv')
dataframe5 = pd.read_csv('dataframe5.csv')
# Function used for creating class labels
def labelCreation(dataframe):
    labels = []
    index = dataframe['LoC'].index.values
    for i in range(len(index)):
        if str(dataframe.iloc[i]['Unnamed: 0']) == str(dataframe.iloc[i]['Replacing_line_number']):
            labels.append('1')
        else:
            labels.append('0')
    return labels
# Picking features for training
def features(dataframe):
    X = dataframe[['Similar_Chars', 'Similar_Tokens', 'Edit_Distance', 'LoC_SemiColon',
                   'Replacement_Line_SemiColon', 'LoC_Open_Bracket_Char',
                   'Replacement_Line_Open_Bracket_Char', 'LoC_Close_Bracket_Char',
                   'Replacement_Line_Close_Bracket_Char']]
    return X
# Splitting the data into training and test sets
X_train, X_test, Y_train, Y_test = train_test_split(features(dataframe = dataframe2), labelCreation(dataframe = dataframe2), test_size=0.2)
#X_train, X_test, Y_train, Y_test = train_test_split(features(dataframe = dataframe3), labelCreation(dataframe = dataframe3), test_size=0.2)
#X_train, X_test, Y_train, Y_test = train_test_split(features(dataframe = dataframe4), labelCreation(dataframe = dataframe4), test_size=0.2)
#X_train, X_test, Y_train, Y_test = train_test_split(features(dataframe = dataframe5), labelCreation(dataframe = dataframe5), test_size=0.2)
# Scaling is added in order to get an optimized result
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Feeding the data into a random forest classifier model
rfc = RandomForestClassifier(n_estimators = 200)
rfc.fit(X_train, Y_train)
pred_rfc = rfc.predict(X_test)

# Let's see how well the model performed
print(classification_report(Y_test, pred_rfc))
print(confusion_matrix(Y_test, pred_rfc))

# Feeding the data into a neural network model
mlpc = MLPClassifier(hidden_layer_sizes=(11, 11, 11), max_iter=500)
mlpc.fit(X_train, Y_train)
pred_mlpc = mlpc.predict(X_test)

# Let's see how well the model performed
print(classification_report(Y_test, pred_mlpc))
print(confusion_matrix(Y_test, pred_mlpc))
When I run all of the code above and then enter X_train[:10], I get:
array([[-0.49869515, -0.39609005, -1.2919533 , -0.96747226, 0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[-0.49869515, -0.39609005, 1.22954406, 1.03362137, 0.74307391,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan],
[-0.49869515, -0.39609005, -0.67191297, -0.96747226, -1.34576115,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan],
[ 0.09153914, -0.39609005, -0.75458501, 1.03362137, 0.74307391,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[-0.49869515, -0.39609005, -0.50656888, -0.96747226, 0.74307391,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[-0.49869515, -0.39609005, -0.79592103, -0.96747226, 0.74307391,
1.02449721, -1.67379807, 1.06693051, -1.72395057],
[ 0.68177344, 2.20020466, 0.48549566, -0.96747226, -1.34576115,
1.02449721, -1.67379807, 1.06693051, -1.72395057],
[-0.20357801, -0.39609005, -0.58924092, 1.03362137, 0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304]])
And when I run X_test[:10], I get a similar result:
array([[ 3.04271061, 1.33477309, -2.11867374, 1.03362137, 0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[-0.49869515, 0.46934152, -0.13454468, -0.96747226, -1.34576115,
1.02449721, 0.59744363, -0.93726817, 0.58006304],
[ 0.09153914, -0.39609005, -0.75458501, 1.03362137, 0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[-0.20357801, -0.39609005, 1.43622417, 1.03362137, -1.34576115,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan],
[-0.49869515, -0.39609005, -1.45729739, -0.96747226, -1.34576115,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[ 1.27200773, 2.20020466, -0.25855274, 1.03362137, 0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[-0.20357801, -0.39609005, -1.12660921, 1.03362137, -1.34576115,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan],
[-0.49869515, -0.39609005, -0.96126512, -0.96747226, -1.34576115,
-0.97608856, 0.59744363, -0.93726817, 0.58006304]])
The point is, I have no idea why these NaNs are there. I can only speculate that I might have infinite values somewhere, because I made sure I don't have any missing values.
I hope this gives enough background on my problem. If anyone can lend a hand, it would be greatly appreciated.
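For reference, one quick way to check whether the raw feature columns already contain NaN or infinite values before scaling (a diagnostic sketch with illustrative data, not the actual dataframes from the question):

```python
import numpy as np
import pandas as pd

# Hypothetical small frame standing in for features(dataframe2)
X = pd.DataFrame({
    'Similar_Chars': [3.0, np.nan, 5.0],
    'Edit_Distance': [0.2, np.inf, 0.4],
})

print(X.isna().sum())    # NaN count per column
print(np.isinf(X).sum()) # inf count per column

# Row indices that contain either a NaN or an inf
bad_rows = X.index[X.isna().any(axis=1) | np.isinf(X).any(axis=1)]
print(list(bad_rows))    # [1]
```

Running a check like this on each dataframe before train_test_split would confirm whether the NaNs originate in the CSV files or are introduced later.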
Answer 0 (score: 1)
I ran into a similar problem myself, reading NaNs out of a csv file after loading it into a DataFrame. My problem was that the information being written to the csv file already contained NaNs, which caused the same issue. One option you have is to simply search the csv file for NaNs and see whether that is where the problem lies. In any case, if you still want to pass the data through the neural network without errors, you can just remove those rows from the dataset. I loaded my data with numpy:
dataset = np.loadtxt("./CSV Files/Dataset.csv", delimiter=",")
dataset = dataset[~np.any(np.isnan(dataset), axis=1)]  # keep only rows with no NaN
The second line searches the original array for rows containing NaN and filters them out, so the data can be fed through the neural network. My dataset was a 2D array, so if a row contained a NaN element, the entire row was removed. One caveat: if you keep your ground-truth labels in a separate file and they correspond to the NaN rows, you will want to remove those labels as well. All you have to do is get the row indices from the dataset and delete the elements at those indices from the ground-truth list:
nanIndex = np.argwhere(np.isnan(dataset))       # (row, col) pairs of every NaN
nanIndex = np.delete(nanIndex, 1, 1)            # drop the column index, keep the row
nanIndex = np.unique(nanIndex)                  # one entry per affected row
truthValues = np.delete(truthValues, nanIndex)  # remove the matching labels
where truthValues is your 2D list of labels (again, this is for the 2D case; it is slightly different if it is only 1D). What this code does is create a 2D array of the NaN locations in your dataset, which I then reduce down to just the row indices, keeping only unique rows. For example, nanIndex starts out as (after line 1):
[[153 0]
[153 1]
[153 2]
[154 0]
[154 1]]
and is converted to (after line 2):
[[153]
[153]
[153]
[154]
[154]]
and finally becomes (after line 3):
[[153]
[154]]
These row positions are then deleted from the ground-truth array in line 4.
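The walk-through above can be reproduced on a small toy array (illustrative data, not from the original post):

```python
import numpy as np

dataset = np.array([[1.0, 2.0, 3.0],
                    [np.nan, np.nan, np.nan],
                    [4.0, 5.0, 6.0],
                    [np.nan, 7.0, 8.0]])
truthValues = np.array([0, 1, 0, 1])

nanIndex = np.argwhere(np.isnan(dataset))  # (row, col) pairs of every NaN
nanIndex = np.delete(nanIndex, 1, 1)       # keep only the row indices
nanIndex = np.unique(nanIndex)             # one entry per affected row
print(nanIndex)                            # [1 3]

truthValues = np.delete(truthValues, nanIndex)        # drop matching labels
dataset = dataset[~np.any(np.isnan(dataset), axis=1)] # drop NaN rows
print(dataset.shape, truthValues)          # (2, 3) [0 0]
```

Rows 1 and 3 contain NaNs, so both the rows and their labels are removed, leaving the features and labels aligned.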
I hope this helps you with your problem. I know it doesn't give you a definitive answer as to why there are NaNs in your DataFrame, but it should help you get past not being able to feed the data through your neural network. This may not be the most efficient way to get rid of NaNs in a 2D array, but it works, so if anyone has a better approach, feel free to let me know!
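Since the question loads the data with pandas rather than np.loadtxt, the same row-dropping with labels kept aligned can also be sketched directly on a DataFrame (illustrative column names, not the poster's actual ones):

```python
import numpy as np
import pandas as pd

# Toy feature frame and matching labels; row 1 contains a NaN
X = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [4.0, 5.0, 6.0]})
y = pd.Series(['0', '1', '0'])

mask = X.notna().all(axis=1)  # True for rows that contain no NaN
X_clean = X[mask]
y_clean = y[mask]             # the same boolean mask keeps labels aligned

print(X_clean.shape, list(y_clean))  # (2, 2) ['0', '0']
```

Applying such a mask before train_test_split would sidestep the NaNs entirely, though it still would not explain where they came from.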