I have been struggling with this Titanic survival problem. I set x to the passengers and y to the survivors, but the problem is that I can't get y_pred (i.e. the predicted results): every value comes out as 0, so all my predictions are 0. It would help me a lot if someone could sort this out. I'm a beginner and this is my first classifier problem.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('C:/Users/Umer/train.csv')
x = df['PassengerId'].values.reshape(-1,1)
y = df['Survived']
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)
x_test = sc_x.transform(x_test)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
#predicting the test set results
y_pred = classifier.predict(x_test)
Answer (score: 2)
I couldn't reproduce the same result. In fact, I copy-pasted your code and did not get all zeros as you describe; instead I got:
[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0]
However, I noticed a few things in your approach that you might want to know about:
1. The default separator in Pandas read_csv is ',', so if your dataset's columns are tab-separated (as in the copy I have), you should specify the separator like this:

df = pd.read_csv('titanic.csv', sep='\t')
2. PassengerId carries no useful information that the model could use to predict Survived; it is just a sequential number that increases by one with each new passenger. In general, for classification you want to use every feature the model can learn from (unless some features are redundant and add no information), especially in a multivariate dataset like this one (see the short check sketched after this list).
3. There is no need to scale PassengerId, because feature scaling is normally used when features differ in magnitude, units, and range (e.g. 5 kg vs. 5000 g); as mentioned above, in your case it is just an incrementing integer that carries no real information for the model.
4. One last thing: you should convert the data to float before passing it to StandardScaler, to avoid the following warning:

DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.

So do the conversion right from the start:

x = df['PassengerId'].values.astype(float).reshape(-1,1)
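To make point 2 concrete, here is a minimal sketch (assuming the standard Kaggle Titanic train.csv with columns such as Sex, Pclass, Fare and PassengerId) of how you could check which columns actually carry information about Survived before picking the model's features:

import pandas as pd

df = pd.read_csv('train.csv', sep=',')
# survival rate per group: large differences between groups suggest an informative feature
print(df.groupby('Sex')['Survived'].mean())
print(df.groupby('Pclass')['Survived'].mean())
# correlation of numeric columns with Survived; for PassengerId it should be close to zero
print(df[['Survived', 'Fare', 'PassengerId']].corr()['Survived'])

Columns whose groups show clearly different survival rates (or a noticeably non-zero correlation) are the ones worth giving to the classifier.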
Finally, if you still get the same result, please add a link to your dataset.
Update: after you provided the dataset, it turned out that the result you are getting is correct, and that is again because of reason 2 above (i.e. PassengerId provides no useful information to the model, so it cannot predict properly!).
You can test this yourself by comparing the log loss before and after adding more features from the dataset:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

df = pd.read_csv('train.csv', sep=',')
x = df['PassengerId'].values.reshape(-1,1)
y = df['Survived']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
                                                 random_state = 0)
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
y_pred_train = classifier.predict(x_train)
# calculate and print the loss function using only the PassengerId
print(log_loss(y_train, y_pred_train))
# predicting the test set results
y_pred = classifier.predict(x_test)
print(y_pred)
Output:
13.33982681120802
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0]
Now, using more of the features that are supposed to be informative:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

df = pd.read_csv('train.csv', sep=',')
# encode the words female and male as 0 and 1
df['Sex'].replace(['female','male'], [0,1], inplace=True)
# try three features that you think are informative to the model
# so it can learn from them
x = df[['Fare', 'Pclass', 'Sex']].values.reshape(-1,3)
y = df['Survived']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
                                                 random_state = 0)
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
y_pred_train = classifier.predict(x_train)
# calculate and print the loss function with the above 3 features
print(log_loss(y_train, y_pred_train))
# predicting the test set results
y_pred = classifier.predict(x_test)
print(y_pred)
Output:
7.238735137632405
[0 0 0 1 1 0 1 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 0 0 1 1 0 1 1 1 0 1 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 0 0 0
0 1 1 0 0 0 0 0 1 0 0 1 1 1 1 0 0 0 0 1 1 0 1 0 0 0 0 0 0 0 1 1 1 1 0 1 0
1 0 1 0 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 1 1 0 1
1 0 0 1 1 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 1 0 0 0 0 0
0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 1 0 1
1]
Conclusion:
As you can see, the log loss value is better (lower than before) and the predictions are now much more reasonable!
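One further note, as a hedged sketch rather than part of the answer's own code: log_loss is normally computed on predicted probabilities (predict_proba) rather than on hard 0/1 labels, and plain accuracy is often a simpler number to compare. Assuming the same train.csv and the same three features, the comparison could also be done like this:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss

df = pd.read_csv('train.csv', sep=',')
# encode Sex as 0/1 (same idea as the replace(...) used above)
df['Sex'] = df['Sex'].map({'female': 0, 'male': 1})

x = df[['Fare', 'Pclass', 'Sex']].values
y = df['Survived']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

classifier = LogisticRegression()
classifier.fit(x_train, y_train)

# log loss on predicted class probabilities (the usual way to use log_loss)
print(log_loss(y_test, classifier.predict_proba(x_test)))
# plain accuracy on the held-out test set
print(accuracy_score(y_test, classifier.predict(x_test)))

Either comparison should point the same way as the log-loss numbers above: the informative features beat PassengerId alone.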