How do I predict test data from a trained model?

Asked: 2019-08-13 16:16:37

Tags: python-3.x machine-learning scikit-learn logistic-regression

I have built a logistic regression model to predict the loan status of entries, i.e. Y or N. I think it works on the training data, but when I apply it to the test data it fails.

I think I have narrowed the problem down to the "Married" column: the training data has rows with missing Married values, but the test set is complete.

Here is the function I use for imputation:

def impute_married(cols):
    Married = cols[0]

    if pd.isnull(Married):
        return 'unknownMarriedStatus'
    else:
        return Married

So when this is applied to the test data, unknownMarriedStatus never occurs, which is why it doesn't work for the test set.
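A minimal sketch (toy data, not the asker's frames) of why this happens: `pd.get_dummies` only creates columns for categories that actually occur, so the imputed placeholder column exists in the training dummies but not in the test dummies.

```python
import numpy as np
import pandas as pd

# Hypothetical Married columns: only the training one has a missing value.
train_married = pd.Series(['Yes', 'No', np.nan]).fillna('unknownMarriedStatus')
test_married = pd.Series(['Yes', 'No', 'Yes'])  # no missing values here

# get_dummies emits one column per observed category (minus the first).
train_dummies = pd.get_dummies(train_married, drop_first=True)
test_dummies = pd.get_dummies(test_married, drop_first=True)

print(list(train_dummies.columns))  # ['Yes', 'unknownMarriedStatus']
print(list(test_dummies.columns))   # ['Yes']
```

The test frame ends up one column narrower, which is exactly the 23-vs-24 feature mismatch the error below complains about.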

The next part converts the data for the model.

my_dict = {'0': 'zero', '1': 'one', '2': 'two', '3': 'three', '3+': 'threePlus',
           np.nan: 'missing'}
def convert_data(dataset):
    temp_data = dataset.copy()
    temp_data.Dependents = temp_data.Dependents.map(my_dict)
    temp_data['Gender'] = temp_data[['Gender']].apply(impute_gender,axis=1)
    temp_data['Married'] = temp_data[['Married']].apply(impute_married,axis=1)
    temp_data['Self_Employed'] = temp_data[['Self_Employed']].apply(impute_self_employed,axis=1)
    temp_data['Credit_History'] = temp_data[['Credit_History']].apply(impute_Credit_History,axis=1)

    dependents = pd.get_dummies(temp_data['Dependents'],drop_first=True)
    gender = pd.get_dummies(temp_data['Gender'],drop_first=True)
    married = pd.get_dummies(temp_data['Married'],drop_first=True)
    education = pd.get_dummies(temp_data['Education'],drop_first=True)
    self_employed = pd.get_dummies(temp_data['Self_Employed'],drop_first=True)
    credit_history = pd.get_dummies(temp_data['Credit_History'],drop_first=True)
    property_area = pd.get_dummies(temp_data['Property_Area'],drop_first=True)
    #loan_status = pd.get_dummies(temp_data['Loan_Status'],drop_first=True)
    loan_band = pd.get_dummies(temp_data['Loan_Band'],drop_first=True)



    temp_data.drop(['Loan_ID', 'ApplicantIncome', 'CoapplicantIncome', 'LoanAmount',
                    'Loan_Amount_Term', 'Gender', 'Married', 'Dependents', 'Education',
                    'Self_Employed', 'Credit_History', 'Property_Area', 'Loan_Band'],
                   axis=1, inplace=True)

    temp_data = pd.concat([temp_data,dependents,gender,married, education, self_employed, credit_history, property_area, loan_band ],axis=1)

    temp_data.dropna(inplace=True)
    return temp_data



train_dataset = convert_data(train) 

Then:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    train_dataset.drop('Loan_Status', axis=1),
    train_dataset['Loan_Status'],
    test_size=0.30,
    random_state=101)

Next comes the logistic regression:

from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression(solver="lbfgs")
logmodel.fit(X_train,y_train)
y_pred = logmodel.predict(X_test)

predictions = logmodel.predict(train_dataset.drop('Loan_Status', axis = 1))

Then this gives me the report:

from sklearn.metrics import classification_report
print(classification_report(train_dataset['Loan_Status'],predictions))

This is the output:

              precision    recall  f1-score   support

           N       0.86      0.45      0.59       192
           Y       0.80      0.97      0.87       422

 avg / total       0.82      0.81      0.79       614

Next, the accuracy:

print('Accuracy of logistic regression classifier on test set: {:.2f}'
      .format(logmodel.score(X_test, y_test)))
pd.crosstab(y_test, y_pred, rownames=['Actual Result'],
            colnames=['Predicted Result'])

test_dataset = convert_data(test) 
predictions = logmodel.predict(test_dataset.drop('Loan_Status', axis = 1))

Then this gives me the error:

ValueError                                Traceback (most recent call last)
<ipython-input-144-474a34c1dffa> in <module>()
      1 #predictions = logmodel.predict(test_dataset)
----> 2 predictions = logmodel.predict(test_dataset.drop('Loan_Status', axis = 1))

~/anaconda3_420/lib/python3.5/site-packages/sklearn/linear_model/base.py in predict(self, X)
    322             Predicted class label per sample.
    323         """
--> 324         scores = self.decision_function(X)
    325         if len(scores.shape) == 1:
    326             indices = (scores > 0).astype(np.int)

~/anaconda3_420/lib/python3.5/site-packages/sklearn/linear_model/base.py in decision_function(self, X)
    303         if X.shape[1] != n_features:
    304             raise ValueError("X has %d features per sample; expecting %d"
--> 305                              % (X.shape[1], n_features))
    306 
    307         scores = safe_sparse_dot(X, self.coef_.T,

ValueError: X has 23 features per sample; expecting 24

Here are the shapes of the datasets:

train.shape

(614,17)

train_dataset.shape

(614,25)

test.shape

(367,16)

test_dataset.shape

(367,23)
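One way to make the shapes agree (a sketch with toy frames, not the asker's variables) is to reindex the test matrix to the training feature columns, so any dummy column that never occurred in the test set is added back and filled with 0:

```python
import pandas as pd

# Hypothetical toy stand-ins for the train/test design matrices.
train_X = pd.DataFrame({'Yes': [1, 0], 'unknownMarriedStatus': [0, 1]})
test_X = pd.DataFrame({'Yes': [1, 1]})  # lacks the rare dummy column

# Add training-only columns filled with 0 and drop test-only columns,
# so the test frame has exactly the columns (and order) the model expects.
test_aligned = test_X.reindex(columns=train_X.columns, fill_value=0)

print(list(test_aligned.columns))                     # ['Yes', 'unknownMarriedStatus']
print(test_aligned['unknownMarriedStatus'].tolist())  # [0, 0]
```

Applied here, that would mean reindexing `test_dataset` to the columns of `train_dataset.drop('Loan_Status', axis=1)` before calling `predict`.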

train_dataset.columns

Index(['Loan_Status', 'ApplicantIncome_log', 'CoapplicantIncome_exp',
       'LoanAmount_log', 'one', 'threePlus', 'two', 'zero', 'Male',
       'unknownGender', 'Yes', 'unknownMarriedStatus', 'Not Graduate', 'Yes',
       'unknownEmploymentStatus', 'Yes', 'unknownCredHist', 'Semiurban',
       'Urban', 'B', 'C', 'D', 'E', 'F', 'H'],
      dtype='object')

test_dataset.columns

Index(['ApplicantIncome_log', 'CoapplicantIncome_exp', 'LoanAmount_log', 'one',
       'threePlus', 'two', 'zero', 'Male', 'unknownGender', 'Yes',
       'Not Graduate', 'Yes', 'unknownEmploymentStatus', 'Yes',
       'unknownCredHist', 'Semiurban', 'Urban', 'B', 'C', 'D', 'E', 'F', 'H'],
      dtype='object')

What I want is a logistic regression model fitted on the train data that I can then apply to the test data. Someone told me that my summary statistics are based on the entire test.csv dataset, but that I trained against that dataset, so my scores will be inflated. I don't understand what that means or how to fix it.
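On the inflated-score point: the fix is to fit on the training split only and report metrics on the held-out split only. A self-contained sketch on synthetic data (none of these variables are the asker's):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Synthetic two-class data as a hypothetical stand-in for the loan features.
rng = np.random.RandomState(101)
X = rng.rand(200, 4)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=101)

model = LogisticRegression(solver='lbfgs').fit(X_train, y_train)

# Score only rows the model never saw during fitting; calling
# classification_report on the full dataset (as the code above does with
# train_dataset) mixes in training rows and inflates the numbers.
print(classification_report(y_test, model.predict(X_test)))
```

The same principle explains the advice quoted above: any metric computed over rows the model was fitted on overstates how the model will do on genuinely new data.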

0 Answers:

No answers yet.