Python Naive Bayes heart disease prediction, results are inaccurate

Time: 2017-06-16 02:12:51

Tags: python machine-learning scikit-learn classification naivebayes

I am trying to write a heart disease prediction program using Naive Bayes. When I finished the classifier, cross-validation reported a mean accuracy of 80%, but when I try to predict a given sample the prediction is completely wrong! The dataset is the heart disease dataset from the UCI repository and contains 303 samples. There are two classes, 0: healthy and 1: sick. When I try to predict a sample taken from the dataset itself, it does not return its true value, except for a very few samples. Here is the code:

import pandas as pd
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.preprocessing import Imputer, StandardScaler


class Predict:
    def Read_Clean(self,dataset):
        header_row = ['Age', 'Gender', 'Chest_Pain', 'Resting_Blood_Pressure', 'Serum_Cholestrol',
                      'Fasting_Blood_Sugar', 'Resting_ECG', 'Max_Heart_Rate',
                      'Exercise_Induced_Angina', 'OldPeak',
                      'Slope', 'CA', 'Thal', 'Num']
        df = pd.read_csv(dataset, names=header_row)
        df = df.replace('[?]', np.nan, regex=True)
        df = pd.DataFrame(Imputer(missing_values='NaN', strategy='mean', axis=0)
                          .fit_transform(df), columns=header_row)
        df = df.astype(float)
        return df

    def Train_Test_Split_data(self,dataset):
        Y = dataset['Num'].apply(lambda x: 1 if x > 0 else 0)
        X = dataset.drop('Num', axis=1)
        validation_size = 0.20
        seed = 42
        X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=validation_size, random_state=seed)
        return X_train, X_test, Y_train, Y_test

    def Scaler(self, X_train, X_test):
        scaler = StandardScaler()
        X_train = scaler.fit_transform(X_train)
        X_test = scaler.transform(X_test)
        return X_train, X_test

    def Cross_Validate(self, clf, X_train, Y_train, cv=5):
        scores = cross_val_score(clf, X_train, Y_train, cv=cv, scoring='f1')
        score = scores.mean()
        print("CV scores mean: %.4f " % (score))
        return score, scores

    def Fit_Score(self, clf, X_train, Y_train, X_test, Y_test, label='x'):
        clf.fit(X_train, Y_train)
        fit_score = clf.score(X_train, Y_train)
        pred_score = clf.score(X_test, Y_test)
        print("%s: fit score %.5f, predict score %.5f" % (label, fit_score, pred_score))
        return pred_score

    def ReturnPredictionValue(self, clf, sample):
        y = clf.predict([sample])
        return y[0]

    def PredictionMain(self, sample, dataset_path='dataset/processed.cleveland.data'):
        data = self.Read_Clean(dataset_path)
        X_train, X_test, Y_train, Y_test = self.Train_Test_Split_data(data)
        X_train, X_test = self.Scaler(X_train, X_test)
        self.NB = GaussianNB()
        self.Fit_Score(self.NB, X_train, Y_train, X_test, Y_test, label='NB')
        self.Cross_Validate(self.NB, X_train, Y_train, 10)
        return self.ReturnPredictionValue(self.NB, sample)

When I run:

if __name__ == '__main__':
    sample = [41.0, 0.0, 2.0, 130.0, 204.0, 0.0, 2.0, 172.0, 0.0, 1.4, 1.0, 0.0, 3.0]
    p = Predict()
    print("Prediction value: {}".format(p.PredictionMain(sample)))

the result is:

NB: fit score 0.84711, predict score 0.83607
CV scores mean: 0.8000
Prediction value: 1

I get 1 instead of 0 (and this sample is one of the dataset's own samples). I did this for several samples from the dataset and most of the time I got the wrong result, as if the accuracy were not 80%!

Any help would be appreciated. Thanks in advance.

EDIT: solved the problem by using a Pipeline. The final code is:

import pandas as pd
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.preprocessing import Imputer, StandardScaler, OneHotEncoder
from sklearn.pipeline import Pipeline

class Predict:
    def __init__(self):
        self.X = []
        self.Y = []

    def Read_Clean(self,dataset):
        header_row = ['Age', 'Gender', 'Chest_Pain', 'Resting_Blood_Pressure', 'Serum_Cholestrol',
                      'Fasting_Blood_Sugar', 'Resting_ECG', 'Max_Heart_Rate',
                      'Exercise_Induced_Angina', 'OldPeak',
                      'Slope', 'CA', 'Thal', 'Num']
        df = pd.read_csv(dataset, names=header_row)
        df = df.replace('[?]', np.nan, regex=True)
        df = pd.DataFrame(Imputer(missing_values='NaN', strategy='mean', axis=0)
                          .fit_transform(df), columns=header_row)
        df = df.astype(float)
        return df

    def Split_Dataset(self, df):
        self.Y = df['Num'].apply(lambda x: 1 if x > 0 else 0)
        self.X = df.drop('Num', axis=1)

    def Create_Pipeline(self):
        estimators = []
        estimators.append(('standardize', StandardScaler()))
        estimators.append(('bayes', GaussianNB()))
        model = Pipeline(estimators)
        return model

    def Cross_Validate(self, clf, cv=5):
        scores = cross_val_score(clf, self.X, self.Y, cv=cv, scoring='f1')
        score = scores.mean()
        print("CV scores mean: %.4f " % (score))

    def Fit_Score(self, clf, label='x'):
        clf.fit(self.X, self.Y)
        fit_score = clf.score(self.X, self.Y)
        print("%s: fit score %.5f" % (label, fit_score))

    def ReturnPredictionValue(self, clf, sample):
        y = clf.predict([sample])
        return y[0]

    def PredictionMain(self, sample, dataset_path='dataset/processed.cleveland.data'):
        print "dataset: "+ dataset_path
        data = self.Read_Clean(dataset_path)
        self.Split_Dataset(data)
        self.model = self.Create_Pipeline()
        self.Fit_Score(self.model, label='NB')
        self.Cross_Validate(self.model, 10)
        return self.ReturnPredictionValue(self.model, sample)

Now predicting the same sample from the question returns [0], which is the true value. In fact, by running the following method:

def CheckTrue(self):
    # note: requires "from sklearn.model_selection import cross_val_predict"
    clf = self.Create_Pipeline()
    out = cross_val_predict(clf, self.X, self.Y)
    p = [out == self.Y]
    c = 0
    for i in range(303):
        if p[0][i] == True:
            c += 1
    print("Samples with true values: {}".format(c))

With the pipeline code I get 249 samples predicted with their true value, whereas before I only got 150.
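Incidentally, the counting loop in CheckTrue can be replaced by summing the boolean comparison directly; a minimal sketch under the same assumptions (the self.X and self.Y attributes and the cross_val_predict import):

def CheckTrue(self):
    clf = self.Create_Pipeline()
    out = cross_val_predict(clf, self.X, self.Y)
    # out == self.Y gives a boolean Series; its sum is the number of correct predictions
    print("Samples with true values: {}".format((out == self.Y).sum()))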

1 Answer:

Answer 0 (score: 2)

You are not applying the StandardScaler to the sample. The classifier expects scaled data, because it was trained on the output of StandardScaler.transform, but the sample is not scaled in the same way as the training data.
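In the original code that would mean reusing the fitted scaler to transform the sample before predicting. A minimal sketch, assuming scaler is the StandardScaler fitted on X_train in Scaler() (keeping a reference to it is not part of the original code) and clf is the fitted GaussianNB:

# Scale the raw sample with the scaler fitted on the training data,
# so the sample and the training data share the same scale.
sample_scaled = scaler.transform([sample])
y = clf.predict(sample_scaled)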

It is easy to make mistakes like this when combining several steps (scaling, preprocessing, classification) by hand. To avoid this kind of problem, it is better to use a scikit-learn Pipeline.
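A minimal sketch of that suggestion (essentially what the edited code above ends up doing); X_train, Y_train and sample are assumed to come from the question:

from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Scaling and classification bundled into a single estimator: fit() learns the
# scaling from the training data, and predict() applies it to any raw sample.
model = Pipeline([('standardize', StandardScaler()),
                  ('bayes', GaussianNB())])
model.fit(X_train, Y_train)
print(model.predict([sample]))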