MemoryError with sklearn in Python

Time: 2016-12-14 18:22:50

Tags: python numpy

I'm trying to implement a simple machine-learning application with Python 2.7 and scipy 0.18.1. I'm sharing the sample code and the training-data download link below so you can copy, paste, and run it. My problem is that when execution reaches the line below, I get a "MemoryError":


predicted = model.predict_proba(test_data[features])

I searched the internet but couldn't fix it. I'd appreciate any help.

You can find the sample data at this link: https://www.kaggle.com/c/sf-crime/data

import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn import preprocessing
from sklearn.metrics import log_loss
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression
import numpy as np

# Load Data with pandas, and parse the first column into datetime
train = pd.read_csv('train.csv', parse_dates=['Dates'])
test = pd.read_csv('test.csv', parse_dates=['Dates'])

# Convert crime labels to numbers
le_crime = preprocessing.LabelEncoder()
crime = le_crime.fit_transform(train.Category)

# Get binarized weekdays, districts, and hours.
days = pd.get_dummies(train.DayOfWeek)
district = pd.get_dummies(train.PdDistrict)
hour = train.Dates.dt.hour
hour = pd.get_dummies(hour)

# Build new array
train_data = pd.concat([hour, days, district], axis=1)
train_data['crime'] = crime

# Repeat for test data
days = pd.get_dummies(test.DayOfWeek)
district = pd.get_dummies(test.PdDistrict)

hour = test.Dates.dt.hour
hour = pd.get_dummies(hour)

test_data = pd.concat([hour, days, district], axis=1)

features = ['Friday', 'Monday', 'Saturday', 'Sunday', 'Thursday', 'Tuesday',
            'Wednesday', 'BAYVIEW', 'CENTRAL', 'INGLESIDE', 'MISSION',
            'NORTHERN', 'PARK', 'RICHMOND', 'SOUTHERN', 'TARAVAL', 'TENDERLOIN']

training, validation = train_test_split(train_data, train_size=.60)
model = BernoulliNB()
model.fit(training[features], training['crime'])
predicted = np.array(model.predict_proba(validation[features]))
log_loss(validation['crime'], predicted)

# Logistic Regression for comparison
model = LogisticRegression(C=.01)
model.fit(training[features], training['crime'])
predicted = np.array(model.predict_proba(validation[features]))
log_loss(validation['crime'], predicted)

model = BernoulliNB()
model.fit(train_data[features], train_data['crime'])
predicted = model.predict_proba(test_data[features]) #MemoryError!!!!

# Write results
result = pd.DataFrame(predicted, columns=le_crime.classes_)
result.to_csv('testResult.csv', index=True, index_label='Id')

EDITED: error stack trace: [screenshot]

2 Answers:

Answer 0 (score: 1)

What if you try making the predictions in chunks? For example, you could try:

N_split = 10
# Split the test rows into N_split roughly equal chunks
split_data = np.array_split(test_data[features], N_split)

# Predict each chunk separately and collect the results
split_predicted = []
for data in split_data:
    split_predicted.append(model.predict_proba(data))

# Stitch the chunk predictions back into one array
predicted = np.concatenate(split_predicted)
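
np.array_split copes with row counts that are not evenly divisible by N_split, and each predict_proba call then only has to allocate a chunk-sized probability array, so peak memory drops to roughly 1/N_split of the all-at-once call. If the error persists, increase N_split.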

Answer 1 (score: 0)

There is not enough RAM for the process you are trying to run.
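
As a rough illustration (a minimal sketch, assuming the question's test_data and le_crime are still in scope; the variable names below are made up for this example), you can estimate the size of the probability matrix that predict_proba has to allocate:

n_samples = len(test_data)           # rows in the Kaggle test set
n_classes = len(le_crime.classes_)   # number of crime categories
# predict_proba returns an (n_samples, n_classes) float64 array and builds
# a few intermediate arrays of similar size along the way.
output_mb = n_samples * n_classes * 8 / 1e6
print('output array alone: about %.0f MB' % output_mb)

On a 32-bit Python 2.7 build, that array plus the intermediates and the dataframes already loaded can be enough to exhaust the process's address space; chunking the prediction as in the answer above, or moving to a 64-bit Python with more RAM, are the usual ways out.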