Classifier with 3000 labels and 1,000,000 rows, memory error

Time: 2018-03-22 12:36:11

Tags: python scikit-learn nlp classification

I am building a classifier on a dataset of 10^6 rows, each roughly 15 words, with about 3000 labels in total. I have already done the preprocessing (stemming, splitting, etc.). My Windows is 64-bit and I installed the 64-bit version of Python as well. I have 16 GB of RAM and an i7 processor. You will find the whole script at the bottom.

The problem is a memory error, and I don't know how to solve it. My bag of words shouldn't grow much with a larger dataset (there is only a limited number of distinct words), but a 10^6 x 15000 matrix (I built my bag of words with a 15000-word maximum) is still really large. Can anyone suggest the best way to solve this? Is there a way to split the data and process it in batches?
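Back-of-the-envelope (a quick sketch using the numbers from my question, assuming 8-byte float64 entries), the dense matrix alone would need far more than my 16 GB of RAM:

```python
# Hypothetical size check: rows and max_features are taken from the question.
rows, features, bytes_per_float = 10**6, 15_000, 8
dense_gib = rows * features * bytes_per_float / 1024**3
print(round(dense_gib, 1))  # ~111.8 GiB for a dense float64 matrix
```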

import numpy as np
import pandas as pd
import re
from nltk.stem.snowball import SnowballStemmer
from sklearn.preprocessing import LabelEncoder
#from sklearn.feature_extraction import DictVectorizer
from stop_words import get_stop_words

stop_words = get_stop_words('german')

# Importing the dataset
df = pd.read_excel('filename')  # read_excel takes no delimiter/quoting arguments
df = df.sample(frac=1).reset_index(drop=True)
# Rename the columns for clarity
namenKolommen =  list(df.columns.values)
newcols = {
        namenKolommen[2] : 'Short Description 1',
        namenKolommen[3] : 'Short Description 2',
        namenKolommen[4] : 'Type Description',
        namenKolommen[5] : 'Long Description',
        namenKolommen[11] : 'Manufacturer',
        namenKolommen[7] : 'L1',
        }
df.rename(columns = newcols, inplace=True)
print('Start corpus')

AllLabels = df['L1'] 

le1_y = LabelEncoder()
y = le1_y.fit_transform(AllLabels)

Text_input = df['Short Description 1'].fillna('') + ' ' + df['Short Description 2'].fillna('')+ ' ' + df['Type Description'].fillna('') + ' ' + df['Long Description'].fillna('') + ' ' + df['Manufacturer'].fillna('')
Text_input.to_csv('Opgeschoonde lijst.csv')


corpus = []
stemmer = SnowballStemmer("german")   # create once, not on every iteration
stop_words_set = set(stop_words)      # set membership is O(1)

for i in range(len(Text_input)):
    # str() guards against non-string cells in the source data
    review = re.sub('[^a-zA-Züä0-9()ß-]', ' ', str(Text_input[i]))
    review = review.lower().split()
    review = [stemmer.stem(word) for word in review if word not in stop_words_set]
    corpus.append(' '.join(review))

import pickle
with open('Opgeschoonde_Lijst_Met_Stemming', 'wb') as fp:
    pickle.dump(corpus, fp)

print('Start predicting model')

from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features = 15000)
X = cv.fit_transform(corpus).toarray()

# split into test and train sets
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 18)

from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)

#Predicting the test results
y_pred = classifier.predict(X_test)
y_pred_strings = le1_y.inverse_transform(y_pred)

#Making the Confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
accuracy1 = np.trace(cm) / len(X_test)

1 Answer:

Answer 0: (score: 1)

You should look at the out-of-core classification section of the user guide in the scikit-learn documentation.

Simply put, some algorithms (i.e. not all of them) support online classification (and regression) through the partial_fit method.
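A minimal sketch of that approach (assumptions: HashingVectorizer and SGDClassifier stand in for your CountVectorizer/LogisticRegression, and the corpus and labels here are synthetic placeholders). HashingVectorizer has no vocabulary to fit, so each batch can be transformed independently, and SGDClassifier learns incrementally via partial_fit:

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Synthetic stand-ins for the real corpus and labels (illustrative only)
corpus = ['gut produkt %d' % i for i in range(1000)]
labels = np.array([i % 3 for i in range(1000)])
all_classes = np.unique(labels)

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = SGDClassifier(random_state=0)

batch_size = 200
for start in range(0, len(corpus), batch_size):
    # transform returns a sparse matrix, so memory stays bounded per batch
    X_batch = vectorizer.transform(corpus[start:start + batch_size])
    y_batch = labels[start:start + batch_size]
    # the full class list must be passed on the first partial_fit call
    clf.partial_fit(X_batch, y_batch, classes=all_classes)
```

Because the vectorizer is stateless, you could read the rows from disk in chunks (e.g. with pandas `chunksize`) instead of holding the whole corpus in memory at once.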