Python MemoryError when running an NLTK classifier

Asked: 2014-02-11 22:00:40

Tags: python memory-management out-of-memory classification nltk

I am running a classifier over a large body of text, and it keeps hitting a memory error: Python's memory usage climbs to roughly 2 GB and then the error is raised.

I understand that loading this much data and then trying to process it all at once is what causes the error; I just don't know a way around it, and I am quite new to Python. I think I need to "chunk" the text input or process it line by line, but I am not sure how to actually implement that in my code. Any help would be great.
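For illustration, here is a minimal sketch of the line-by-line idea, assuming NLTK 3 and a placeholder filename positive.txt (the real paths are elided in the code below). Iterating over a file object streams one line at a time, so the whole file never sits in memory the way readlines() forces it to:

import nltk

# Sketch: build a frequency distribution one line at a time instead of
# readlines(), which loads the entire file into a list at once.
# 'positive.txt' is a placeholder name, not a path from the original post.
wordfreq = nltk.FreqDist()
with open('positive.txt', 'r') as f:
    for line in f:                        # the file object streams line by line
        for token in line.lower().split():
            wordfreq[token] += 1          # NLTK 3's FreqDist is a Counter subclass

print(wordfreq.most_common(10))           # the ten most frequent tokens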

The code:

import nltk, pickle
from nltk.corpus import stopwords


customstopwords = []    # extra stopwords to filter out (left empty here)

# File paths are elided in the original post
p = open('', 'r')
postxt = p.readlines()

n = open('', 'r')
negtxt = n.readlines()

# One label per line of input text
neglist = ['negative'] * len(negtxt)
poslist = ['positive'] * len(postxt)

postagged = list(zip(postxt, poslist))
negtagged = list(zip(negtxt, neglist))

print "STAGE ONE" 

taggedtweets = postagged + negtagged

tweets = []

for (word, sentiment) in taggedtweets:
    # lowercase and whitespace-tokenize each tweet
    word_filter = [i.lower() for i in word.split()]
    tweets.append((word_filter, sentiment))

def getwords(tweets):
    allwords = []
    for (words, sentiment) in tweets:
        allwords.extend(words)
    return allwords

def getwordfeatures(listoftweets):
    wordfreq = nltk.FreqDist(listoftweets)
    # most_common() yields words sorted by frequency; plain .keys() is
    # only frequency-ordered in old NLTK versions
    return [word for word, count in wordfreq.most_common()]

# The original version assigned wordlist twice, so the English-stopword
# filter was immediately discarded; apply both filters in one pass instead
stopset = set(stopwords.words('english'))
wordlist = [i for i in getwordfeatures(getwords(tweets))
            if i not in stopset and i not in customstopwords]

print "STAGE TWO"

def feature_extractor(doc):
    docwords = set(doc)
    # One boolean feature per word in wordlist, for every document --
    # this dict is what makes each featureset large
    features = {}
    for i in wordlist:
        features['contains(%s)' % i] = (i in docwords)
    return features

print "STAGE THREE"

# apply_features builds featuresets lazily rather than materializing them all
training_set = nltk.classify.apply_features(feature_extractor, tweets)

print "STAGE FOUR"

classifier = nltk.NaiveBayesClassifier.train(training_set)

print "STAGE FIVE"      

with open('my_classifier.pickle', 'wb') as f:
    pickle.dump(classifier, f)
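If the crash happens during training rather than while reading the files, a likely culprit is the size of wordlist: feature_extractor builds one boolean entry per wordlist word for every tweet, so memory grows roughly as (number of tweets) x (vocabulary size). One common workaround, shown here only as a sketch (the cap of 2000 is an arbitrary illustrative value, not something from the original post), is to keep just the most frequent words as features:

# Sketch: cap the feature vocabulary at the N most frequent words,
# reusing getwords(), tweets, stopset and customstopwords from above.
wordfreq = nltk.FreqDist(getwords(tweets))
wordlist = [w for w, _ in wordfreq.most_common()
            if w not in stopset and w not in customstopwords][:2000]

With a bounded wordlist each featureset stays small, and because apply_features computes featuresets on demand, the training set never holds one full-vocabulary dict per tweet in memory at once.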

0 Answers:

No answers yet.