Can we use nltk decision trees / SVM / random forests to train on data whose features are text or lists of strings?

Time: 2018-03-26 04:56:36

Tags: nltk text-classification

Is it possible to train on data whose features are text / strings / lists of strings with algorithms other than Naive Bayes? I am referring to the gender classification problem given in http://www.nltk.org/book/ch06.html, which is done with Naive Bayes. Can we use other algorithms from the nltk library?
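For context, NLTK does ship other classifiers that accept the same feature dicts as the ch06 gender example: `DecisionTreeClassifier`, plus a `SklearnClassifier` wrapper for any scikit-learn estimator (SVM, random forest, ...). A minimal sketch, assuming scikit-learn is installed; the toy name list here is made up for illustration:

```python
from nltk.classify import DecisionTreeClassifier
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier

def gender_features(word):
    # same last-letter feature as the nltk book's ch06 example
    return {'last_letter': word[-1]}

labeled_names = [('Neo', 'male'), ('John', 'male'), ('Mark', 'male'),
                 ('Anna', 'female'), ('Julia', 'female'), ('Trinity', 'female')]
train_set = [(gender_features(n), g) for n, g in labeled_names]

# NLTK's own decision tree takes labeled feature dicts directly
tree = DecisionTreeClassifier.train(train_set)
# SklearnClassifier vectorizes the same dicts for any sklearn estimator
svm = SklearnClassifier(LinearSVC()).train(train_set)
forest = SklearnClassifier(RandomForestClassifier(n_estimators=10)).train(train_set)

print(tree.classify(gender_features('Maria')))
print(svm.classify(gender_features('Maria')))
```

With such a tiny training set the predictions themselves are not meaningful; the point is only that the three classifiers share one feature-dict interface.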

Code:

from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np
import pandas as pd
from nltk.tokenize import sent_tokenize
import nltk
import random

q = open('question.txt').read()
i = open('notquestion.txt').read()
labeled_sentence = ([(name, 1) for name in sent_tokenize(q)] +
                    [(name, 0) for name in sent_tokenize(i)])
random.shuffle(labeled_sentence)
df = pd.DataFrame(labeled_sentence, columns=['sentence', 'label'])
trainingSet, testSet = train_test_split(df, test_size=0.2)

# `tokenizer` must be a callable, not a list of sentences
vec = TfidfVectorizer(ngram_range=(1, 2), tokenizer=nltk.word_tokenize,
               min_df=3, max_df=0.9, strip_accents='unicode', use_idf=1,
               smooth_idf=1, sublinear_tf=1)
x = vec.fit_transform(df['sentence'].values.astype('str'))

# Naive Bayes log-count ratios
def pr(y_i, y):
    p = x[y==y_i].sum(0)
    return (p+1) / ((y==y_i).sum()+1)

# SVM and logistic regression are more or less the same here.
def logistic_regression(y):
    y = y.values
    naive = np.log(pr(1,y) / pr(0,y))
    model = LogisticRegression(C=4, dual=True, solver='liblinear')  # dual=True requires liblinear
    x_nb = x.multiply(naive)
    return model.fit(x_nb, y), naive
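One aside on the snippet above: it splits into trainingSet/testSet but then fits the vectorizer on the full DataFrame, which leaks test vocabulary into training. A minimal self-contained sketch of the cleaner pattern, using made-up toy sentences (fit on the training split only, then reuse the fitted vocabulary):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer

# toy data standing in for question.txt / notquestion.txt
df = pd.DataFrame({'sentence': ["is this a question", "what time is it",
                                "this is a statement", "i like tea"],
                   'label': [1, 1, 0, 0]})
trainingSet, testSet = train_test_split(df, test_size=0.5, random_state=0)

vec = TfidfVectorizer(ngram_range=(1, 2))
x_train = vec.fit_transform(trainingSet['sentence'])  # learn vocabulary here only
x_test = vec.transform(testSet['sentence'])           # reuse it; unseen words are dropped
print(x_train.shape, x_test.shape)
```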

1 Answer:

Answer 0 (score: 0)

OK. The question was ambiguous before; now it is clear. Naive Bayes is commonly used together with tf-idf and regression techniques, so with the code below you can get better accuracy from your classifier and make better predictions. And yes, all text input should be converted to numeric values, such as tf-idf, one-hot encodings, or word-vector embeddings. Take `sentences` to be the list of sentences from your files.

from nltk.tokenize.toktok import ToktokTokenizer
toktok = ToktokTokenizer()

# pass the tokenizer as a callable; TfidfVectorizer applies it to each sentence
vec = TfidfVectorizer(ngram_range=(1,2), tokenizer=toktok.tokenize,
               min_df=3, max_df=0.9, strip_accents='unicode', use_idf=1,
               smooth_idf=1, sublinear_tf=1)
x = vec.fit_transform(sentences)

import numpy as np
from sklearn.linear_model import LogisticRegression

# Naive Bayes log-count ratios
def pr(y_i, y):
    p = x[y==y_i].sum(0)
    return (p+1) / ((y==y_i).sum()+1)

# SVM and logistic regression are more or less the same here.
def logistic_regression(y):
    y = y.values
    naive = np.log(pr(1,y) / pr(0,y))
    model = LogisticRegression(C=4, dual=True, solver='liblinear')  # dual=True requires liblinear
    x_nb = x.multiply(naive)
    return model.fit(x_nb, y), naive

Train on data containing questions labeled 1 or 0 and answers labeled 1 or 0. Load the two text files into pandas, one line per row, and create an extra column holding the labels. The test data should go through the same tf-idf transform as well.

model, naive = logistic_regression(train_data)
# test_data must be transformed with the same fitted TfidfVectorizer
pred = model.predict_proba(test_data.multiply(naive))
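Putting the pieces together, here is a self-contained sketch of the whole recipe on made-up toy sentences (the `min_df`/`max_df` pruning from above is dropped so the tiny vocabulary survives, and labels are passed as a NumPy array rather than a pandas column):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sentences = ["is this a question", "what time is it", "how are you",
             "this is a statement", "i like tea", "the sky is blue"]
y = np.array([1, 1, 1, 0, 0, 0])          # 1 = question, 0 = not a question

vec = TfidfVectorizer(ngram_range=(1, 2))
x = vec.fit_transform(sentences)

def pr(y_i):
    # smoothed per-class feature sums
    p = x[y == y_i].sum(0)
    return (p + 1) / ((y == y_i).sum() + 1)

naive = np.log(pr(1) / pr(0))             # Naive Bayes log-count ratios
model = LogisticRegression(C=4).fit(x.multiply(naive), y)

# transform new text with the SAME fitted vectorizer, then apply the NB weights
test = vec.transform(["is this a question or not"])
prob = model.predict_proba(test.multiply(naive))[0, 1]
print(prob)
```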

Courtesy: Jeremy Howard's lectures and notebooks