Problem loading text data with scikit-learn?

Asked: 2015-01-04 02:35:08

Tags: python python-2.7 machine-learning nlp scikit-learn

I'm using my own data and trying to classify it into two classes, like so:

from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Load the text data
categories = [
    'CLASS_1',
    'CLASS_2',
]

text_train_subset = load_files('train',
    categories=categories)

text_test_subset = load_files('test',
    categories=categories)

# Turn the text documents into vectors of word frequencies
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(text_train_subset)
y_train = text_train_subset.target


classifier = MultinomialNB().fit(X_train, y_train)
print("Training score: {0:.1f}%".format(
    classifier.score(X_train, y_train) * 100))

# Evaluate the classifier on the testing set
X_test = vectorizer.transform(text_test_subset.data)
y_test = text_test_subset.target
print("Testing score: {0:.1f}%".format(
    classifier.score(X_test, y_test) * 100))

For the code above, following the documentation, I have this directory layout:

data_folder/

    train_folder/
        CLASS_1.txt CLASS_2.txt
    test_folder/
        test.txt

Then I get this error:

    % (size, n_samples))
ValueError: Found array with dim 0. Expected 5

I also tried fit_transform but I still get the same error. How can I fix this?

1 Answer:

Answer (score: 3):

The first problem is that your directory structure is wrong. You need it to look like this:

container_folder/
    CLASS_1_folder/
        file_1.txt, file_2.txt ... 
    CLASS_2_folder/
        file_1.txt, file_2.txt, ....

You need both the train set and the test set in this directory structure. Alternatively, you can put all your data in one directory and split it into two with train_test_split, as sketched below.
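For the second option, a minimal sketch (assuming all documents sit in a single container_folder laid out as above; the folder name and split ratio are placeholders, and older scikit-learn versions import train_test_split from sklearn.cross_validation instead):

from sklearn.datasets import load_files
from sklearn.model_selection import train_test_split

# Load every document from the container folder; subfolder names become the class labels.
all_data = load_files('container_folder')

# Split raw documents and integer labels into train/test portions (80/20 here).
docs_train, docs_test, y_train, y_test = train_test_split(
    all_data.data, all_data.target, test_size=0.2, random_state=42)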

Second,

X_train = vectorizer.fit_transform(text_train_subset)

needs to be

X_train = vectorizer.fit_transform(text_train_subset.data) # added .data

The reason is that load_files returns a Bunch object, while CountVectorizer expects the list of raw documents, which is what the .data attribute holds.

Here is a complete, working example:

from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

text_train_subset = load_files('sample-data/web')
text_test_subset = text_train_subset # load your actual test data here

# Turn the text documents into vectors of word frequencies
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(text_train_subset.data)
y_train = text_train_subset.target


classifier = MultinomialNB().fit(X_train, y_train)
print("Training score: {0:.1f}%".format(
    classifier.score(X_train, y_train) * 100))

# Evaluate the classifier on the testing set
X_test = vectorizer.transform(text_test_subset.data)
y_test = text_test_subset.target
print("Testing score: {0:.1f}%".format(
    classifier.score(X_test, y_test) * 100))

The directory structure of sample-data/web is:

sample-data/web
├── de
│   ├── apollo8.txt
│   ├── fiv.txt
│   ├── habichtsadler.txt
└── en
    ├── elizabeth_needham.txt
    ├── equipartition_theorem.txt
    ├── sunderland_echo.txt
    └── thespis.txt
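
With that layout, load_files infers the class names from the subfolder names. As a rough sanity check (illustrative output only, assuming exactly the seven files shown above):

from sklearn.datasets import load_files

data = load_files('sample-data/web')
print(data.target_names)   # ['de', 'en'] -- class names taken from the subfolder names
print(len(data.data))      # 7 raw documents (3 German + 4 English)
print(data.target.shape)   # (7,) -- one integer label per document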