Feature selection for Naive Bayes

Asked: 2018-01-09 13:34:15

Tags: r naivebayes

I ran a classification with Naive Bayes. The goal is to predict four factor values from a text. The data look like this:

 'data.frame':  387 obs. of  2 variables:
 $ reviewText: chr  "I love this. I have a D800. I am mention my camera to make sure that you understand that this product is not ju"| __truncated__ "I hate buying larger gig memory cards - because there's always that greater risk of losing the photos, and/or r"| __truncated__ "These chromebooks are really a pretty nice idea -- Almost no maintaince (no maintaince?), no moving parts, smal"| __truncated__ "Purchased, as this drive allows a much speedier read/write and is just below a full SSD (they need to drop the "| __truncated__ ...
 $ pragmatic : Factor w/ 4 levels "-1","0","1","9": 4 4 4 3 3 4 3 3 3...

I used the caret package for the classification. The classification code looks like this:

library(tm)       # Corpus, tm_map, DocumentTermMatrix
library(magrittr) # pipe operator %>%
library(caret)    # createDataPartition, train, confusionMatrix

# build and clean the corpus
sms_corpus <- Corpus(VectorSource(sms_raw$text))
sms_corpus_clean <- sms_corpus %>%
    tm_map(content_transformer(tolower)) %>% 
    tm_map(removeNumbers) %>%
    tm_map(removeWords, stopwords(kind="en")) %>%
    tm_map(removePunctuation) %>%
    tm_map(stripWhitespace)
sms_dtm <- DocumentTermMatrix(sms_corpus_clean)

# 50/50 stratified train/test split on the target variable
train_index <- createDataPartition(sms_raw$type, p=0.5, list=FALSE)
sms_raw_train <- sms_raw[train_index,]
sms_raw_test <- sms_raw[-train_index,]
sms_corpus_clean_train <- sms_corpus_clean[train_index]
sms_corpus_clean_test <- sms_corpus_clean[-train_index]
sms_dtm_train <- sms_dtm[train_index,]
sms_dtm_test <- sms_dtm[-train_index,]

# keep only terms that occur at least 5 times in the training documents
sms_dict <- findFreqTerms(sms_dtm_train, lowfreq= 5) 
sms_train <- DocumentTermMatrix(sms_corpus_clean_train, list(dictionary=sms_dict))
sms_test <- DocumentTermMatrix(sms_corpus_clean_test, list(dictionary=sms_dict))

# convert term counts to a binary Absent/Present factor
convert_counts <- function(x) {
    x <- ifelse(x > 0, 1, 0)
    factor(x, levels = c(0, 1), labels = c("Absent", "Present"))
}
sms_train <- sms_train %>% apply(MARGIN=2, FUN=convert_counts)
sms_test <- sms_test %>% apply(MARGIN=2, FUN=convert_counts)


# 10-fold cross-validation
ctrl <- trainControl(method="cv", 10)
set.seed(8)
# train a Naive Bayes model via caret
sms_model1 <- train(sms_train, sms_raw_train$type, method="nb",
                trControl=ctrl)


sms_predict1 <- predict(sms_model1, sms_test)
cm1 <- confusionMatrix(sms_predict1, sms_raw_test$type)

When I use the model this way, that is, predicting all four values at once, I get a low accuracy of 0.5469, and the confusion matrix looks like this:

          Reference
Prediction -1  0  1  9
        -1  0  0  1  0
        0   0  0  0  0
        1   9  5 33 25
        9  11  3 33 72

When I make the prediction for each value separately, I get better results. The classification code is the same as above, but instead of df$sensorial <- factor(df$sensorial) I use df$sensorial <- as.factor(df$sensorial == 9); for the other values I use -1, 0, or 1 instead of 9. Done that way, I get an accuracy of 0.772 for 9, 0.829 for -1, 0.9016 for 0, and 0.7959 for 1. The other results are much better as well, so it must have something to do with feature selection. A likely reason for the different results is that the features are largely the same across the different values. So one possible solution might be to give more weight to features that occur only with one particular value and not with the others. Is there a way to select the features so that the model performs better when I predict all four values at once, for example with a weighted term-document matrix?
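For illustration, a weighted term-document matrix can be built with tm's weightTfIdf weighting. This is only a sketch, reusing the sms_corpus_clean_* objects and sms_dict from above; note that with numeric tf-idf scores the Absent/Present conversion step would have to be dropped.

# sketch: tf-idf weighted document-term matrices (reuses sms_dict from above)
sms_train_tfidf <- DocumentTermMatrix(
    sms_corpus_clean_train,
    control = list(dictionary = sms_dict, weighting = weightTfIdf))
sms_test_tfidf <- DocumentTermMatrix(
    sms_corpus_clean_test,
    control = list(dictionary = sms_dict, weighting = weightTfIdf))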

Edit

I calculated the weights for the four values as Cihan Ceyhan suggested:

prop.table(table(sms_raw_train$type))
         -1           0           1           9 
0.025773196 0.005154639 0.180412371 0.788659794 

modelweights <- ifelse(sms_raw_train$type == -1,
                       (1/table(sms_raw_train$type)[1]) * 0.25,
                ifelse(sms_raw_train$type == 0,
                       (1/table(sms_raw_train$type)[2]) * 0.25,
                ifelse(sms_raw_train$type == 1,
                       (1/table(sms_raw_train$type)[3]) * 0.25,
                ifelse(sms_raw_train$type == 9,
                       (1/table(sms_raw_train$type)[4]) * 0.25, 9))))

But the results are not better:

Accuracy: 0.5677

So it is probably a better idea to compute the results for each value separately and then combine them, as in the second solution posted.
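A minimal sketch of that per-value (one-vs-rest) idea, reusing the sms_train/sms_test matrices from the question. The binary class levels are relabelled to other/target (an assumption made only so that caret can return class probabilities), and the combined prediction picks the value whose binary model is most confident.

# sketch: one binary Naive Bayes model per value, combined via class probabilities
labels <- c("-1", "0", "1", "9")
prob_target <- sapply(labels, function(lbl) {
    # binary target: does this document carry the value lbl or not?
    y <- factor(ifelse(sms_raw_train$type == lbl, "target", "other"),
                levels = c("other", "target"))
    ctrl_bin <- trainControl(method = "cv", 10, classProbs = TRUE)
    m <- train(sms_train, y, method = "nb", trControl = ctrl_bin)
    predict(m, sms_test, type = "prob")[, "target"]
})
# for each test document, pick the value with the highest probability
combined_pred <- factor(labels[max.col(prob_target)], levels = labels)
confusionMatrix(combined_pred, sms_raw_test$type)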

1 Answer:

Answer 0 (score: 3)

Accuracy is a misleading metric here. In the multi-label confusion matrix you posted, you get roughly 89% accuracy if you look only at the label -1 versus the others: you predict -1 only once, and you mislabel actual -1 cases as others 20 times (9 + 11). For all remaining cases you classify the -1 vs. others problem correctly, hence 170/191 = 89% accuracy. Of course that does not mean the model works as intended; it simply predicts others for almost every case. This mechanism is also the reason you see higher accuracy numbers in the single-label classifications.
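To make the arithmetic concrete, here is a small sketch that collapses the posted matrix to -1 vs. others. The counts are hard-coded from the question; the exact fraction comes out at 171/192, which likewise rounds to about 89%.

# collapse the posted 4x4 confusion matrix to "-1" vs. "others"
cm <- matrix(c( 0, 0,  1,  0,
                0, 0,  0,  0,
                9, 5, 33, 25,
               11, 3, 33, 72),
             nrow = 4, byrow = TRUE,
             dimnames = list(Prediction = c("-1", "0", "1", "9"),
                             Reference  = c("-1", "0", "1", "9")))
pred_m1 <- rownames(cm) == "-1"
ref_m1  <- colnames(cm) == "-1"
correct <- sum(cm[pred_m1, ref_m1]) + sum(cm[!pred_m1, !ref_m1])
correct / sum(cm)   # ~0.89, although "-1" is almost never predicted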

See here for a detailed overview of the class imbalance problem and possible ways to mitigate it.
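One mitigation that plugs directly into the code above is letting caret resample the training data inside each fold; a sketch, assuming the same sms_train and sms_raw_train$type objects (down-sampling is only one option, up-sampling works the same way).

# sketch: down-sample the majority classes inside each CV fold
ctrl_down <- trainControl(method = "cv", 10, sampling = "down")
set.seed(8)
sms_model_down <- train(sms_train, sms_raw_train$type, method = "nb",
                        trControl = ctrl_down)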

This thread is also very relevant to your case, so I suggest you take a look at it.