I am trying to clean my data to remove: i) special characters (e.g. + _), ii) specific words (e.g. retweet, followers, can't, better person), iii) words that do not appear in an English dictionary. I am using the quanteda library. My goal is to get the top 50 bigrams and plot them on a graph.
install.packages("textcat")
library(tm)
library(textcat)
library(quanteda)  # corpus(), tokens(), dfm(), and topfeatures() below come from quanteda
the_data <- read.csv("twitterData.csv")
tweets_data <- the_data$x
tweets_corpus <- Corpus(VectorSource(tweets_data))
subSpace <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
twitterHandleRemover <- function(x) gsub("@\\S+","", x)
shortWordRemover <- function(x) gsub('\\b\\w{1,5}\\b','',x)
urlRemover <- function(x) gsub("https?://\\S+", "", x)  # "http:[[:alnum:]]*" stops at the first "/"
hashtagRemover <- function(x) gsub("#\\S+","", x)
tweets_corpus <- tm_map(tweets_corpus, subSpace, "/")
tweets_corpus <- tm_map(tweets_corpus, subSpace, "@")
tweets_corpus <- tm_map(tweets_corpus, subSpace, "[|%&*#+_><]")  # character class, so each symbol matches individually
tweets_corpus <- tm_map(tweets_corpus, content_transformer(tolower))
tweets_corpus <- tm_map(tweets_corpus, removeNumbers)
tweets_corpus <- tm_map(tweets_corpus, content_transformer(urlRemover))
tweets_corpus <- tm_map(tweets_corpus,
content_transformer(shortWordRemover))
tweets_corpus <- tm_map(tweets_corpus,
content_transformer(twitterHandleRemover))
tweets_corpus <- tm_map(tweets_corpus,
content_transformer(hashtagRemover))
tweets_corp <- corpus(tweets_corpus)
tweets_dfm <- tokens(tweets_corp, remove_numbers = TRUE,
                     remove_hyphens = TRUE) %>%
tokens_remove("\\p{P}", valuetype = "regex", padding=TRUE) %>%
tokens_remove(stopwords("english"), padding=TRUE) %>%
tokens_remove("\\d+", padding = TRUE) %>%
tokens_ngrams(n=2) %>% dfm()
topfeatures(tweets_dfm,50)
Here is the output of my code:
I tried using
specialChars <- function(x) gsub("[^[:alnum:]///']","", x)
tweets_corpus <- tm_map(tweets_corpus,
content_transformer(specialChars))
to remove the special characters, but it seems to remove every character: the output is numeric(0).
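One likely culprit (an observation, not stated in the original post): the character class `[^[:alnum:]///']` negates only letters, digits, "/" and "'", so spaces are removed as well and every word gets fused together, leaving nothing useful for the later bigram step. A quick demonstration:

```r
# Spaces are not alphanumeric, so they are stripped too
# and the words run together:
gsub("[^[:alnum:]///']", "", "hello world + test")
# [1] "helloworldtest"
```

Keeping `[:space:]` inside the class, as the answer below does, avoids this.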
Answer (score: 0):
Why not do something like this:
> x <- "je n'aime pas ça"
> Encoding(x)
[1] "latin1"
> iconv(x, from = "latin1", to = "ASCII//TRANSLIT")
[1] "je n'aime pas ca"
Assuming your data is in latin1, iconv(tweets_data, from = "latin1", to = "ASCII//TRANSLIT")
will do the same thing.
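A minimal self-contained sketch of that idea, assuming latin1 input (note that without //TRANSLIT, iconv() returns NA for strings it cannot represent in the target encoding):

```r
# latin1-encoded bytes for "café olé" (\xe9 is é in latin1)
x <- "caf\xe9 ol\xe9"
Encoding(x) <- "latin1"

# Transliterate accented letters to their closest ASCII equivalents
ascii <- iconv(x, from = "latin1", to = "ASCII//TRANSLIT")
```

The exact transliteration table is platform-dependent, but the result is plain ASCII rather than NA.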
Then keep only alphanumeric characters and spaces:
gsub(pattern = "[^[:alnum:][:space:]]", " ", "<friends @symbols")
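For point ii) of the question, specific words can be dropped with an explicit stop-list before tokenising. A sketch, where the word list is purely illustrative (the post mentions retweet and followers; extend it as needed):

```r
# Illustrative list of unwanted words -- adjust to your data
bad_words <- c("retweet", "followers", "amp")
pattern   <- paste0("\\b(", paste(bad_words, collapse = "|"), ")\\b")

x <- gsub(pattern, "", "Retweet this amp message to my followers",
          ignore.case = TRUE, perl = TRUE)
x <- gsub("\\s+", " ", trimws(x))  # collapse the leftover whitespace
# x is now "this message to my"
```

For the plot itself, `barplot(rev(topfeatures(tweets_dfm, 50)), horiz = TRUE, las = 1)` would be one simple option, assuming the dfm built in the question. For point iii), the hunspell package's `hunspell_check()` returns a logical per word and could drive a dictionary filter, though that is beyond what the answer covers.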