Term frequency over a large document collection in R

Date: 2018-10-01 13:21:52

Tags: r sorting text-mining

I have a data frame like this:

ID       content
 1       hello you how are you
 1       you are ok
 2       test

I need to compute, per ID, the frequency of each space-separated word in content. Essentially: find the unique words in that column and tabulate their counts grouped by ID, displayed like this:

ID      hello    you   how   are  ok    test
 1        1       3     1    2     1     0
 2        0       0     0    0     0     1    

I tried:

test <- unique(unlist(strsplit(temp$content, split = " ")))

df <- cbind(temp, sapply(test, function(y) apply(temp, 1, function(x) as.integer(y %in% unlist(strsplit(x, split = " "))))))

This gives me an ungrouped solution that I would still have to aggregate by ID, but I have more than 20,000 unique words in content. Is there an efficient way to do this?
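For reference, the grouped counts can also be obtained directly in base R with a single cross-tabulation (a sketch, assuming the data frame is named temp with columns ID and content as in the example above):

```r
# split each row's content into words, repeat the row's ID once per word,
# then cross-tabulate ID against word
words <- strsplit(temp$content, split = " ")
tab <- table(rep(temp$ID, lengths(words)), unlist(words))
as.data.frame.matrix(tab)
```

This avoids the per-word apply() loop entirely; table() does the grouping and counting in one pass.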

2 answers:

Answer 0 (score: 1)

You can use data.table:

library(data.table)
setDT(df1)[, unlist(strsplit(content, split = " ")), by = ID
           ][, dcast(.SD, ID ~ V1)]
#   ID are hello how ok test you
#1:  1   2     1   1  1    0   3
#2:  2   0     0   0  0    1   0

In the first part, we apply unlist(strsplit(content, split = " ")) grouped by ID, which gives the following output:

#   ID    V1
#1:  1 hello
#2:  1   you
#3:  1   how
#4:  1   are
#5:  1   you
#6:  1   you
#7:  1   are
#8:  1    ok
#9:  2  test

Next, we use dcast to spread the data into wide format. Because ID/word combinations repeat, dcast falls back to fun.aggregate = length, which is exactly what produces the counts.
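The same call with the defaults spelled out looks like this (a sketch; value.var, fun.aggregate, and fill only make explicit the fallback behaviour dcast uses here, with df1 as defined in the Data section below):

```r
library(data.table)

# long format: one row per (ID, word) occurrence
long <- setDT(df1)[, unlist(strsplit(content, split = " ")), by = ID]

# count occurrences per ID/word cell; fill absent cells with 0
dcast(long, ID ~ V1, value.var = "V1", fun.aggregate = length, fill = 0)
```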

Data

df1 <- structure(list(ID = c(1L, 1L, 2L), content = c("hello you how are you", 
"you are ok", "test")), .Names = c("ID", "content"), class = "data.frame", row.names = c(NA, 
-3L))

Answer 1 (score: 1)

How about a package built for text mining?

# your data
text <- read.table(text = "
ID      content
1       'hello you how are you'
1       'you are ok'
2       'test'", header = T,  stringsAsFactors = FALSE) # remember the stringAsFactors life saver!

library(dplyr)
library(tidytext)
# here we put in column all the words
unnested <- text %>%
            unnest_tokens(word, content)

# a classic data.frame from a table of frequencies
as.data.frame.matrix(table(unnested$ID, unnested$word))
  are hello how ok test you
1   2     1   1  1    0   3
2   0     0   0  0    1   0
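The reshaping step can also stay inside the tidyverse instead of going through base table() (a sketch, assuming tidyr >= 1.1 for the values_fill argument):

```r
library(dplyr)
library(tidyr)
library(tidytext)

text %>%
  unnest_tokens(word, content) %>%   # one row per (ID, word) occurrence
  count(ID, word) %>%                # counts per ID/word pair
  pivot_wider(names_from = word, values_from = n, values_fill = 0)
```

Note that unnest_tokens lowercases and strips punctuation by default, which is harmless here but worth knowing for messier text.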