A lemmatizing function using a hash dictionary does not work with the tm package in R

Time: 2017-09-08 18:30:20

Tags: r text-mining tm quanteda text2vec

I want to lemmatize Polish text using a large external dictionary (in the format of the txt variable below). I am not lucky enough to have Polish as an option in the popular text-mining packages. The answer by @DmitriySelivanov at https://stackoverflow.com/a/45790325/3480717 works well with a simple vector of texts. (I have also removed the Polish diacritics from both the dictionary and the corpus.)

Unfortunately, it does not work with the corpus format generated by tm. Let me paste Dmitriy's code:


Now I want to apply it to a tm corpus called "docs". The example syntax I would use with the tm package on a tm-generated corpus follows Dmitriy's code below.

library(hashmap)
library(data.table)
txt = 
  "Abadan  Abadanem
  Abadan  Abadanie
  Abadan  Abadanowi
  Abadan  Abadanu
  abadańczyk  abadańczycy
  abadańczyk  abadańczykach
  abadańczyk  abadańczykami
  "
dt = fread(txt, header = F, col.names = c("lemma", "word"))
lemma_hm = hashmap(dt$word, dt$lemma)

lemma_hm[["Abadanu"]]
#"Abadan"


lemma_tokenizer = function(x, lemma_hashmap, 
                           tokenizer = text2vec::word_tokenizer) {
  tokens_list = tokenizer(x)
  for(i in seq_along(tokens_list)) {
    tokens = tokens_list[[i]]
    replacements = lemma_hashmap[[tokens]]
    ind = !is.na(replacements)
    tokens_list[[i]][ind] = replacements[ind]
  }
  tokens_list
}
texts = c("Abadanowi abadańczykach OutOfVocabulary", 
          "abadańczyk Abadan OutOfVocabulary")
lemma_tokenizer(texts, lemma_hm)

#[[1]]
#[1] "Abadan"          "abadańczyk"      "OutOfVocabulary"
#[[2]]
#[1] "abadańczyk"      "Abadan"          "OutOfVocabulary"

Here is the first syntax I tried:

docs <- tm_map(docs, function(x) lemma_tokenizer(x, lemma_hashmap="lemma_hm"))

It gives me an error. Another syntax that I tried:

LemmaTokenizer <- function(x) lemma_tokenizer(x, lemma_hashmap="lemma_hm")

docsTDM <-
  DocumentTermMatrix(docs, control = list(wordLengths = c(4, 25), tokenize=LemmaTokenizer))

The function works on a vector of texts but does not work on a tm corpus. Thanks in advance for your suggestions (even for using this function with a text-mining package other than tm, if it cannot be made to work with tm).

4 answers:

Answer 0 (score: 2)

I see two problems here: 1) your custom function returns a list, while it should return a vector of strings; and 2) you are passing a wrong lemma_hashmap argument.

A quick workaround for the first problem is to use paste() and sapply() before returning the function result.

lemma_tokenizer = function(x, lemma_hashmap, 
                           tokenizer = text2vec::word_tokenizer) {
  tokens_list = tokenizer(x)
  for(i in seq_along(tokens_list)) {
    tokens = tokens_list[[i]]
    replacements = lemma_hashmap[[tokens]]
    ind = !is.na(replacements)
    tokens_list[[i]][ind] = replacements[ind]
  }

  # paste together, return a vector
  sapply(tokens_list, (function(i){paste(i, collapse = " ")}))
}

We can run the same example from your post.

texts = c("Abadanowi abadańczykach OutOfVocabulary", 
          "abadańczyk Abadan OutOfVocabulary")
lemma_tokenizer(texts, lemma_hm)
[1] "Abadan abadańczyk OutOfVocabulary" "abadańczyk Abadan OutOfVocabulary"

Now we can use tm_map. Just make sure to use lemma_hm (i.e. the variable) and not "lemma_hm" (a string) as the argument.

docs <- SimpleCorpus(VectorSource(texts))
out <- tm_map(docs, (function(x) {lemma_tokenizer(x, lemma_hashmap=lemma_hm)}))
out[[1]]$content
[1] "Abadan abadańczyk OutOfVocabulary"

Answer 1 (score: 2)

Try the quanteda dictionary() function, after creating a dictionary that maps each inflected variant (as a dictionary value) to its lemma (as the dictionary key). Below, it looks up your values and then pastes the tokens back into a text. (If you want tokens, you do not need the last paste() operation.)

txt <-  
    "Abadan  Abadanem
Abadan  Abadanie
Abadan  Abadanowi
Abadan  Abadanu
abadańczyk  abadańczycy
abadańczyk  abadańczykach
abadańczyk  abadańczykami"

list_temp <- strsplit(readLines(textConnection(txt)), "\\s+")
list_temp2 <- lapply(list_temp, "[", 2)
names(list_temp2) <- sapply(list_temp, "[", 1)

library("quanteda")
polish_lemma_dict <- dictionary(list_temp2)
# Dictionary object with 7 key entries.
# - Abadan:
#   - abadanem
# - Abadan:
#   - abadanie
# - Abadan:
#   - abadanowi
# - Abadan:
#   - abadanu
# - abadańczyk: 
#   - abadańczycy
# - abadańczyk:
#   - abadańczykach
# - abadańczyk:
#   - abadańczykami

texts <- c("Abadanowi abadańczykach OutOfVocabulary", 
           "abadańczyk Abadan OutOfVocabulary")

The texts can now be converted into tokens, and the quanteda tokens_lookup() function used to replace the dictionary values (inflected words) with the dictionary keys (lemmas). In the last step I pasted the tokens back together, which you can skip if you want tokens rather than a full text.

require(magrittr)
texts %>%
    tokens() %>%
    tokens_lookup(dictionary = polish_lemma_dict, exclusive = FALSE, capkeys = FALSE) %>%
    as.character() %>%
    paste(collapse = " ")
# [1] "Abadan abadańczyk OutOfVocabulary abadańczyk Abadan OutOfVocabulary"

Answer 2 (score: 2)

For Polish lemmatization, see this script https://github.com/MarcinKosinski/trigeR5/blob/master/R/lematyzacja.R which uses the polmorfologik dictionary from https://github.com/MarcinKosinski/trigeR5/tree/master/dicts (stop words are included there as well).
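
As a rough sketch of how a two-column lemma file like that could feed the hashmap approach shown above (the file name and the lemma/word column order here are assumptions based on the txt example in the question):

library(data.table)
library(hashmap)
# hypothetical path; point it at the downloaded dictionary file
dict <- fread("polish_lemmas.txt", header = FALSE,
              col.names = c("lemma", "word"), encoding = "UTF-8")
lemma_hm <- hashmap(dict$word, dict$lemma)
lemma_hm[["Abadanu"]]  # should return "Abadan" given the sample entries above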

Answer 3 (score: 1)

Here is my admittedly imperfect code in which I used the answers; there are too many sources to credit inline, so I describe them all at the bottom. I realize it is very rough, but it does the job for me, i.e. I can use a txt dictionary of lemmas and my stop words to classify Polish texts. Thanks to Damiano Fantini, Dmitriy Selivanov and many others.

#----1. Set up. ----
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))


library(readtext)
library(tm)
library(proxy)
library(stringi)
library(stringr)
library(hashmap)
library(data.table)
library(text2vec)

# For reading n-grams
library(RWeka) #(*)
BigramTokenizer <- 
           function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3)) #(*)


#----2. Read data. ----
stopwordsPL <- as.vector(str_split(readLines("polish.stopwords.text",encoding = "UTF-8"), pattern = " ",simplify = T))


docs <- VCorpus(DirSource(pattern="txt"))
titles <- rownames(summary(docs))

docs <- tm_map(docs, removeWords, words=stopwordsPL)
docs <- tm_map(docs, tolower)
docs <- tm_map(docs, function(x) stri_trans_general(x, "Latin-ASCII"))
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, removePunctuation)
docs <- tm_map(docs, stripWhitespace)

# for English texts it would be simpler
# docs <- tm_map(docs, removeWords, stopwords("english")) #can add other words to remove
# docs <- tm_map(docs, stemDocument, "english")

#====3. Lemmatize ====
# # Dictionary from http://www.lexiconista.com/datasets/lemmatization/
# lemmadict_file = "lemmatization-pl.text"
# dt = fread(file= lemmadict_file, header = F, col.names = c("lemma", "word"), data.table=T, encoding="UTF-8")
# # I threw away Polish letters, maybe changing locales may help.
# dt$lemma <- stri_trans_general(dt$lemma, "Latin-ASCII;lower")
# dt$word <- stri_trans_general(dt$word, "Latin-ASCII;lower")
# dt <- unique(dt)
# 
# # Creating hash dictionary
# lemma_hm = hashmap(dt$word, dt$lemma)
# 
# # Test if it works
# lemma_hm[["mnozyl"]]
# # [1] "mnozyc"
# 
# save_hashmap(lemma_hm, file="lemma_hm", overwrite = TRUE, compress = TRUE)

lemma_hm <- load_hashmap(file="lemma_hm")

lemma_tokenizer = function(x, lemma_hashmap, 
                           tokenizer = text2vec::word_tokenizer) {
  tokens_list = tokenizer(x)
  for(i in seq_along(tokens_list)) {
    tokens = tokens_list[[i]]
    replacements = lemma_hashmap[[tokens]]
    ind = !is.na(replacements)
    tokens_list[[i]][ind] = replacements[ind]
  }
  # paste together, return a vector
  sapply(tokens_list, (function(i){paste(i, collapse = " ")}))
}

docs <- tm_map(docs, (function(x) {lemma_tokenizer(x, lemma_hashmap=lemma_hm)}))
docs <- tm_map(docs, PlainTextDocument)

#====4. Create document term matrix====

docsTDM <-
  DocumentTermMatrix(docs, control = list(wordLengths = c(5, 25),tokenize = BigramTokenizer))  #  tokenize=LemmaTokenizer, tokenize = BigramTokenizer (*)


docsTDM$dimnames

#====5. Remove sparse and common words====

docsTDM <- removeSparseTerms(docsTDM, .90)

# https://stackoverflow.com/questions/25905144/removing-overly-common-words-occur-in-more-than-80-of-the-documents-in-r

removeCommonTerms <- function (x, pct) 
{
  stopifnot(inherits(x, c("DocumentTermMatrix", "TermDocumentMatrix")), 
            is.numeric(pct), pct > 0, pct < 1)
  m <- if (inherits(x, "DocumentTermMatrix")) 
    t(x)
  else x
  t <- table(m$i) < m$ncol * (pct)
  termIndex <- as.numeric(names(t[t]))
  if (inherits(x, "DocumentTermMatrix")) 
    x[, termIndex]
  else x[termIndex, ]
}


docsTDM <-
  removeCommonTerms(docsTDM, .8) #remove terms that are in >=80% of the documents
docsTDM$dimnames


#====6. Cluster data (hclust). ====


docsdissim <- dist(as.matrix(docsTDM), method = "cosine")

docsdissim2 <- as.matrix(docsdissim)
dim(docsdissim2)

rownames(docsdissim2) <- titles
colnames(docsdissim2) <- titles

h <- hclust(docsdissim, method = "ward.D2")

plot(h, labels = titles, sub = "")

# Library hclust with p-values (pvclust)

library(pvclust)

h_pv <- pvclust(docsdissim2, method.hclust = "ward.D2", method.dist ="correlation")

plot(h_pv)

data.frame(cutree(tree = h_pv$hclust, k = 4))


# pvclust provides two types of p-values: AU (Approximately Unbiased) p-value and BP (Bootstrap Probability) value. 
# AU p-value, which is computed by multiscale bootstrap resampling, is a better approximation to unbiased p-value 
# than BP value computed by normal bootstrap resampling.
# AU p-value > 0.95 we can assume the clusters exist and may stably be 
# observed if we increase the number of observations. 
# (http://stat.sys.i.kyoto-u.ac.jp/prog/pvclust/)
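
# Optional extra (a sketch, not part of the original workflow): pvclust also
# provides pvrect() and pvpick(), which act on the AU p-values directly.
pvrect(h_pv, alpha = 0.95)   # draw rectangles around clusters with AU > 0.95
pvpick(h_pv, alpha = 0.95)   # list the members of those clusters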

#==== Literature:====
# Original article:
# http://www.rexamine.com/2014/06/text-mining-in-r-automatic-categorization-of-wikipedia-articles/

# Updates to make it work after some functions became obsolete:
# https://stackoverflow.com/questions/34423823/r-automatic-categorization-of-wikipedia-articles
# https://stackoverflow.com/questions/34372166/error-using-termdocumentmatrix-and-dist-functions-in-r
#
# Based on that:
# http://brazenly.blogspot.co.uk/2015/02/r-categorization-clustering-of.html
#
# Sparse terms:
# https://stackoverflow.com/questions/28763389/how-does-the-removesparseterms-in-r-work

# Lemmatizing function:
# https://stackoverflow.com/questions/46122591/a-lemmatizing-function-using-a-hash-dictionary-does-not-work-with-tm-package-in
# https://stackoverflow.com/questions/45762559/lemmatization-using-txt-file-with-lemmes-in-r/45790325#45790325