I am working with EMR (electronic medical record) data. Many entities in the records are split across two separate words (e.g., "CT scan"), and I plan to join such tokens into a single word using an underscore (CT_Scan). Is there a faster way to perform this task on a huge corpus? My current approach uses the "quanteda" package. Here is the code snippet:
# Sample text
mytexts <- c("The new law included a capital gains tax, and an inheritance tax.",
"New York City has raised taxes: an income tax and inheritance taxes.")
# Tokenize the texts, removing punctuation
library(quanteda)
mytoks <- tokens(mytexts, remove_punct = TRUE)
# List of token sequences that need to be joined
myseqs <- list(c("tax"), c("income", "tax"), c("capital", "gains", "tax"), c("inheritance", "tax"))
# Compound the matched sequences into single underscore-joined tokens
clean_toks <- tokens_compound(mytoks, myseqs, concatenator = "_")
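For reference, a quick sanity check on the sample (not part of the timing question) should show the joined forms:

# Inspect the compounded tokens; sequences such as "capital gains tax"
# should now appear as single tokens like "capital_gains_tax"
print(clean_toks)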
This task is being run on roughly 3 billion tokens, and the tokens_compound() call takes a very long time (> 12 hours). Is there a better way to approach this problem?
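For context, here is a minimal sketch of the variations I am considering, none of which I have benchmarked at this scale: fixed-pattern matching instead of the default glob matching, enabling quanteda's multithreading, and compounding the corpus in chunks. The chunk size of 100000 documents is only an illustrative value.

# Untested sketch of possible speedups, assuming a recent quanteda version
library(quanteda)
library(parallel)

# Let quanteda's parallelized routines use all available cores
quanteda_options(threads = parallel::detectCores())

# "fixed" matching avoids glob/regex pattern handling
clean_toks <- tokens_compound(mytoks, myseqs, valuetype = "fixed",
                              concatenator = "_")

# Alternatively, compound the corpus in chunks to bound memory use
# (the chunk size of 100000 documents is illustrative only)
chunk_id <- ceiling(seq_along(mytoks) / 100000)
clean_list <- lapply(split(seq_along(mytoks), chunk_id), function(i)
  tokens_compound(mytoks[i], myseqs, valuetype = "fixed"))
clean_toks <- do.call(c, clean_list)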