parLapply and part-of-speech tagging

Date: 2018-09-10 03:52:00

Tags: r, parallel-processing

I am trying to use parLapply with the openNLP R package to part-of-speech tag a corpus of ~600k documents. However, while I was able to successfully tag a different set of ~90k documents, after roughly 25 minutes of running the same code over the ~600k documents I get a strange error:

Error in checkForRemoteErrors(val) : 10 nodes produced errors; first error: no word token annotations found

The documents are simply articles from digitized newspapers, and I run the tagger on the body field (after cleaning). This field is nothing but raw text, which I save into a list of strings.

Here is my code:

# I set the Java heap size (memory) allocation before rJava/openNLP load - I experimented with different sizes
options(java.parameters = "-Xmx3GB")
library(NLP); library(openNLP)  # as.String(), annotate(), the Maxent annotators
library(parallel)               # makeCluster(), clusterEvalQ(), parLapply()

# Convert the corpus into a list of strings
myCorpus <- lapply(contentCleaned, as.String)
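
A side check, not part of my original pipeline: since openNLP raises "no word token annotations found" when a document contains nothing to tokenize, a quick pre-filter along these lines (a sketch) would rule out documents that end up empty after cleaning:

# Sketch: drop documents that are empty or whitespace-only after cleaning,
# since they produce no word token annotations
nonEmpty <- vapply(myCorpus, function(x) nchar(trimws(x)) > 0, logical(1))
myCorpus <- myCorpus[nonEmpty]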

# tag Corpus Function: annotate one document and extract its POS tags
tagCorpus <- function(x, ...){
    s <- as.String(x) # This is a repeat and may not be required
    WTA <- Maxent_Word_Token_Annotator()
    a2 <- Annotation(1L, "sentence", 1L, nchar(s))
    a2 <- annotate(s, WTA, a2)
    a3 <- annotate(s, PTA, a2) # PTA is defined on each worker via clusterEvalQ() below
    word_subset <- a3[a3$type == "word"]
    POStags <- unlist(lapply(word_subset$features, `[[`, "POS"))
    POStagged <- paste(sprintf("%s/%s", s[word_subset], POStags), collapse = " ")
    list(text = s, POStagged = POStagged, POStags = POStags, words = s[word_subset])
}
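
Before going parallel, the function can be sanity-checked sequentially on a single document (a quick sketch; PTA must exist in the calling session, mirroring what clusterEvalQ() sets up on each worker below):

# Sketch: sequential sanity check on one document
PTA <- Maxent_POS_Tag_Annotator()
str(tagCorpus(myCorpus[[1]]), max.level = 1)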

# I have 12 cores in my box
cl <- makeCluster(mc <- getOption("cl.cores", detectCores()-2))

# I tried both exporting the word token annotator and not
clusterEvalQ(cl, {
    library(openNLP);
    library(NLP);
    PTA <- Maxent_POS_Tag_Annotator();
    WTA <- Maxent_Word_Token_Annotator()
})
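
Since checkForRemoteErrors() aborts the whole job at the first worker error, a defensive wrapper (a sketch, not what I originally ran) would record which documents fail instead of stopping everything:

# Sketch: return the error message for a failing document instead of
# aborting the whole parLapply() run
clusterExport(cl, "tagCorpus")  # the workers need tagCorpus() itself
tagCorpusSafe <- function(x, ...){
    tryCatch(tagCorpus(x, ...), error = function(e) conditionMessage(e))
}
# corpus.tagged <- parLapply(cl, myCorpus, tagCorpusSafe)
# failed <- which(vapply(corpus.tagged, is.character, logical(1)))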

# Each cluster node has the following description:
[[1]]
An annotator inheriting from classes
  Simple_Word_Token_Annotator Annotator
with description
  Computes word token annotations using the Apache OpenNLP Maxent tokenizer employing the default model for language 'en'.

clusterEvalQ(cl, sessionInfo())

# clusterEvalQ outputs for each worker:

[[1]]
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.5 LTS

Matrix products: default
BLAS: /usr/lib/libblas/libblas.so.3.6.0
LAPACK: /usr/lib/lapack/liblapack.so.3.6.0

locale:
  [1] LC_CTYPE=en_US.UTF-8          LC_NUMERIC=C                    LC_TIME=en_US.UTF-8           LC_COLLATE=en_US.UTF-8       
  [5] LC_MONETARY=en_US.UTF-8       LC_MESSAGES=en_US.UTF-8       LC_PAPER=en_US.UTF-8          LC_NAME=en_US.UTF-8          
  [9] LC_ADDRESS=en_US.UTF-8        LC_TELEPHONE=en_US.UTF-8      LC_MEASUREMENT=en_US.UTF-8    LC_IDENTIFICATION=en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] NLP_0.1-11    openNLP_0.2-6

loaded via a namespace (and not attached):
[1] openNLPdata_1.5.3-4 compiler_3.4.4      parallel_3.4.4      rJava_0.9-10    

packageDescription('openNLP') # Version: 0.2-6
packageDescription('parallel') # Version: 3.4.4

startTime <- Sys.time()
print(startTime)
corpus.tagged <- parLapply(cl, myCorpus, tagCorpus)
endTime <- Sys.time()
print(endTime)
endTime - startTime
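
Since parLapply() splits the corpus into one chunk per worker, each worker receives roughly a tenth of the ~600k documents in a single piece; a chunked variant (a sketch; batchSize is a hypothetical tuning parameter) would bound per-worker memory and help localize failures:

# Sketch: tag the corpus in batches so each worker only ever holds
# part of one batch at a time
batchSize <- 50000  # hypothetical value - tune to the available RAM
batches <- split(seq_along(myCorpus), ceiling(seq_along(myCorpus) / batchSize))
corpus.tagged <- unlist(
    lapply(batches, function(i) parLapply(cl, myCorpus[i], tagCorpus)),
    recursive = FALSE
)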

Note that I have consulted a number of web forums, most notably: parallel parLapply setup

However, this did not seem to resolve my issue. Moreover, I am confused as to why the setup works for the ~90k articles but fails on the ~600k articles (I have 12 cores and 64GB of memory in total). Any advice is much appreciated.

1 Answer:

Answer 0 (score: 0)

I have managed to get this working by using Tyler Rinker's qdap package (https://github.com/trinker/qdap) directly. It took around 20 hours to run. Here is how the pos function from the qdap package does this in a one-liner:

corpus.tagged <- qdap::pos(myCorpus, parallel = TRUE, cores = detectCores() - 2)
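
One caveat (a note of my own, since qdap drives the same openNLP tagger through rJava): the Java heap option still has to be set before any rJava-backed package is attached, otherwise the already-running JVM silently ignores it:

# Must run before library(qdap) in a fresh session
options(java.parameters = "-Xmx3GB")
library(qdap)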