Encoding issue when annotating Spanish sentences with cleanNLP and the stanford-corenlp backend

Date: 2019-06-15 18:23:29

Tags: r stanford-nlp rjava

I am trying to annotate Spanish sentences with cleanNLP using the stanford-corenlp backend. When I inspect the output tokens, I notice that all non-ASCII characters are dropped and the words that contain them are split apart.

Here is a reproducible example:

> library(cleanNLP)
> 
> cnlp_init_corenlp(
+   language = "es", 
+   lib_location = "C:/path/to/stanford-corenlp-full-2018-10-05")
Loading required namespace: rJava
> 
> input <- "Esta mañana desperté feliz."
> 
> Encoding(input)
[1] "latin1"
> 
> input <- iconv(input, "latin1", "UTF-8")
> 
> Encoding(input)
[1] "UTF-8"
> 
> myannotation <- cleanNLP::cnlp_annotate(input)
> 
> myannotation$token$word
[1] "ROOT"    "Esta"    "ma"      "ana"     "despert" "feliz"   "."

Session info:

> sessionInfo()
R version 3.6.0 (2019-04-26)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17134)

Matrix products: default

locale:
[1] LC_COLLATE=Spanish_Argentina.1252  LC_CTYPE=Spanish_Argentina.1252   
[3] LC_MONETARY=Spanish_Argentina.1252 LC_NUMERIC=C                      
[5] LC_TIME=Spanish_Argentina.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] cleanNLP_2.3.0

loaded via a namespace (and not attached):
[1] compiler_3.6.0    tools_3.6.0       textreadr_0.9.0   data.table_1.12.2
[5] knitr_1.22        xfun_0.6          rJava_0.9-11      XML_3.98-1.19    
> 

1 Answer:

Answer 0 (score: 0)

In this GitHub issue, the package author gave me the answer. The problem was my machine's default encoding. I only needed to add options(encoding = "UTF-8") before annotating the string.
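
As a minimal sketch, this is how the fix would look applied to the reproducible example above (the CoreNLP lib_location path is the same placeholder used in the question):

# Set the default encoding before initializing the backend and annotating.
options(encoding = "UTF-8")

library(cleanNLP)

cnlp_init_corenlp(
  language = "es",
  lib_location = "C:/path/to/stanford-corenlp-full-2018-10-05")

input <- "Esta mañana desperté feliz."
myannotation <- cnlp_annotate(input)

# With the encoding option set, accented characters such as "ñ" and "é"
# should be preserved in the token output instead of being stripped.
myannotation$token$word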