I am trying to run the following sentence-compression project locally: https://github.com/zhaohengyang/Generate-Parallel-Data-for-Sentence-Compression
So I copied the files and installed all the dependencies with conda. I made a few small modifications, such as reading the data from a URL instead of the local disk, and bundling his parallel_data_gen.py into my own .py file.
But when I run it, I get:
Spacy library cannot parse the sentence into a tree. Please ignore this sentence pair
----------------59-------------------
reducing sentence: This year the Venezuelan government plans to continue its pace of land expropriations in order to move towards what it terms ``agrarian socialism''.
reducing headline: Venezuelan government to continue pace of land expropriations for ``agrarian socialism''
Traceback (most recent call last):
  File "/home/user/dev/projects/python-snippets/zhaohengyang/sentence-compression.py", line 701, in <module>
    reduce_sentence(sample)
  File "/home/user/dev/projects/python-snippets/zhaohengyang/sentence-compression.py", line 641, in reduce_sentence
    sentence_info = parse_info(sentence)
  File "/home/user/dev/projects/python-snippets/zhaohengyang/sentence-compression.py", line 616, in parse_info
    heads = [index + item[0] for index, item in enumerate(doc.to_array([HEAD]))]
IndexError: invalid index to scalar variable.
Since I am a novice Python user, I am not sure how to solve this.
Here is the complete code I am running to reproduce the problem: https://gist.github.com/avidanyum/3edfbc96ea22807445ab5307830d41db
The inner snippet that fails:
def parse_info(sentence):
    doc = nlp(sentence)
    heads = [index + item[0] for index, item in enumerate(doc.to_array([HEAD]))]
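To see why this line raises `IndexError: invalid index to scalar variable.`: when `to_array` is called with a single field, it returns a one-dimensional NumPy array, so each `item` in the loop is a NumPy scalar and cannot itself be indexed. A minimal sketch (plain NumPy, not the project's code) that reproduces the same error:

```python
import numpy as np

# With a single field, to_array yields a 1-D array, so iterating
# over it produces NumPy scalars rather than rows.
offsets = np.array([3, 1, 0], dtype=np.uint64)

item = offsets[0]   # a NumPy scalar
try:
    item[0]         # scalars are not subscriptable
except IndexError as err:
    print(err)      # the same IndexError message as in the traceback
```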
And this is how I load nlp:
import spacy
print('if you didnt run: python -m spacy download en')
import spacy.lang.en
nlp = spacy.load('en')
Some more information about my environment:
/home/user/home/user/dev/anaconda3/envs/pymachine/bin/python --version
Python 2.7.15 :: Anaconda, Inc.
Answer 0 (score: 0)
A quick note: I am running spaCy 2.0 on Python 3.6, and I only did a quick test with a sample sentence:
nlp = spacy.load('en_core_web_lg')
doc = nlp("Here is a test sentence for me to use.")
I ran into a couple of errors when running your code, both on the line you pointed out:
heads = [(index, item) for index, item in enumerate(doc.to_array([HEAD]))]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'HEAD' is not defined
This is because the to_array call accepts a list of string objects. To fix this:
# Note that HEAD is now a string, rather than a variable
heads = [(index, item) for index, item in enumerate(doc.to_array(['HEAD']))]
heads
[(0, 3), (1, 1), (2, 1), (3, 0), (4, 18446744073709551615), (5, 1), (6, 18446744073709551614), (7, 18446744073709551612)]
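An aside on the huge values like 18446744073709551615 (this explanation is mine, not part of the original answer): to_array returns unsigned 64-bit integers, so negative head offsets (tokens whose head lies to their left) wrap around modulo 2**64. They can be recovered by reinterpreting the array as signed integers:

```python
import numpy as np

# Relative head offsets as to_array returns them: uint64, so
# negative offsets wrap around to values near 2**64.
raw = np.array([3, 1, 1, 0, 18446744073709551615], dtype=np.uint64)

# Reinterpret the same bits as signed 64-bit integers.
signed = raw.astype(np.int64)
print(signed.tolist())  # [3, 1, 1, 0, -1]
```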
That resolves the NameError. You will also notice that the item returned by enumerate is an int (a scalar), so it has no index attribute. Drop the [0] index and that should fix your problem.
With those changes, your method runs without errors:
def parse_info(sentence):
    doc = nlp(sentence)
    heads = [index + item for index, item in enumerate(doc.to_array(['HEAD']))]
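One caveat worth adding (my illustration, not from the answer): because the raw offsets are unsigned, `index + item` only gives the correct absolute head index after the wrapped values are converted back to signed integers. A minimal sketch using the offsets printed above:

```python
# Convert a wrapped uint64 offset back to its signed value, then
# add the token index to get the absolute index of the head token.
def signed_offset(value, bits=64):
    return value - 2**bits if value >= 2**(bits - 1) else value

offsets = [3, 1, 1, 0, 18446744073709551615, 1,
           18446744073709551614, 18446744073709551612]
heads = [i + signed_offset(o) for i, o in enumerate(offsets)]
print(heads)  # [3, 2, 3, 3, 3, 6, 4, 3]
```

In spaCy 2.0 the same absolute indices are available directly as `token.head.i`, which sidesteps the unsigned arithmetic entirely.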