The documentation says I can do this:
lxml can parse from a local file, an HTTP URL or an FTP URL. It also automatically detects and reads gzip-compressed XML files (.gz).
(From http://lxml.de/parsing.html, under "Parsers".)
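In other words, the documented usage is just a one-liner (a minimal sketch based only on the quoted docs; the file name and URL below are placeholders, not taken from the original post):

from lxml import etree

# What the docs describe: parse straight from a path or a plain-HTTP URL.
tree = etree.parse('feed.xml.gz')                  # gzip compression is detected automatically
tree = etree.parse('http://example.com/feed.xml')  # plain HTTP URL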
But a quick experiment seems to suggest otherwise:
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:45:13) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from lxml import etree
>>> parser = etree.HTMLParser()
>>> from urllib.request import urlopen
>>> with urlopen('https://pypi.python.org/simple') as f:
...     tree = etree.parse(f, parser)
...
>>> tree2 = etree.parse('https://pypi.python.org/simple', parser)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lxml.etree.pyx", line 3299, in lxml.etree.parse (src\lxml\lxml.etree.c:72655)
File "parser.pxi", line 1791, in lxml.etree._parseDocument (src\lxml\lxml.etree.c:106263)
File "parser.pxi", line 1817, in lxml.etree._parseDocumentFromURL (src\lxml\lxml.etree.c:106564)
File "parser.pxi", line 1721, in lxml.etree._parseDocFromFile (src\lxml\lxml.etree.c:105561)
File "parser.pxi", line 1122, in lxml.etree._BaseParser._parseDocFromFile (src\lxml\lxml.etree.c:100456)
File "parser.pxi", line 580, in lxml.etree._ParserContext._handleParseResultDoc (src\lxml\lxml.etree.c:94543)
File "parser.pxi", line 690, in lxml.etree._handleParseResult (src\lxml\lxml.etree.c:96003)
File "parser.pxi", line 618, in lxml.etree._raiseParseError (src\lxml\lxml.etree.c:95015)
OSError: Error reading file 'https://pypi.python.org/simple': failed to load external entity "https://pypi.python.org/simple"
>>>
I can use the urlopen approach, but the documentation seems to imply that passing a URL directly is somehow better. Also, if the documentation is inaccurate, I'm a little worried about relying on lxml, especially once I start needing to do more complex things.
What is the correct way to parse HTML from a known URL with lxml? And where should I look for this to be documented?
Update: I get the same error if I use an http URL instead of an https URL.
Answer 0 (score: 9):
The problem is that lxml does not support HTTPS URLs, and http://pypi.python.org/simple redirects to the HTTPS version.
So for any secure (https) site you will need to fetch the URL yourself:
from lxml import etree
from urllib.request import urlopen
parser = etree.HTMLParser()
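# Fetch the page with urllib and hand the open file object to lxml;
# etree.parse() reads from file-like objects, so lxml never opens the URL itself.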
with urlopen('https://pypi.python.org/simple') as f:
    tree = etree.parse(f, parser)
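To check that the document really parsed, a quick XPath query over the result works (a minimal sketch; the //a/@href expression simply assumes the page is a flat list of links, which is what the PyPI simple index looks like):

# Count the <a href="..."> links on the parsed page.
links = tree.xpath('//a/@href')
print(len(links), 'links found')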