Using lxml to parse namespaced HTML?

Date: 2015-04-10 15:33:32

Tags: python html html-parsing lxml pyquery

This is driving me completely mad; I've been struggling with it for hours. Any help would be greatly appreciated.

I'm using PyQuery 1.2.9 (built on top of lxml) to scrape this URL. I just want a list of all the links in the .linkoutlist section.

This is my request in full:

import requests
from pyquery import PyQuery as pq

response = requests.get('http://www.ncbi.nlm.nih.gov/pubmed/?term=The%20cost-effectiveness%20of%20mirtazapine%20versus%20paroxetine%20in%20treating%20people%20with%20depression%20in%20primary%20care')
doc = pq(response.content)
links = doc('#maincontent .linkoutlist a')
print links

But that returns an empty array. If I use this query instead:

links = doc('#maincontent .linkoutlist')

then I get this HTML back:

<div xmlns="http://www.w3.org/1999/xhtml" xmlns:xi="http://www.w3.org/2001/XInclude" class="linkoutlist">
   <h4>Full Text Sources</h4>
   <ul>
      <li><a title="Full text at publisher's site" href="http://meta.wkhealth.com/pt/pt-core/template-journal/lwwgateway/media/landingpage.htm?issn=0268-1315&amp;volume=19&amp;issue=3&amp;spage=125" ref="itool=Abstract&amp;PrId=3159&amp;uid=15107654&amp;db=pubmed&amp;log$=linkoutlink&amp;nlmid=8609061" target="_blank">Lippincott Williams &amp; Wilkins</a></li>
      <li><a href="http://ovidsp.ovid.com/ovidweb.cgi?T=JS&amp;PAGE=linkout&amp;SEARCH=15107654.ui" ref="itool=Abstract&amp;PrId=3682&amp;uid=15107654&amp;db=pubmed&amp;log$=linkoutlink&amp;nlmid=8609061" target="_blank">Ovid Technologies, Inc.</a></li>
   </ul>
   <h4>Other Literature Sources</h4>
   ...
</div>

So the parent selector does return HTML with plenty of <a> tags in it. It also appears to be valid HTML.

More experimenting suggests that, for some reason, lxml doesn't like the xmlns attribute on the opening div.

How can I get lxml to ignore it and parse this like regular HTML?
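To illustrate what I think is happening, here is a minimal reproduction (my own hypothetical snippet, using the standard library's ElementTree, which has the same underlying behaviour as lxml's XML parser):

```python
import xml.etree.ElementTree as ET

snippet = '<div xmlns="http://www.w3.org/1999/xhtml"><a href="#x">link</a></div>'
root = ET.fromstring(snippet)

# The default xmlns moves every element into that namespace,
# so an unqualified tag name no longer matches anything.
assert root.find('.//a') is None

# Qualifying the tag with the namespace URI does match.
link = root.find('.//{http://www.w3.org/1999/xhtml}a')
assert link is not None
print(link.get('href'))  # prints "#x"
```

That seems to be exactly why the unqualified `a` in my CSS selector finds nothing.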

UPDATE: tried ns_clean, still failing:

    from StringIO import StringIO
    from lxml import etree
    from lxml.cssselect import CSSSelector

    parser = etree.XMLParser(ns_clean=True)
    tree = etree.parse(StringIO(response.content), parser)
    sel = CSSSelector('#maincontent .rprt_all a')
    print sel(tree)

3 Answers:

Answer 0 (score: 6)

You need to handle the namespaces, including the empty one.

Working solution:

from pyquery import PyQuery as pq
import requests


response = requests.get('http://www.ncbi.nlm.nih.gov/pubmed/?term=The%20cost-effectiveness%20of%20mirtazapine%20versus%20paroxetine%20in%20treating%20people%20with%20depression%20in%20primary%20care')

namespaces = {'xi': 'http://www.w3.org/2001/XInclude', 'test': 'http://www.w3.org/1999/xhtml'}
links = pq('#maincontent .linkoutlist test|a', response.content, namespaces=namespaces)
for link in links:
    print link.attrib.get("title", "No title")

This prints the titles of all links matching the selector:

Full text at publisher's site
No title
Free resource
Free resource
Free resource
Free resource

Alternatively, just set the parser to "html" and forget about namespaces:

links = pq('#maincontent .linkoutlist a', response.content, parser="html")
for link in links:
    print link.attrib.get("title", "No title")
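A third way to sidestep the problem (my own sketch, not part of the original answer) is to keep the XML parse but strip the namespace from every tag afterwards, so unqualified searches and selectors match again. Shown here with the stdlib ElementTree on a made-up snippet; the same tag rewriting works on an lxml tree:

```python
import xml.etree.ElementTree as ET

def strip_namespaces(root):
    """Rewrite '{uri}tag' names to plain 'tag' so that
    unqualified searches and selectors match again."""
    for el in root.iter():
        if isinstance(el.tag, str) and '}' in el.tag:
            el.tag = el.tag.split('}', 1)[1]
    return root

snippet = ('<div xmlns="http://www.w3.org/1999/xhtml" class="linkoutlist">'
           '<ul><li><a href="#first">one</a></li>'
           '<li><a href="#second">two</a></li></ul></div>')
root = strip_namespaces(ET.fromstring(snippet))

hrefs = [a.get('href') for a in root.findall('.//a')]
print(hrefs)  # ['#first', '#second']
```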

Answer 1 (score: 2)

Good luck getting standard XML/DOM parsing to work on most HTML. Your best bet is to use BeautifulSoup (`pip install beautifulsoup4` or `easy_install beautifulsoup4`), which has a lot of handling for badly-built structures. Maybe something like this instead?

import requests
from bs4 import BeautifulSoup

response = requests.get('http://www.ncbi.nlm.nih.gov/pubmed/?term=The%20cost-effectiveness%20of%20mirtazapine%20versus%20paroxetine%20in%20treating%20people%20with%20depression%20in%20primary%20care')
bs = BeautifulSoup(response.content, 'html.parser')
div = bs.find('div', class_='linkoutlist')
links = [ a['href'] for a in div.find_all('a') ]

>>> links
['http://meta.wkhealth.com/pt/pt-core/template-journal/lwwgateway/media/landingpage.htm?issn=0268-1315&volume=19&issue=3&spage=125', 'http://ovidsp.ovid.com/ovidweb.cgi?T=JS&PAGE=linkout&SEARCH=15107654.ui', 'https://www.researchgate.net/publication/e/pm/15107654?ln_t=p&ln_o=linkout', 'http://www.diseaseinfosearch.org/result/2199', 'http://www.nlm.nih.gov/medlineplus/antidepressants.html', 'http://toxnet.nlm.nih.gov/cgi-bin/sis/search/r?dbs+hsdb:@term+@rn+24219-97-4']

I know it's not the library you wanted to use, but when it comes to the DOM I have slammed my head against walls many times. The creator of BeautifulSoup has circumvented a lot of the edge cases that tend to occur in the wild.

Answer 2 (score: 0)

If I remember correctly, I ran into a similar problem myself a long time ago. You can "ignore" the namespace by mapping None to the namespace URI:

sel = CSSSelector('#maincontent .rprt_all a', namespaces={None: "http://www.w3.org/1999/xhtml"})