Using Beautiful Soup to clean up HTML in Scrapy

Date: 2014-02-17 21:44:31

Tags: xpath scrapy

I am using Scrapy to try to pull some data I need out of Google Scholar. As an example, consider the following link: http://scholar.google.com/scholar?q=intitle%3Apython+xpath

Now, I want to scrape all the titles from this page. The process I am following is this:

scrapy shell "http://scholar.google.com/scholar?q=intitle%3Apython+xpath"

which drops me into the Scrapy shell, where I run:

>>> sel.xpath('//h3[@class="gs_rt"]/a').extract()

[
 u'<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.122.4438&amp;rep=rep1&amp;type=pdf"><b>Python </b>Paradigms for XML</a>', 
 u'<a href="https://svn.eecs.jacobs-university.de/svn/eecs/archive/bsc-2009/sbhushan.pdf">NCClient: A <b>Python </b>Library for NETCONF Clients</a>', 
 u'<a href="http://hal.archives-ouvertes.fr/hal-00759589/">PALSE: <b>Python </b>Analysis of Large Scale (Computer) Experiments</a>', 
 u'<a href="http://i.iinfo.cz/r2/kd/xmlprague2007.pdf#page=53"><b>Python </b>and XML</a>', 
 u'<a href="http://www.loadaveragezero.com/app/drx/Programming/Languages/Python/">drx: <b>Python </b>Programming Language [Computers: Programming: Languages: <b>Python</b>]-loadaverageZero</a>', 
 u'<a href="http://www.worldcolleges.info/sites/default/files/py10.pdf">XML and <b>Python </b>Tutorial</a>', 
 u'<a href="http://dl.acm.org/citation.cfm?id=2555791">Zato\u2014agile ESB, SOA, REST and cloud integrations in <b>Python</b></a>', 
 u'<a href="ftp://ftp.sybex.com/4021/4021index.pdf">XML Processing with Perl, <b>Python</b>, and PHP</a>', 
 u'<a href="http://books.google.com/books?hl=en&amp;lr=&amp;id=El4TAgAAQBAJ&amp;oi=fnd&amp;pg=PT8&amp;dq=python+xpath&amp;ots=RrFv0f_Y6V&amp;sig=tSXzPJXbDi6KYnuuXEDnZCI7rDA"><b>Python </b>&amp; XML</a>', 
 u'<a href="https://code.grnet.gr/projects/ncclient/repository/revisions/efed7d4cd5ac60cbb7c1c38646a6d6dfb711acc9/raw/docs/proposal.pdf">A <b>Python </b>Module for NETCONF Clients</a>'
]

As you can see, this output is raw HTML that needs cleaning. I now have a good sense of how to clean it up. The simplest way is probably to just use BeautifulSoup and try something like:

t = sel.xpath('//h3[@class="gs_rt"]/a').extract()
soup = BeautifulSoup(t)
text_parts = soup.findAll(text=True)
text = ''.join(text_parts)

This is based on an earlier SO question. A regex-based version has been suggested, but I am guessing BeautifulSoup will be more robust.
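For reference, here is roughly what I have in mind in the shell (a minimal sketch, assuming bs4 is installed; it cleans each extracted string separately, since BeautifulSoup is normally given a single piece of markup rather than a list):

from bs4 import BeautifulSoup

t = sel.xpath('//h3[@class="gs_rt"]/a').extract()
titles = []
for fragment in t:
    # parse one anchor at a time and keep only its text content
    soup = BeautifulSoup(fragment)
    titles.append(soup.get_text())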

I am a Scrapy n00b and can't figure out how to embed this in my spider. I have tried:

from scrapy.spider import Spider
from scrapy.selector import Selector
from bs4 import BeautifulSoup

from scholarscrape.items import ScholarscrapeItem

class ScholarSpider(Spider):
    name = "scholar"
    allowed_domains = ["scholar.google.com"]
    start_urls = [
        "http://scholar.google.com/scholar?q=intitle%3Apython+xpath"
    ]

    def parse(self, response):
        sel = Selector(response)
        item = ScholarscrapeItem()        
        t = sel.xpath('//h3[@class="gs_rt"]/a').extract()
        soup = BeautifulSoup(t)
        text_parts = soup.findAll(text=True)
        text = ''.join(text_parts)
        item['title'] = text
        return(item)

But that didn't work. Any suggestions would be helpful.


Edit 3: Based on the suggestions, I have modified my spider file to:

from scrapy.spider import Spider
from scrapy.selector import Selector
from bs4 import BeautifulSoup

from scholarscrape.items import ScholarscrapeItem

class ScholarSpider(Spider):
    name = "dmoz"
    allowed_domains = ["sholar.google.com"]
    start_urls = [
        "http://scholar.google.com/scholar?q=intitle%3Anine+facts+about+top+journals+in+economics"
    ]

    def parse(self, response):
        sel = Selector(response)
        item = ScholarscrapeItem()        
        titles = sel.xpath('//h3[@class="gs_rt"]/a')

        for title in titles:
            title = item.xpath('.//text()').extract()
            print "".join(title)

However, I get the following output:

2014-02-17 15:11:12-0800 [scrapy] INFO: Scrapy 0.22.2 started (bot: scholarscrape)
2014-02-17 15:11:12-0800 [scrapy] INFO: Optional features available: ssl, http11
2014-02-17 15:11:12-0800 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scholarscrape.spiders', 'SPIDER_MODULES': ['scholarscrape.spiders'], 'BOT_NAME': 'scholarscrape'}
2014-02-17 15:11:12-0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-02-17 15:11:13-0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-02-17 15:11:13-0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-02-17 15:11:13-0800 [scrapy] INFO: Enabled item pipelines:
2014-02-17 15:11:13-0800 [dmoz] INFO: Spider opened
2014-02-17 15:11:13-0800 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-02-17 15:11:13-0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-02-17 15:11:13-0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-02-17 15:11:13-0800 [dmoz] DEBUG: Crawled (200) <GET http://scholar.google.com/scholar?q=intitle%3Apython+xml> (referer: None)
2014-02-17 15:11:13-0800 [dmoz] ERROR: Spider error processing <GET http://scholar.google.com/scholar?q=intitle%3Apython+xml>
 Traceback (most recent call last):
   File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/base.py", line 1178, in mainLoop
     self.runUntilCurrent()
   File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/base.py", line 800, in runUntilCurrent
     call.func(*call.args, **call.kw)
   File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 368, in callback
     self._startRunCallbacks(result)
   File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 464, in _startRunCallbacks
     self._runCallbacks()
 --- <exception caught here> ---
   File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 551, in _runCallbacks
     current.result = callback(current.result, *args, **kw)
   File "/Users/krishnan/work/research/journals/code/scholarscrape/scholarscrape/spiders/scholar_spider.py", line 20, in parse
     title = item.xpath('.//text()').extract()
   File "/Library/Python/2.7/site-packages/scrapy/item.py", line 65, in __getattr__
     raise AttributeError(name)
 exceptions.AttributeError: xpath

2014-02-17 15:11:13-0800 [dmoz] INFO: Closing spider (finished)
2014-02-17 15:11:13-0800 [dmoz] INFO: Dumping Scrapy stats:
 {'downloader/request_bytes': 247,
  'downloader/request_count': 1,
  'downloader/request_method_count/GET': 1,
  'downloader/response_bytes': 108851,
  'downloader/response_count': 1,
  'downloader/response_status_count/200': 1,
  'finish_reason': 'finished',
  'finish_time': datetime.datetime(2014, 2, 17, 23, 11, 13, 196648),
  'log_count/DEBUG': 3,
  'log_count/ERROR': 1,
  'log_count/INFO': 7,
  'response_received_count': 1,
  'scheduler/dequeued': 1,
  'scheduler/dequeued/memory': 1,
  'scheduler/enqueued': 1,
  'scheduler/enqueued/memory': 1,
  'spider_exceptions/AttributeError': 1,
  'start_time': datetime.datetime(2014, 2, 17, 23, 11, 13, 21701)}
2014-02-17 15:11:13-0800 [dmoz] INFO: Spider closed (finished)


Edit 2: My original question was quite different, but I am now convinced this is the right approach. Original question (and first edit) below:

I am using Scrapy to try to pull some data I need out of Google Scholar. Take the following link as an example:

http://scholar.google.com/scholar?q=intitle%3Apython+xpath

Now, I want to scrape all the titles from this page. The process I am following is this:

scrapy shell "http://scholar.google.com/scholar?q=intitle%3Apython+xpath"

which drops me into the Scrapy shell, where I run:

>>> sel.xpath('string(//h3[@class="gs_rt"]/a)').extract()
[u'Python Paradigms for XML']

As you can see, this selects only the first title and none of the others on the page. I can't figure out how to modify my XPath so that it selects all of these elements. Any help is greatly appreciated.


Edit 1: My first approach was to try:

>>> sel.xpath('//h3[@class="gs_rt"]/a/text()').extract()
[u'Paradigms for XML', u'NCClient: A ', u'Library for NETCONF Clients', 
 u'PALSE: ', u'Analysis of Large Scale (Computer) Experiments', u'and XML', 
 u'drx: ', u'Programming Language [Computers: Programming: Languages: ',
 u']-loadaverageZero', u'XML and ', u'Tutorial', 
 u'Zato\u2014agile ESB, SOA, REST and cloud integrations in ', 
 u'XML Processing with Perl, ', u', and PHP', u'& XML', u'A ', 
 u'Module for NETCONF Clients']

The problem is that if you look at the actual Google Scholar page, you will see that the first entry is actually "Python Paradigms for XML", not "Paradigms for XML" as Scrapy returns. My guess is that "Python" is trapped inside the <b> tags, which is why text() is not doing what we want it to do.

2 answers:

Answer 0 (score: 4)

This is a really interesting and rather difficult question. The problem you are facing is that "Python" in the title is in bold, so it is treated as a node, while the rest of the title is plain text; text() therefore extracts only the text content and not the content of the <b> node.

Here is my solution. First, get all the links:

titles = sel.xpath('//h3[@class="gs_rt"]/a')

Then iterate over them and select the entire text content of each node, in other words join the text of the <b> node with the surrounding text nodes, for every child of each link:

for item in titles:
    title = item.xpath('.//text()').extract()
    print "".join(title)

This works because inside the for loop you are dealing with the text content of each link's children, so you can join the matching pieces. For example, within the loop, title will be equal to [u'Python ', u'Paradigms for XML'] or [u'NCClient: A ', u'Python ', u'Library for NETCONF Clients'].
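For completeness, here is a sketch of how this loop might look inside your spider's parse() method (illustrative only; note that the xpath() call inside the loop has to go on the link selector, not on the ScholarscrapeItem — calling xpath() on the item is what raises the AttributeError shown in Edit 3):

def parse(self, response):
    sel = Selector(response)
    links = sel.xpath('//h3[@class="gs_rt"]/a')

    for link in links:
        item = ScholarscrapeItem()
        # join the <b> fragments and the plain text nodes into one title
        item['title'] = "".join(link.xpath('.//text()').extract())
        yield item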

Answer 1 (score: 0)

The XPath string() function only returns the string representation of the first node in the node-set passed to it.

Just extract the nodes normally, without string():

sel.xpath('//h3[@class="gs_rt"]/a').extract()

or

sel.xpath('//h3[@class="gs_rt"]/a/text()').extract()
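If a single clean string per result is the goal, another option is to apply string(.) to each anchor node individually rather than to the whole node-set (a minimal sketch in the Scrapy shell):

for link in sel.xpath('//h3[@class="gs_rt"]/a'):
    # string(.) concatenates all descendant text, including the <b> parts
    print link.xpath('string(.)').extract()[0]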