Scrapy web crawler returns an invalid path error

Date: 2015-10-04 04:42:09

Tags: python html xpath web-scraping scrapy

I'm new to Scrapy and have been following the basic documentation.

I have a website from which I'm trying to scrape some links and then navigate to a few of them. Specifically, I want to get the Cokelore, College, and Computers links. This is the code I'm using:

import scrapy

class DmozSpider(scrapy.Spider):
    name = "snopes"
    allowed_domains = ["snopes.com"]
    start_urls = [
        "http://www.snopes.com/info/whatsnew.asp"
    ]

    def parse(self, response):
        print response.xpath('//div[@class="navHeader"]/ul/')
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)

Here is my error:

2015-10-03 23:17:29 [scrapy] INFO: Enabled item pipelines: 
2015-10-03 23:17:29 [scrapy] INFO: Spider opened
2015-10-03 23:17:29 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-10-03 23:17:29 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-10-03 23:17:30 [scrapy] DEBUG: Crawled (200) <GET http://www.snopes.com/info/whatsnew.asp> (referer: None)
2015-10-03 23:17:30 [scrapy] ERROR: Spider error processing <GET http://www.snopes.com/info/whatsnew.asp> (referer: None)
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 588, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/Users/Gaby/Documents/Code/School/689/tutorial/tutorial/spiders/dmoz_spider.py", line 11, in parse
    print response.xpath('//div[@class="navHeader"]/ul/')
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/http/response/text.py", line 109, in xpath
    return self.selector.xpath(query)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/selector/unified.py", line 100, in xpath
    raise ValueError(msg if six.PY3 else msg.encode("unicode_escape"))
ValueError: Invalid XPath: //div[@class="navHeader"]/ul/
2015-10-03 23:17:30 [scrapy] INFO: Closing spider (finished)
2015-10-03 23:17:30 [scrapy] INFO: Dumping Scrapy stats:

I think the error has to do with the /ul in my xpath() call, but I can't figure out why. //div[@class="navHeader"] works fine on its own; as soon as I start appending more to it, it breaks.

The section of the site I'm trying to scrape is structured like this:

<DIV CLASS="navHeader">CATEGORIES:</DIV>
    <UL>
        <LI><A HREF="/autos/autos.asp">Autos</A></LI>
        <LI><A HREF="/business/business.asp">Business</A></LI>
        <LI><A HREF="/cokelore/cokelore.asp">Cokelore</A></LI>
        <LI><A HREF="/college/college.asp">College</A></LI>
        <LI><A HREF="/computer/computer.asp">Computers</A></LI>
    </UL>
<DIV CLASS="navSpacer"> &nbsp; </DIV>
    <UL>
        <LI><A HREF="/crime/crime.asp">Crime</A></LI>
        <LI><A HREF="/critters/critters.asp">Critter Country</A></LI>
        <LI><A HREF="/disney/disney.asp">Disney</A></LI>
        <LI><A HREF="/embarrass/embarrass.asp">Embarrassments</A></LI>
        <LI><A HREF="/photos/photos.asp">Fauxtography</A></LI>
    </UL>

1 Answer:

Answer 0 (score: 1)

You just need to remove the trailing /. Replace:

//div[@class="navHeader"]/ul/

with:

//div[@class="navHeader"]/ul

Note, though, that this XPath still won't match anything on the page: the ul elements are siblings of the navigation header div, not its children. Use the following-sibling axis instead:

In [1]: response.xpath('//div[@class="navHeader"]/following-sibling::ul//li/a/text()').extract()
Out[1]: 
[u'Autos',
 u'Business',
 u'Cokelore',
 u'College',
 # ...
 u'Weddings']
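
To do what the question actually asks (grab the Cokelore, College, and Computers links and follow them), that corrected XPath can be dropped straight into the spider. Below is a minimal sketch assuming Scrapy 1.0+ (where response.urljoin() and extract_first() are available); the wanted list and the parse_category callback are illustrative names, not part of the original code:

import scrapy

class DmozSpider(scrapy.Spider):
    name = "snopes"
    allowed_domains = ["snopes.com"]
    start_urls = ["http://www.snopes.com/info/whatsnew.asp"]

    # Categories the question wants to follow (illustrative list).
    wanted = ["Cokelore", "College", "Computers"]

    def parse(self, response):
        # The <ul> lists are siblings of the navHeader div, so use following-sibling.
        for link in response.xpath('//div[@class="navHeader"]/following-sibling::ul//li/a'):
            text = link.xpath('text()').extract_first()
            href = link.xpath('@href').extract_first()
            if text in self.wanted:
                # Build an absolute URL and follow the category link.
                yield scrapy.Request(response.urljoin(href), callback=self.parse_category)

    def parse_category(self, response):
        # Hypothetical callback: save each category page to disk,
        # mirroring what the original parse() did with the start page.
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)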