Scrapy SgmlLinkExtractor rule not working

Date: 2015-01-20 16:21:35

Tags: python scrapy

I created a spider from a tutorial on YouTube. It is supposed to scrape this page: http://sfbay.craigslist.org/npo/. But I think the craigslist HTML has changed since then, and it no longer works; I suspect that is the problem. The page's next-page link is now:

<a href="/search/npo?s=100&amp;" class="button next" title="next page"> sonraki &gt; </a>

I found a question that looks the same as mine, but it doesn't mention that the page HTML has changed: Scrapy - doesn't crawl

At that time, the next-page link was:

<a href="index100.html">next 100 postings</a>

Can you tell me where the problem is? Is it the regular expression "/search/npo?s=\d00"?
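One thing worth noting: in a regular expression, ? is a metacharacter, so in /search/npo?s=\d00 the ? makes the o optional instead of matching the literal ? in the URL. A quick sketch I used to test both patterns against the actual href in isolation:

import re

href = "/search/npo?s=100&"  # the href from the current page

# Unescaped '?': 'npo?' means 'np' plus an optional 'o', so the literal
# '?' in the URL is never matched and the search fails.
print re.search(r"/search/npo?s=\d00", href)   # None

# Escaping the '?' turns it back into a literal character.
print re.search(r"/search/npo\?s=\d00", href)  # matches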

Edit:

I tried allow=('search/npo\?s=\d00',) but that didn't work either. However, when I tried

start_urls = ["http://sfbay.tr.craigslist.org/search/npo?"]

allow=('s=\d00',)

it works.
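Putting that working combination together looks roughly like this (a sketch assembled from the snippets above, keeping the restrict_xpaths from my original rule):

start_urls = ["http://sfbay.tr.craigslist.org/search/npo?"]

rules = (
    Rule(SgmlLinkExtractor(allow=('s=\d00',),
                           restrict_xpaths=('//a[@class="button next"]',)),
         callback="parse_items", follow=True),
)

My original, non-working spider is below for reference: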

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from craiglist_sample.items import CraiglistSampleItem

class MySpider(CrawlSpider):
    name = "craigs"
    allowed_domains = ["craigslist.org"]
    start_urls = ["http://sfbay.craigslist.org/npo/"]


    rules = (
        Rule(SgmlLinkExtractor(allow=("/search/npo?s=\d00",),
                               restrict_xpaths=('//a[@class="button next"]',)),
             callback="parse_items", follow=True),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//span[@class="pl"]')
        items = []
        for title in titles:
            item = CraiglistSampleItem()
            item["title"] = title.select("a/text()").extract()
            item["link"] = title.select("a/@href").extract()
            items.append(item)
        return items

Console log:

C:\Users\bigM\Desktop\craiglist_sample>scrapy crawl craigs
C:\Python27\lib\site-packages\twisted-14.0.2-py2.7-win32.egg\twisted\internet\_sslverify.py:184: UserWarning: Your version of pyOpenSSL, 0.11, is out of date.  Please upgrade to at least 0.12 and install service_identity from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.
  verifyHostname, VerificationError = _selectVerifyImplementation()
2015-01-20 18:18:37+0200 [scrapy] INFO: Scrapy 0.24.4 started (bot: craiglist_sample)
2015-01-20 18:18:37+0200 [scrapy] INFO: Optional features available: ssl, http11
2015-01-20 18:18:37+0200 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'craiglist_sample.spiders', 'SPIDER_MODULES': ['craiglist_sample.spiders'], 'BOT_NAME': 'craiglist_sample'}
2015-01-20 18:18:37+0200 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-01-20 18:18:37+0200 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-01-20 18:18:37+0200 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-01-20 18:18:37+0200 [scrapy] INFO: Enabled item pipelines:
2015-01-20 18:18:37+0200 [craigs] INFO: Spider opened
2015-01-20 18:18:37+0200 [craigs] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-01-20 18:18:37+0200 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-01-20 18:18:37+0200 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-01-20 18:18:39+0200 [craigs] DEBUG: Redirecting (301) to <GET http://sfbay.craigslist.org/search/npo> from <GET http://sfbay.craigslist.org/npo/>
2015-01-20 18:18:40+0200 [craigs] DEBUG: Crawled (200) <GET http://sfbay.craigslist.org/search/npo> (referer: None)
2015-01-20 18:18:40+0200 [craigs] INFO: Closing spider (finished)
2015-01-20 18:18:40+0200 [craigs] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 494,
         'downloader/request_count': 2,
         'downloader/request_method_count/GET': 2,
         'downloader/response_bytes': 13196,
         'downloader/response_count': 2,
         'downloader/response_status_count/200': 1,
         'downloader/response_status_count/301': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2015, 1, 20, 16, 18, 40, 161000),
         'log_count/DEBUG': 4,
         'log_count/INFO': 7,
         'response_received_count': 1,
         'scheduler/dequeued': 2,
         'scheduler/dequeued/memory': 2,
         'scheduler/enqueued': 2,
         'scheduler/enqueued/memory': 2,
         'start_time': datetime.datetime(2015, 1, 20, 16, 18, 37, 942000)}
2015-01-20 18:18:40+0200 [craigs] INFO: Spider closed (finished)
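The log shows that only the start page was fetched (a 301 redirect to /search/npo, then a single 200) and no links were followed. One way to check what the extractor matches, independent of the spider, is scrapy shell; a sketch of such a session (the comments describe what I would expect given the unescaped ?, not captured output):

scrapy shell "http://sfbay.craigslist.org/search/npo"
>>> from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
>>> # with the unescaped '?' this should return an empty list:
>>> SgmlLinkExtractor(allow=("/search/npo?s=\d00",)).extract_links(response)
>>> # with the '?' escaped, the "next" link should be extracted:
>>> SgmlLinkExtractor(allow=("/search/npo\?s=\d00",)).extract_links(response)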

1 Answer:

Answer 0 (score: 0)

I had the same problem and never figured out how to make the rules work. My solution: skip the rules, extract every href from the page yourself, and yield the requests manually:

from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector

def parse(self, response):
    # htmlData was not shown in the original snippet; presumably the
    # selector built from the response:
    htmlData = HtmlXPathSelector(response)
    # collect every href attribute on the page
    next_urls = htmlData.xpath('//attribute::href').extract()
    rootURL = 'http://www.kaggle.com'
    for url in next_urls:
        url = url.encode('utf8')
        if url.startswith("http"):
            pass                   # already absolute, use as-is
        elif url.startswith("/"):
            url = rootURL + url    # make site-relative links absolute
        else:
            continue               # skip fragments, mailto:, etc.
        print '----url:', url
        yield Request(url, callback=self.parse)
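For what it's worth, the same follow-everything behaviour can also be written as a rule with no allow pattern at all, since link extractors already resolve relative hrefs to absolute URLs. A minimal sketch (untested; the spider name and domain are placeholders):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class FollowAllSpider(CrawlSpider):
    name = "follow_all"                  # placeholder name
    allowed_domains = ["kaggle.com"]
    start_urls = ["http://www.kaggle.com/"]

    # no allow= pattern: follow every link the extractor finds
    rules = (
        Rule(SgmlLinkExtractor(), callback="parse_items", follow=True),
    )

    def parse_items(self, response):
        pass  # item extraction goes here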