Why doesn't the spider return any response for this website?

Asked: 2015-12-13 18:03:21

Tags: python web-scraping web-crawler scrapy

I'm using Scrapy to scrape this site, but when I run the spider I don't get any response back.

I tried reddit.com and quora.com, and both returned data (the crawl started), but not the site I actually want.

Here is my simple spider:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class FirstSpider(CrawlSpider):
    name = "jobs"
    allowed_domains = ["bayt.com"]
    start_urls = (
                  'http://www.bayt.com/',
                 )

    rules = [
                Rule(
                    LinkExtractor(allow=['.*']),
                )
    ]

I've tried several URL variations in start_urls, but nothing seems to work.

Here is the log after running the spider:

2015-12-13 20:31:45 [scrapy] INFO: Scrapy 1.0.3 started (bot: bayt)
2015-12-13 20:31:45 [scrapy] INFO: Optional features available: ssl, http11
2015-12-13 20:31:45 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'bayt.spiders', 'SPIDER_MODULES': ['bayt.spiders'], 'BOT_NAME': 'bayt'}
2015-12-13 20:31:45 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-12-13 20:31:45 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-12-13 20:31:45 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-12-13 20:31:45 [scrapy] INFO: Enabled item pipelines: 
2015-12-13 20:31:45 [scrapy] INFO: Spider opened
2015-12-13 20:31:45 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-12-13 20:31:45 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-12-13 20:31:45 [scrapy] DEBUG: Redirecting (302) to <GET http://www.bayt.com/en/jordan/> from <GET http://www.bayt.com/>
2015-12-13 20:31:46 [scrapy] DEBUG: Crawled (200) <GET http://www.bayt.com/en/jordan/> (referer: None)
2015-12-13 20:31:46 [scrapy] INFO: Closing spider (finished)
2015-12-13 20:31:46 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 881,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 2320,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/302': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 12, 13, 18, 31, 46, 212468),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2015, 12, 13, 18, 31, 45, 138408)}
2015-12-13 20:31:46 [scrapy] INFO: Spider closed (finished)   

2 Answers:

Answer 0 (score: 1)

The problem is that you aren't actually using the rules the way you mentioned: you have your own parse method, and that won't work. CrawlSpider uses the parse method internally, so you shouldn't override it.

Now, if you are still getting items while overriding parse, that's because parse is the default callback for the start_urls requests. So the requests aren't really following your rules; the spider only crawls the URLs in start_urls.

Just rename your parse method to something else and specify it as the callback in your rules.
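
For example, a minimal sketch of what that could look like (the callback name parse_page and the item fields are just illustrative assumptions, not the asker's actual parsing logic):

    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule


    class JobsSpider(CrawlSpider):
        name = "jobs"
        allowed_domains = ["bayt.com"]
        start_urls = ["http://www.bayt.com/en/jordan/"]

        rules = [
            # Callback is NOT named "parse", so CrawlSpider's own
            # parse() method stays intact and the rule is applied.
            Rule(LinkExtractor(allow=[".*"]), callback="parse_page", follow=True),
        ]

        def parse_page(self, response):
            # Hypothetical callback: yield the URL and page title of each crawled page.
            yield {
                "url": response.url,
                "title": response.xpath("//title/text()").extract_first(),
            }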

Answer 1 (score: 0)

I ran curl www.bayt.com on the command line, and it seems they redirect the request to http://www.bayt.com/en/jordan/.

I put that URL in my start_urls and changed the user agent in settings.py, and it worked.
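
A minimal sketch of those two changes, assuming a browser-like user-agent string (the exact string below is only an example, not necessarily what this answer used):

    # in the spider: start from the URL the site redirects to
    start_urls = ["http://www.bayt.com/en/jordan/"]

    # in settings.py: override Scrapy's default User-Agent
    USER_AGENT = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/47.0 Safari/537.36")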