I am trying to crawl the site http://www.yhd.com and scrape the price and product ID from it. This is my spider/test.py file, but it doesn't seem to download anything at all, and I can't work out why.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from try_yhd.items import TryYhdItem

class MySpider(CrawlSpider):
    name = "yhdspider"
    allowed_domains = ["http://www.yihaodian.com.yhcdn.cn"]
    start_urls = ['http://item.yhd.com/item/11271079',
                  'http://item.yhd.com/item/2149386',
                  ]
    rules = [Rule(SgmlLinkExtractor(allow=['/item/\d+']), 'parse_torrent', follow=True)]

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        item = TryYhdItem()
        # find the price and product id
        item['price'] = hxs.select("//span[@id='current_price']").extract()[0]
        item['id'] = hxs.select("//p[@class='product_id']/text()").extract()[0]
        return item
Here is the output:
2014-09-22 10:18:31-0500 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-09-22 10:18:31-0500 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-09-22 10:18:32-0500 [yhdspider] DEBUG: Crawled (200) <GET http://item.yhd.com/item/11271079> (referer: None)
2014-09-22 10:18:32-0500 [yhdspider] DEBUG: Filtered offsite request to 'item.yhd.com': <GET http://item.yhd.com/item/11271079>
2014-09-22 10:18:32-0500 [yhdspider] DEBUG: Crawled (200) <GET http://item.yhd.com/item/2149386> (referer: None)
2014-09-22 10:18:32-0500 [yhdspider] INFO: Closing spider (finished)
2014-09-22 10:18:32-0500 [yhdspider] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 447,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 68145,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 9, 22, 15, 18, 32, 892277),
'log_count/DEBUG': 5,
'log_count/INFO': 7,
'offsite/domains': 1,
'offsite/filtered': 2,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2014, 9, 22, 15, 18, 31, 211841)}
2014-09-22 10:18:32-0500 [yhdspider] INFO: Spider closed (finished)
This is the output log I get after editing. Can anyone tell me what is wrong?
Answer 0 (score: 1)
You need to add item.yhd.com to allowed_domains. The extracted requests are being filtered as offsite by the OffsiteMiddleware, which is enabled by default:
'offsite/domains': 1,
'offsite/filtered': 2,
This middleware filters out every request whose host name is not listed in the spider's allowed_domains attribute.
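A minimal sketch of the fix, assuming you only want to crawl pages on item.yhd.com (note that allowed_domains takes bare host names, not full URLs with a scheme):

from scrapy.contrib.spiders import CrawlSpider

class MySpider(CrawlSpider):
    name = "yhdspider"
    # bare host name, no "http://"; "yhd.com" alone would also cover the subdomain
    allowed_domains = ["item.yhd.com"]
    start_urls = ['http://item.yhd.com/item/11271079',
                  'http://item.yhd.com/item/2149386']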
You have a couple of other options as well. If the spider does not define an allowed_domains attribute, or the attribute is empty, the offsite middleware allows all requests.
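For this spider that option is a one-line change (a sketch; it means every link the rules extract is followed, whatever host it points to):

    allowed_domains = []  # or delete the attribute entirely; nothing is filtered as offsite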
If a request has its dont_filter attribute set, the offsite middleware allows it even when its domain is not listed in the allowed domains.
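For example (a rough sketch, not code from the question): requests constructed with dont_filter=True are let through wherever they are yielded from:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import Request

class MySpider(CrawlSpider):
    # ... name, start_urls and rules as in the question ...

    def start_requests(self):
        for url in self.start_urls:
            # dont_filter=True makes OffsiteMiddleware let the request through;
            # it also skips the scheduler's duplicate filter, so use it sparingly
            yield Request(url, dont_filter=True)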