Scrapy newbie here. I'm using Scrapy to pull a large amount of data from a single site. When I run the script it works fine for a few minutes, but then it slows down, eventually stalls, and keeps throwing the following errors for one URL after another:
2013-07-20 14:15:17-0700 [billboard_spider] DEBUG: Retrying <GET http://www.billboard.com/charts/1981-01-17/hot-100> (failed 1 times): Getting http://www.billboard.com/charts/1981-01-17/hot-100 took longer than 180 seconds.
2013-07-20 14:16:56-0700 [billboard_spider] DEBUG: Crawled (502) <GET http://www.billboard.com/charts/1981-01-17/hot-100> (referer: None)
The errors above pile up for different URLs, and I'm not sure what's causing them.

Here's the script:
    import datetime

    from scrapy.item import Item, Field
    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector


    class BillBoardItem(Item):
        date = Field()
        song = Field()
        artist = Field()


    BASE_URL = "http://www.billboard.com/charts/%s/hot-100"


    class BillBoardSpider(BaseSpider):
        name = "billboard_spider"
        allowed_domains = ["billboard.com"]

        def __init__(self):
            date = datetime.date(year=1975, month=12, day=27)
            self.start_urls = []
            while True:
                if date.year >= 2013:
                    break
                self.start_urls.append(BASE_URL % date.strftime('%Y-%m-%d'))
                date += datetime.timedelta(days=7)

        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            date = hxs.select('//span[@class="chart_date"]/text()').extract()[0]
            songs = hxs.select('//div[@class="listing chart_listing"]/article')
            item = BillBoardItem()
            item['date'] = date
            for song in songs:
                try:
                    track = song.select('.//header/h1/text()').extract()[0]
                    track = track.rstrip()
                    item['song'] = track
                    item['artist'] = song.select('.//header/p[@class="chart_info"]/a/text()').extract()[0]
                    break
                except:
                    continue
            yield item
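For context, running the `__init__` date loop standalone (outside Scrapy) shows how many requests the spider queues up front, which may be relevant to the slowdown:

```python
import datetime

BASE_URL = "http://www.billboard.com/charts/%s/hot-100"

# Reproduce the spider's start_urls generation to count the
# weekly chart URLs queued between 1975-12-27 and the end of 2012.
date = datetime.date(year=1975, month=12, day=27)
urls = []
while date.year < 2013:
    urls.append(BASE_URL % date.strftime('%Y-%m-%d'))
    date += datetime.timedelta(days=7)

print(len(urls))  # 1932
print(urls[0])    # http://www.billboard.com/charts/1975-12-27/hot-100
```

So the spider starts with nearly two thousand URLs against a single domain.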
Answer (score: 3)
The spider works for me and scrapes the data without any problems. So, as @Tiago suggested, you are most likely being banned.
Going forward, read how to avoid getting banned and tune your Scrapy settings accordingly. I would start by increasing DOWNLOAD_DELAY
and rotating your IP addresses.
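As a starting point, here is a minimal sketch of what that might look like in the project's `settings.py`; the setting names are Scrapy's standard ones, but the specific values are illustrative assumptions you should tune for the target site:

```python
# settings.py -- illustrative throttling values, not tested against
# billboard.com; adjust as needed.

DOWNLOAD_DELAY = 2.0                 # wait 2 s between requests
CONCURRENT_REQUESTS_PER_DOMAIN = 1   # one in-flight request per domain
RETRY_TIMES = 5                      # retry failing pages a few extra times
DOWNLOAD_TIMEOUT = 60                # give up faster than the 180 s default

# Let Scrapy's AutoThrottle extension adapt the delay to server latency.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 2.0
```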
Also, consider switching to a real automated browser, e.g. selenium.
Also, see whether you can get the data from the RSS XML feed instead: http://www.billboard.com/rss.
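If the feed turns out to be usable, parsing it is much lighter than crawling HTML. A sketch with the standard library, using a hypothetical RSS 2.0 snippet (the actual structure of Billboard's feed is an assumption here and may differ):

```python
import xml.etree.ElementTree as ET

# Hypothetical feed content in the shape of a generic RSS 2.0 document;
# substitute the bytes fetched from the real feed URL.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>The Hot 100</title>
    <item>
      <title>1: Some Song - Some Artist</title>
      <pubDate>Sat, 29 Dec 2012 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(SAMPLE_RSS)
# Collect (title, pubDate) pairs from every <item> element.
entries = [(item.findtext("title"), item.findtext("pubDate"))
           for item in root.iter("item")]
print(entries[0][0])  # 1: Some Song - Some Artist
```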
Hope that helps.