I've implemented my spider as a script, just like the main example:
import scrapy

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://blog.scrapinghub.com']

    def parse(self, response):
        for title in response.css('h2.entry-title'):
            yield {'title': title.css('a ::text').extract_first()}

        next_page = response.css('div.prev-post > a ::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
I run it with:

scrapy runspider myspider.py

How can I change the user agent if I didn't set things up or create the project with startproject? As described here:
Answer 0 (score: 1)

Add USER_AGENT to the settings.py file:

USER_AGENT = "custom_user_agent"
You can also change USER_AGENT from the command line:

scrapy runspider myspider.py -s USER_AGENT="custom_user_agent"
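Besides settings.py and the command line, Scrapy also supports a per-spider custom_settings class attribute, which overrides project settings for that spider only. A minimal sketch, assuming Scrapy is installed:

```python
import scrapy

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://blog.scrapinghub.com']

    # Per-spider override: takes precedence over settings.py,
    # but is itself overridden by -s on the command line.
    custom_settings = {
        'USER_AGENT': 'custom_user_agent',
    }
```

This works with runspider too, since it lives in the spider file rather than in a project.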
Answer 1 (score: 1)

You can manually add headers to the request in order to specify a custom User-Agent.

In your spider file, when you make the request (note that start_urls is a list, so each URL is requested individually):

for url in self.start_urls:
    yield scrapy.Request(url, callback=self.parse, headers={"User-Agent": "Your Custom User Agent"})
Your spider would then look like this:
class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://blog.scrapinghub.com']

    def start_requests(self):
        # start_urls is a list, so yield one Request per URL.
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse, headers={"User-Agent": "Your Custom User Agent"})

    def parse(self, response):
        for title in response.css('h2.entry-title'):
            yield {'title': title.css('a ::text').extract_first()}

        next_page = response.css('div.prev-post > a ::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse, headers={"User-Agent": "Your Custom User Agent"})
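Headers passed to a request take precedence over the project-wide default. Conceptually the override behaves like a dict merge where the per-request value wins; a stdlib-only illustration (the header values here are hypothetical, not Scrapy's actual defaults):

```python
# Stdlib-only illustration of header precedence: a per-request
# "User-Agent" replaces the default one, other defaults survive.
default_headers = {
    "User-Agent": "Scrapy/2.x (+https://scrapy.org)",  # hypothetical default
    "Accept": "text/html",
}
request_headers = {"User-Agent": "Your Custom User Agent"}

# Later keys win, so the per-request value overrides the default.
effective = {**default_headers, **request_headers}
print(effective["User-Agent"])  # prints: Your Custom User Agent
```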