Hi everyone, I'm new to Scrapy and I've run into a very strange problem. In short, it looks like scrapy.Request() keeps execution from ever entering my function. Here is my code:
# -*- coding: utf-8 -*-
import scrapy
from tutor_job_spy.items import TutorJobSpyItem


class Spyspider(scrapy.Spider):
    name = 'spy'
    # for privacy reasons I delete the url information :)
    allowed_domains = ['']
    url_0 = ''
    start_urls = [url_0, ]
    base_url = ''
    list_previous = []
    list_present = []
    def parse(self, response):
        numbers = response.xpath('//tr[@bgcolor="#d7ecff" or @bgcolor="#eef7ff"]/td[@width="8%" and @height="40"]/span/text()').extract()
        self.list_previous = numbers
        self.list_present = numbers
        yield scrapy.Request(self.url_0, self.keep_spying)
    def keep_spying(self, response):
        numbers = response.xpath('//tr[@bgcolor="#d7ecff" or @bgcolor="#eef7ff"]/td[@width="8%" and @height="40"]/span/text()').extract()
        self.list_previous = self.list_present
        self.list_present = numbers
        # judge if anything new
        if (self.list_present != self.list_previous):
            self.goto_new_demand(response)
        #time.sleep(60) #from cache
        yield scrapy.Request(self.url_0, self.keep_spying, dont_filter=True)
    def goto_new_demand(self, response):
        new_demand_links = []
        detail_links = response.xpath('//div[@class="ShowDetail"]/a/@href').extract()
        for i in range(len(self.list_present)):
            if (self.list_present[i] not in self.list_previous):
                new_demand_links.append(self.base_url + detail_links[i])
        if (new_demand_links != []):
            for new_demand_link in new_demand_links:
                yield scrapy.Request(new_demand_link, self.get_new_demand)
    def get_new_demand(self, response):
        new_demand = TutorJobSpyItem()
        new_demand['url'] = response.url
        requirments = response.xpath('//tr[@bgcolor="#eef7ff"]/td[@colspan="2"]/div/text()').extract()[0]
        new_demand['gender'] = self.get_gender(requirments)
        new_demand['region'] = response.xpath('//tr[@bgcolor="#d7ecff"]/td[@align="left"]/text()').extract()[5]
        new_demand['grade'] = response.xpath('//tr[@bgcolor="#d7ecff"]/td[@align="left"]/text()').extract()[7]
        new_demand['subject'] = response.xpath('//tr[@bgcolor="#eef7ff"]/td[@align="left"]/text()').extract()[2]
        return new_demand
    def get_gender(self, requirments):
        if ('女老师' in requirments):
            return 'F'
        elif ('男老师' in requirments):
            return 'M'
        else:
            return 'Both okay'
The problem is that when I debug, execution never enters goto_new_demand:
if (self.list_present != self.list_previous):
    self.goto_new_demand(response)
Every time I run or debug the script, it skips over goto_new_demand, but after I comment out yield scrapy.Request(new_demand_link, self.get_new_demand) inside goto_new_demand, I can step into it. I have tried this many times and found that I can only enter goto_new_demand when it does not contain yield scrapy.Request(new_demand_link, self.get_new_demand).
Why does this happen?
Thanks in advance to anyone who can offer a suggestion :)
PS:
Scrapy: 1.5.0
lxml: 4.1.1.0
libxml2: 2.9.5
cssselect: 1.0.3
parsel: 1.3.1
w3lib: 1.18.0
Twisted: 17.9.0
Python: 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)]
pyOpenSSL: 17.5.0 (OpenSSL 1.1.0g, 2 Nov 2017)
cryptography: 2.1.4
Platform: Windows-7-6.1.7601-SP1
Problem solved!
I changed goto_new_demand from a generator into a plain function, so the problem was entirely due to my insufficient understanding of yield and generators.
Here is the modified code:
if (self.list_present != self.list_previous):
    # yield self.goto_new_demand(response)
    new_demand_links = self.goto_new_demand(response)
    if (new_demand_links != []):
        for new_demand_link in new_demand_links:
            yield scrapy.Request(new_demand_link, self.get_new_demand)

def goto_new_demand(self, response):
    new_demand_links = []
    detail_links = response.xpath('//div[@class="ShowDetail"]/a/@href').extract()
    for i in range(len(self.list_present)):
        if (self.list_present[i] not in self.list_previous):
            new_demand_links.append(self.base_url + detail_links[i])
    return new_demand_links
The reason is explained in Barak's answer below.
Answer 0 (score: 0)
The proper way to debug a Scrapy spider is described in the documentation. A particularly useful technique is to use the Scrapy shell to inspect responses.
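For example, one such technique (a minimal sketch, not taken from the original post) is to call scrapy.shell.inspect_response() inside the callback you are debugging; it pauses the crawl and opens an interactive shell with that exact response loaded, so XPath expressions can be tested against the real page before trusting them in the spider:

from scrapy.shell import inspect_response

def keep_spying(self, response):
    # Open an interactive shell where `response` is this very response;
    # try response.xpath(...) there, then exit the shell to resume the crawl.
    inspect_response(response, self)
    ...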
Answer 1 (score: 0)
I think you may need to change this statement
if (self.list_present != self.list_previous):
    self.goto_new_demand(response)
to:
if (self.list_present != self.list_previous):
    yield self.goto_new_demand(response)
because self.goto_new_demand() is a generator function (its body contains a yield statement), so simply calling self.goto_new_demand(response) does not run anything; it only creates a generator object.
A simple generator example may make this clearer:
def a():
    print("hello")

# calling a() will print out hello
a()
But with a generator function, simply calling it just returns a generator object:
def a():
    yield
    print("hello")

# calling a() will not print hello; instead it returns a generator object
a()
So in Scrapy you should use yield self.goto_new_demand(response) to make goto_new_demand(response) actually run.
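A related sketch (an alternative not shown in the answers above): on Python 3.3+ you can also delegate to the generator with yield from inside keep_spying, which both executes its body and forwards every Request it yields to the Scrapy engine:

# inside keep_spying(); `yield from` iterates the generator, so its body runs
# and each scrapy.Request it yields is passed on to the engine
if (self.list_present != self.list_previous):
    yield from self.goto_new_demand(response)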