My code runs, but I don't know exactly how it works, and I need to extend it. After logging in, I want to loop over the same URL with POST requests.
import json

import lxml.etree
import lxml.html
import scrapy
from scrapy.exceptions import DontCloseSpider


class myspider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['login_url']   # placeholder for the real login URL
    target_url = 'target_url'    # placeholder for the real target URL

    # Submit the login form.
    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'user': 'x', 'pass': 'y'},
            callback=self.after_login
        )

    # Runs on the response to the submitted login form.
    def after_login(self, response):
        if "authentication failed" in response.text:
            self.logger.error("Login failed")
            return
        yum = response.xpath('//span[@id="userName"]/text()').get()

    # Connect a handler to the spider_idle signal.
    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_idle,
                                signal=scrapy.signals.spider_idle)
        return spider

    # Second request: fires once the scheduler has run dry.
    def spider_idle(self):
        self.crawler.signals.disconnect(self.spider_idle,
                                        signal=scrapy.signals.spider_idle)
        # param1/param2 are placeholders for the real payload values.
        mydata = {'param1': param1, 'param2': param2, 'param3': 'param3'}
        self.crawler.engine.crawl(scrapy.Request(
            self.target_url,
            method='POST',
            body=json.dumps(mydata),
            headers={'Content-Type': 'application/json'},
            callback=self.parse_page2
        ), self)
        raise DontCloseSpider

    # Parse the response of the second (POST) request.
    def parse_page2(self, response):
        self.logger.info("Visited %s", response.url)
        root = lxml.html.fromstring(response.body)
        lxml.etree.strip_elements(root, lxml.etree.Comment, "script", "head")
        data = lxml.html.tostring(root, method="text", encoding=str)
        texts = json.loads(data)
        res = {}
        # do something with result
        return res
This code works: I log in, and the logged-in session is then used to crawl the next URL. The login succeeds, after_login extracts the user name from the page, then spider_idle fires once the scheduler is empty, schedules the POST to target_url, and parse_page2 finally parses the result.
But I don't know: is this the best way to scrape after logging in? Is there more mature code for this purpose, or a good technical explanation of it (mine is too simple)? Finally, how can this code go further and scrape target_url repeatedly with different POST requests? I tried adding that to the idle method, but it still fails.
One failed attempt:
multi_param = self.allparam.split("-")
for param in multi_param:
    self.logger.info("Visited %s", target_url)
    mydata = {'param1': param1, 'param2': param2, 'param3': 'param3'}
    self.crawler.engine.crawl(scrapy.Request(
        url=target_url,
        method='POST',
        body=json.dumps(mydata),
        dont_filter=True,
        callback=self.parse_page2
    ), self)
Another failed attempt:
I removed the from_crawler classmethod and, right after logging in, scheduled another scrape, but that failed too, because it didn't carry the login session. :(
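(For comparison, here is a minimal sketch, not my original code, of the pattern that normally does keep the session: yield the follow-up POST requests directly from after_login. Scrapy's cookies middleware attaches the cookies captured during login to every request yielded from a callback. allparam and the payload key 'param1' are assumed names here.)

import json
import scrapy

class login_then_post(scrapy.Spider):
    name = 'login_then_post'
    start_urls = ['login_url']   # placeholder, as above
    target_url = 'target_url'    # placeholder, as above

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'user': 'x', 'pass': 'y'},
            callback=self.after_login
        )

    def after_login(self, response):
        if "authentication failed" in response.text:
            self.logger.error("Login failed")
            return
        # 'allparam' is an assumed attribute, e.g. passed with -a allparam=a-b-c
        for param in self.allparam.split("-"):
            yield scrapy.Request(
                self.target_url,
                method='POST',
                body=json.dumps({'param1': param}),
                headers={'Content-Type': 'application/json'},
                dont_filter=True,   # same URL, different body each time
                callback=self.parse_page2
            )

    def parse_page2(self, response):
        self.logger.info("Visited %s", response.url)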
Thanks for your help,
Answer 0 (score: 0)
I think you need to add the code that sends the second batch of requests inside spider_idle, something like:
def spider_idle(self):
    # Disconnect first, as in your own code, so the handler does not
    # re-fire and schedule the same requests again once these complete.
    self.crawler.signals.disconnect(self.spider_idle,
                                    signal=scrapy.signals.spider_idle)
    allparam = self.listparam.split("-")
    for param in allparam:
        mydata = {'param1': param}
        self.crawler.engine.crawl(scrapy.Request(
            url=self.target_url,
            method='POST',
            body=json.dumps(mydata),
            dont_filter=True,   # same URL scheduled repeatedly
            headers={'Content-Type': 'application/json'},
            callback=self.parse_page2
        ), self)
This code will iterate your POST requests; hope it helps.
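(A usage sketch, assuming listparam is supplied as a spider argument; Scrapy's -a option sets attributes on the spider, and the values below are placeholders:)

scrapy crawl myspider -a listparam=valueA-valueB-valueC

Each value then becomes one POST body, and dont_filter=True is what lets the same URL be scheduled more than once.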