Answer 0: (score: 0)
You are requesting the author URL twice: the first request scrapes the list of authors, and the second scrapes the details of the current author. Dumping the Scrapy stats (printed at the end of the log) shows a 'dupefilter/filtered' count, which means Scrapy filtered out the duplicate URLs. If you remove the 'parse_content' function and write the code as follows, the scraping will work:
def parse(self, response):
    # Author detail page: the 'tags' meta key is only present when this page
    # was reached by following an author link below.
    if 'tags' in response.meta:
        author = {}
        author['url'] = response.url
        name = response.css(".people-name::text").extract()
        join_date = response.css(".joined-time::text").extract()
        following_no = response.css(".following-number::text").extract()
        followed_no = response.css(".followed-number::text").extract_first()
        first_onsale = response.css(".first-onsale-date::text").extract()
        total_no = response.css(".total-number::text").extract()
        comments = total_no[0]
        onsale = total_no[1]
        columns = total_no[2]
        ebooks = total_no[3]
        essays = total_no[4]

        author['tags'] = response.meta['tags']
        author['name'] = name
        author['join_date'] = join_date
        author['following_no'] = following_no
        author['followed_no'] = followed_no
        author['first_onsale'] = first_onsale
        author['comments'] = comments
        author['onsale'] = onsale
        author['columns'] = columns
        author['ebooks'] = ebooks
        author['essays'] = essays
        yield author

    # Author list: follow each author link back into this same callback,
    # passing the author's tags along in the request meta.
    authors = response.css('section.following-agents ul.bd li.item')
    for author in authors:
        tags = author.css('div.author-tags::text').extract_first()
        url = author.css('a.lnk-avatar::attr(href)').extract_first()
        yield response.follow(url=url, callback=self.parse, meta={'tags': tags})
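If you would rather keep a separate detail callback instead of merging everything into parse, another option is to let the duplicate request through explicitly. This is only a minimal sketch (not part of the original answer) and assumes your detail callback is still named parse_content:

def parse(self, response):
    authors = response.css('section.following-agents ul.bd li.item')
    for author in authors:
        tags = author.css('div.author-tags::text').extract_first()
        url = author.css('a.lnk-avatar::attr(href)').extract_first()
        # dont_filter=True tells Scrapy's dupefilter to let this request
        # through even though the same URL was already visited.
        yield response.follow(url=url, callback=self.parse_content,
                              meta={'tags': tags}, dont_filter=True)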
Be careful: I removed some lines during testing, so check the code against your spider. You will also need random user agents in the HTTP headers, request delays, or proxies. I ran the crawl again and am now getting a '403 Forbidden' status code.
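For the '403 Forbidden' responses, the usual first steps are to slow the crawl down and send a browser-like User-Agent. Below is a minimal settings.py sketch with illustrative values (not taken from the original answer); rotating user agents or proxies would additionally require a downloader middleware or a third-party extension.

# settings.py -- basic anti-blocking settings (example values only)
DOWNLOAD_DELAY = 2               # wait roughly 2 seconds between requests
RANDOMIZE_DOWNLOAD_DELAY = True  # jitter the delay between 0.5x and 1.5x
AUTOTHROTTLE_ENABLED = True      # adapt the delay to server response times
USER_AGENT = (
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
    'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0 Safari/537.36'
)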