I'm trying to scrape a web page with Scrapy, but the items never get parsed in the callback. Any help would be appreciated. ...Here is the code:
# -*- coding: utf-8 -*-
import scrapy
from ..items import EscrotsItem


class Escorts(scrapy.Spider):
    name = 'escorts'
    allowed_domains = ['www.escortsandbabes.com.au']
    start_urls = ['https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/']

    def parse_links(self, response):
        for i in response.css('.btn.btn-default.btn-block::attr(href)').extract()[2:]:
            yield scrapy.Request(url=response.urljoin(i), callback=self.parse)
        NextPage = response.css('.page.next-page::attr(href)').extract_first()
        if NextPage:
            yield scrapy.Request(
                url=response.urljoin(NextPage),
                callback=self.parse_links)

    def parse(self, response):
        for x in response.xpath('//div[@class="advertiser-profile"]'):
            item = EscrotsItem()
            item['Name'] = x.css('.advertiser-names--display-name::text').extract_first()
            item['Username'] = x.css('.advertiser-names--username::text').extract_first()
            item['Phone'] = x.css('.contact-number::text').extract_first()
            yield item
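For context, the spider imports EscrotsItem from the project's items.py, which is not shown in the question. A minimal definition consistent with the fields used above would look something like this (an assumption, not the asker's actual file):

import scrapy


class EscrotsItem(scrapy.Item):
    # Fields referenced by the spider; names must match the keys used in parse()
    Name = scrapy.Field()
    Username = scrapy.Field()
    Phone = scrapy.Field()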
Answer (score: 1)
Your code requests the URL from start_urls, and the response is handed to the parse function. Since that listing page has no div.advertiser-profile elements, the spider effectively finishes without yielding anything, and your parse_links function is never called at all. Swap the function names:
import scrapy


class Escorts(scrapy.Spider):
    name = 'escorts'
    allowed_domains = ['escortsandbabes.com.au']
    start_urls = ['https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/']

    def parse(self, response):
        for i in response.css('.btn.btn-default.btn-block::attr(href)').extract()[2:]:
            yield scrapy.Request(response.urljoin(i), self.parse_links)
        next_page = response.css('.page.next-page::attr(href)').get()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page))

    def parse_links(self, response):
        for x in response.xpath('//div[@class="advertiser-profile"]'):
            item = {}
            item['Name'] = x.css('.advertiser-names--display-name::text').get()
            item['Username'] = x.css('.advertiser-names--username::text').get()
            item['Phone'] = x.css('.contact-number::text').get()
            yield item
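Note that the pagination request deliberately omits a callback: Scrapy routes responses to the spider's parse method by default, so each next listing page goes back through parse. To check that items now come out, you can run the spider from the project directory and dump the results to a file (the output file name below is just an example):

scrapy crawl escorts -o items.json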
A log from my scrapy shell session:
In [1]: fetch("https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/")
2019-03-29 15:22:56 [scrapy.core.engine] INFO: Spider opened
2019-03-29 15:23:00 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/> (referer: None, latency: 2.48 s)
In [2]: response.css('.page.next-page::attr(href)').get()
Out[2]: u'/Directory/ACT/Canberra/2600/Any/All/?p=2'
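The profile-page selectors can be sanity-checked in the same shell session by following one of the listing links; a rough sketch (the link index and what the selectors return depend on the live page):

In [3]: links = response.css('.btn.btn-default.btn-block::attr(href)').extract()[2:]
In [4]: fetch(response.urljoin(links[0]))
In [5]: response.css('.advertiser-names--display-name::text').get()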