Basically the problem is in following the links.
I want to start from page 1..2..3..4..5... up to 90 pages in total.
Each page has around 100 links.
Every page is in this format:
http://www.consumercomplaints.in/lastcompanieslist/page/1
http://www.consumercomplaints.in/lastcompanieslist/page/2
http://www.consumercomplaints.in/lastcompanieslist/page/3
http://www.consumercomplaints.in/lastcompanieslist/page/4
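In other words, all 90 listing pages share one URL template. Purely to illustrate the pattern (this snippet is not part of the spider), they could be written out as:

page_urls = ["http://www.consumercomplaints.in/lastcompanieslist/page/%d" % n
             for n in range(1, 91)]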
This is the regex matching rule:
Rule(LinkExtractor(allow='(http:\/\/www\.consumercomplaints\.in\/lastcompanieslist\/page\/\d+)'), follow=True, callback="parse_data")
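As a quick sanity check outside Scrapy, the pattern does match the page URLs above; this only exercises the regex itself with Python's re module (as I understand it, LinkExtractor applies allow to the absolute URLs it extracts):

import re

pattern = r'(http:\/\/www\.consumercomplaints\.in\/lastcompanieslist\/page\/\d+)'
# re.search finds the pattern in the absolute page URL
print bool(re.search(pattern, "http://www.consumercomplaints.in/lastcompanieslist/page/4"))  # True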
I want to go to each page and then create a Request
object to scrape all of the links inside each page.
Scrapy only crawls 179 links each time and then gives a finished
status.
What am I doing wrong? Here is the spider:
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
import urlparse

class consumercomplaints_spider(CrawlSpider):
    name = "test_complaints"
    allowed_domains = ["www.consumercomplaints.in"]
    protocol = 'http://'

    start_urls = [
        "http://www.consumercomplaints.in/lastcompanieslist/"
    ]

    # These are the rules for matching the domain links using a regular expression; only matched links are crawled
    rules = [
        Rule(LinkExtractor(allow='(http:\/\/www\.consumercomplaints\.in\/lastcompanieslist\/page\/\d+)'), follow=True, callback="parse_data")
    ]

    def parse_data(self, response):
        # Get all the links in the page using an XPath selector
        all_page_links = response.xpath('//td[@class="compl-text"]/a/@href').extract()
        # Convert each relative page link to an absolute link -> /abc.html -> www.domain.com/abc.html and then send a Request object
        for relative_link in all_page_links:
            print "relative link processed:" + relative_link
            absolute_link = urlparse.urljoin(self.protocol + self.allowed_domains[0], relative_link.strip())
            request = scrapy.Request(absolute_link,
                                     callback=self.parse_complaint_page)
            return request
        return {}

    def parse_complaint_page(self, response):
        print "SCRAPED " + response.url
        return {}
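This is the kind of behavior I'm trying to get: one Request per extracted link, with all of them scheduled by Scrapy. For reference, a minimal standalone sketch of that general pattern, where each Request is yielded from the callback; the spider and callback names here (ExampleSpider, parse_item) are only illustrative and not taken from my spider above:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["http://www.consumercomplaints.in/lastcompanieslist/"]

    def parse(self, response):
        # yield one Request per link so Scrapy schedules all of them
        for href in response.xpath('//td[@class="compl-text"]/a/@href').extract():
            # response.urljoin resolves a relative href against the page URL
            yield scrapy.Request(response.urljoin(href.strip()),
                                 callback=self.parse_item)

    def parse_item(self, response):
        yield {"url": response.url}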