I've been trying to recursively scrape some search results from a companies register. It mostly works, but I noticed a lot of search results are missing from my export. When I tried scraping just one page, I saw that the spider does reach the search results page, but then somehow tries to re-enter the page it is already on, and only performs a handful of requests. The first result comes through fine and is valid. I checked my CSS paths and they look correct. Can you see why this happens? Many thanks in advance.
Here's my error log:
```
2019-05-13 08:25:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.companiesintheuk.co.uk/ltd/aw> (referer: https://www.companiesintheuk.co.uk/Company/Find?q=a)
2019-05-13 08:25:38 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.companiesintheuk.co.uk/ltd/aw> (referer: https://www.companiesintheuk.co.uk/Company/Find?q=a)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/root/Desktop/zakaria/gov2/gov2/spiders/CYRecursive.py", line 41, in parse_details
    'postal_code': re.sub('\s+', ' ', ''.join(i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > a:nth-child(5) > span:nth-child(1)::text").extract_first())),
TypeError: can only join an iterable
```
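For what it's worth, I can reproduce that final `TypeError` outside Scrapy. My assumption is that `extract_first()` returns `None` when a CSS selector matches nothing, and joining `None` raises exactly this message (a minimal sketch, not my spider code):

```python
# Minimal sketch: joining None raises the same error as in my traceback.
value = None      # my assumption: what extract_first() returns when nothing matches
''.join(value)    # TypeError: can only join an iterable
```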
Here's my code:
```python
import scrapy
import re
from scrapy.linkextractors import LinkExtractor


class QuotesSpider(scrapy.Spider):
    name = 'CYRecursive'
    start_urls = [
        'https://www.companiesintheuk.co.uk/Company/Find?q=a']

    def parse(self, response):
        for company_url in response.xpath('//div[@class="search_result_title"]/a/@href').extract():
            yield scrapy.Request(
                url=response.urljoin(company_url),
                callback=self.parse_details,
            )

        # next_page_url = response.xpath(
        #     '//li/a[@class="pageNavNextLabel"]/@href').extract_first()
        # if next_page_url:
        #     yield scrapy.Request(
        #         url=response.urljoin(next_page_url),
        #         callback=self.parse,
        #     )

    def parse_details(self, response):
        # Loop through each search result block and yield its fields
        for i in response.css('div.col-md-6'):
            yield {
                'company_name': re.sub('\s+', ' ', ''.join(i.css('#content2 > strong:nth-child(2) > strong:nth-child(1) > div:nth-child(1)::text').get())),
                'address': re.sub('\s+', ' ', ''.join(i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(1)::text").extract_first())),
                'location': re.sub('\s+', ' ', ''.join(i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(3)::text").extract_first())),
                'postal_code': re.sub('\s+', ' ', ''.join(i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > a:nth-child(5) > span:nth-child(1)::text").extract_first())),
            }
```
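Since the traceback points at `''.join(... .extract_first())`, my working theory is that some of these `nth-child` selectors simply don't match on certain company pages, so `extract_first()` returns `None`. Below is an untested defensive sketch of `parse_details` (the `clean` helper is my own name; the selectors are unchanged, and it assumes the same imports as my spider) that defaults a missing match to an empty string via `extract_first(default='')`:

```python
def parse_details(self, response):
    # Sketch only: same selectors, but extract_first(default='') so a
    # non-matching selector yields '' instead of None (my assumption is
    # that None is what triggers "can only join an iterable").
    def clean(sel, query):
        return re.sub(r'\s+', ' ', sel.css(query).extract_first(default=''))

    for i in response.css('div.col-md-6'):
        yield {
            'company_name': clean(i, '#content2 > strong:nth-child(2) > strong:nth-child(1) > div:nth-child(1)::text'),
            'address': clean(i, '#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(1)::text'),
            'location': clean(i, '#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(3)::text'),
            'postal_code': clean(i, '#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > a:nth-child(5) > span:nth-child(1)::text'),
        }
```

That would presumably stop the crash, but I'd still like to understand why the spider re-requests the page it is already on and why so many results go missing from the export.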