I'm trying to scrape some data from a companies register. So far it scrapes every search result, but when I export the results there is an empty item after every scraped result, as if it were scraping the same page twice.
Here is an excerpt from the log:
2019-05-14 08:19:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.companiesintheuk.co.uk/ltd/a-c-1> (referer: https://www.companiesintheuk.co.uk/Company/Find?q=a)
2019-05-14 08:19:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.companiesintheuk.co.uk/ltd/a-c-1>
{'location': u'BEANCROFT ROAD', 'postal_code': None, 'company_name': u'A C PLC', 'address': u'BEANCROFT FARM'}
2019-05-14 08:19:21 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.companiesintheuk.co.uk/ltd/a-c-1>
{'location': None, 'postal_code': None, 'company_name': None, 'address': None}
And finally, my code:
import scrapy
import re
from scrapy.linkextractors import LinkExtractor


class QuotesSpider(scrapy.Spider):
    name = 'CYRecursive'
    start_urls = [
        'https://www.companiesintheuk.co.uk/Company/Find?q=a']

    def parse(self, response):
        for company_url in response.xpath('//div[@class="search_result_title"]/a/@href').extract():
            yield scrapy.Request(
                url=response.urljoin(company_url),
                callback=self.parse_details,
            )

    def parse_details(self, response):
        # Looping through the search result blocks and yielding each item
        for i in response.css('div.col-md-6'):
            yield {
                'company_name': i.css('#content2 > strong:nth-child(2) > strong:nth-child(1) > div:nth-child(1)::text').get(),
                'address': i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(1)::text").extract_first(),
                'location': i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(3)::text").extract_first(),
                'postal_code': i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > a:nth-child(5) > span:nth-child(1)::text").extract_first(),
            }
Thanks in advance!
Answer 0 (score: 2)
Each company page (e.g. https://www.companiesintheuk.co.uk/ltd/a-c-1) has two div.col-md-6 elements. The first contains the company details, while the second contains a map and no company data, which is why every other item comes out empty.
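If you want to confirm this before changing the spider, a quick check in the Scrapy shell (using the company page from your log) would look roughly like this:

# Run: scrapy shell "https://www.companiesintheuk.co.uk/ltd/a-c-1"
# Count the matching blocks; per the explanation above there should be two
len(response.css('div.col-md-6'))

# Check each block for the company-details container;
# if the explanation above is right, only the first prints True
for block in response.css('div.col-md-6'):
    print(bool(block.css('#content2 > strong:nth-child(2) > strong:nth-child(1)')))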
So you can modify your code as follows:
def parse_details(self, response):
    for i in response.css('div.col-md-6'):
        # Skip the block that only holds the map and has no company data
        if not i.css('#content2 > strong:nth-child(2) > strong:nth-child(1)'):
            continue
        yield {
            'company_name': i.css('#content2 > strong:nth-child(2) > strong:nth-child(1) > div:nth-child(1)::text').get(),
            'address': i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(1)::text").extract_first(),
            'location': i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(3)::text").extract_first(),
            'postal_code': i.css("#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > a:nth-child(5) > span:nth-child(1)::text").extract_first(),
        }
In other words, just skip the blocks that don't contain the data you need.
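Alternatively, since the field selectors are already anchored at the #content2 id, which should be unique on the page, you could drop the loop entirely and yield at most one item per company page. This is only a sketch of that idea, reusing the selectors from the question:

def parse_details(self, response):
    # The selectors already start at #content2, so run them against the whole
    # response instead of iterating over every div.col-md-6 block
    item = {
        'company_name': response.css('#content2 > strong:nth-child(2) > strong:nth-child(1) > div:nth-child(1)::text').get(),
        'address': response.css('#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(1)::text').get(),
        'location': response.css('#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > span:nth-child(3)::text').get(),
        'postal_code': response.css('#content2 > strong:nth-child(2) > address:nth-child(2) > div:nth-child(1) > a:nth-child(5) > span:nth-child(1)::text').get(),
    }
    # Only yield if the company name was actually found
    if item['company_name']:
        yield item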