As mentioned in the title, I use Scrapy to extract data from specific pages (one website / one domain). Let me describe what my code is supposed to do.
The scraper collects data from one of the best-known car-advertisement portals in Poland - OTOMOTO (https://otomoto.pl). It extracts the data from individual offer pages.
First, the user enters a car brand and then a car model. The program checks whether such a brand exists (the list of brands was collected beforehand using BeautifulSoup; the list of models for that brand is then extracted, also with BeautifulSoup). That part works fine.
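For context, that validation step looks roughly like this - a minimal sketch rather than my exact code; the URL and the "select option" CSS selector are placeholders, since the real page markup has to be inspected first:

import requests
from bs4 import BeautifulSoup

def get_brands():
    # fetch the listing root and collect brand names from the search form
    # (the selector below is a placeholder, not otomoto.pl's real markup)
    resp = requests.get("https://otomoto.pl/osobowe/")
    soup = BeautifulSoup(resp.text, "html.parser")
    return {opt.get_text(strip=True).lower() for opt in soup.select("select option")}

brand = input("Brand: ").strip().lower()
if brand not in get_brands():
    print("Unknown brand")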
In the final version of this code I will also collect some additional parameters/filters, such as a price range or a production-year range, but let's leave that for now - this is just about link creation.
If those conditions are met, a URL with the list of offers for the given brand and model is created, in the form https://otomoto.pl/osobowe/BRAND/MODEL (for example: https://otomoto.pl/osobowe/opel/corsa). This is the starting point for collecting the URLs of the individual offers listed on that page. I only collect the first page of results, because I don't want to put too much load on the server - which is also why there is no need to implement crawling rules in this case.
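The link creation itself is simple - roughly like this (variable names are illustrative):

# brand and model come from the validated user input
brand, model = "opel", "corsa"
listing_url = f"https://otomoto.pl/osobowe/{brand}/{model}"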
BeautifulSoup then finds the URLs that match the offer-page pattern https://otomoto.pl/oferta/BRAND-MODEL-blah-blah-blah and appends them to an array. This array is defined as a global variable.
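Roughly, that collection step looks like this - a sketch, assuming offer links contain /oferta/ in their href (matching the pattern above); offerUrls is the global array mentioned here:

import requests
from bs4 import BeautifulSoup

offerUrls = []  # global array of offer-page URLs

def collect_offer_urls(listing_url):
    resp = requests.get(listing_url)
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        # keep only links matching the offer-page pattern
        if "/oferta/" in a["href"] and a["href"] not in offerUrls:
            offerUrls.append(a["href"])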
At this point I ran into a problem. I want to get the data from each of these offer pages (mileage, color, engine power, and so on) - all of these fields are required to post an ad, so there is no need to check whether a given field is present on the page. The data should then be assigned to one specific object (one "car"), which is why I thought of Scrapy's Item class. (At the end I'd like to save everything to a JSON or CSV file, but let's leave that for now.)
In a nutshell: this spider simply collects data from the defined pages (according to the user's restrictions) and stores it in dynamically created arrays.
What I've done:
class otoMotoCarObjects(scrapy.Item):
    url = scrapy.Field()
    offerID = scrapy.Field()
    addDate = scrapy.Field()
    offerType = scrapy.Field()
    brand = scrapy.Field()
    model = scrapy.Field()
    price = scrapy.Field()
    productionYear = scrapy.Field()
    mileage = scrapy.Field()
    fuelType = scrapy.Field()
    power = scrapy.Field()
    cubicCapacity = scrapy.Field()
    gearbox = scrapy.Field()
    driveType = scrapy.Field()
    color = scrapy.Field()
    countryImport = scrapy.Field()
    location = scrapy.Field()
    state = scrapy.Field()
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor


class otoMotoCarScraper(CrawlSpider):
    name = 'car'
    allowed_domains = ['otomoto.pl']

    def __init__(self, urls=[], *args, **kwargs):
        super(otoMotoCarScraper, self).__init__(*args, **kwargs)
        # the array of offer URLs collected earlier is passed in here
        self.start_urls = urls

    def start_requests(self):
        # request every collected offer page and parse it with parse_items
        for url in self.start_urls:
            yield Request(url, callback=self.parse_items)
        """
        return [scrapy.Request(url=url, callback=self.parse)
                for url in self.start_urls]
        """

    def parse_items(self, response):
        # one Item per offer page; the absolute XPaths were copied from
        # the browser's developer tools
        car = otoMotoCarObjects()
        car['url'] = response.url
        print(car['url'])
        car['offerID'] = response.xpath('//div[@id="ad-id"]//text()').extract_first()
        print(car['offerID'])
        car['addDate'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[2]/div[1]/div[2]/div[4]/span[3]/text()').extract()
        car['offerType'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[1]/div/a/text()').extract()
        car['brand'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[3]/div/a/text()').extract()
        car['model'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[4]/div/a/text()').extract()
        car['price'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[2]/div[1]/div[1]/div[2]/div/span[1]/text()').extract()
        car['productionYear'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[5]/div/text()').extract()
        car['mileage'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[6]/div/text()').extract()
        car['fuelType'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[8]/div/a/text()').extract()
        car['power'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[9]/div/text()').extract()
        car['cubicCapacity'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[7]/div/text()').extract()
        car['gearbox'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[10]/div/a/text()').extract()
        car['driveType'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[11]/div/a/text()').extract()
        car['color'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[2]/li[3]/div/a/text()').extract()
        car['countryImport'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[2]/li[8]/div/a/text()').extract()
        car['location'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[2]/div[1]/div[2]/div[4]/span[3]/text()').extract()
        car['state'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[2]/li[10]/div/a/text()').extract()
        yield car


process = CrawlerProcess()
It's worth mentioning that the commented-out code (the return scrapy.Request ... version) doesn't work either, but I left it in for your reference. (Maybe it's fine, just invoked improperly?) I used it interchangeably with yield.
I launch this scraper from my main function with the following code:

scr.otoMotoCarScraper.process.crawl(scr.otoMotoCarScraper, urls=fun.offerUrls)

(where scr is the imported module that contains this scraper, and fun is the imported module where my functions are defined - and therefore where the offerUrls array is stored).
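For comparison, a minimal self-contained launcher using Scrapy's documented CrawlerProcess API (everything in one script, without the scr/fun modules) would look like this:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess()
# offerUrls is the list of offer-page URLs collected earlier
process.crawl(otoMotoCarScraper, urls=offerUrls)
process.start()  # blocks until the crawl is finished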
The Scrapy log looks like this:
2021-02-13 14:39:29 [scrapy.crawler] INFO: Overridden settings:
{}
2021-02-13 14:39:29 [scrapy.extensions.telnet] INFO: Telnet Password: 85e67e930e79de1f
2021-02-13 14:39:29 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2021-02-13 14:39:29 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-02-13 14:39:29 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-02-13 14:39:29 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2021-02-13 14:39:29 [scrapy.core.engine] INFO: Spider opened
2021-02-13 14:39:29 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-02-13 14:39:29 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
As you can see, unfortunately I crawled/scraped 0 pages, as this line shows:

2021-02-13 14:39:29 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

I tried printing offerUrls inside the fun module - it is displayed correctly as an array, so the part that passes the parameter to the scraper seems fine.
I also tried changing how self.start_urls is assigned in __init__, and converting yield to return, but nothing happens either.
My thoughts: maybe start_urls is not being created properly?
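One way to verify that guess would be to log the received URLs inside the spider itself - a debugging sketch, not part of the code above:

    def __init__(self, urls=[], *args, **kwargs):
        super(otoMotoCarScraper, self).__init__(*args, **kwargs)
        self.start_urls = urls
        # temporary debug output: does the spider actually receive the URLs?
        self.logger.info("received %d start URLs: %r", len(self.start_urls), self.start_urls)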