I'm currently writing a job-vacancy scraper with Scrapy to parse about 3M vacancy listings. The spider works and successfully scrapes items and stores them in PostgreSQL, but it runs very slowly: in one hour I only stored 12,000 vacancies, so I'm still really far from 3M. Eventually I will need to scrape and refresh the data once a day, and at the current rate it would take more than a day just to parse everything.

I'm new to data scraping, so I may be making some basic mistake, and I'd be very grateful if someone could help me.
The code of my spider:
import scrapy
import urllib.request
from lxml import html
from ..items import JobItem


class AdzunaSpider(scrapy.Spider):
    name = "adzuna"
    start_urls = [
        'https://www.adzuna.ru/search?loc=136073&pp=10'
    ]

    def parse(self, response):
        job_items = JobItem()
        items = response.xpath("//div[@class='sr']/div[@class='a']")

        # Fetch the ad's redirect page and strip the tracking parameters
        # from the final URL.
        def get_redirect(url):
            response = urllib.request.urlopen(url)
            response_code = response.read()
            result = str(response_code, 'utf-8')
            root = html.fromstring(result)
            final_url = root.xpath('//p/a/@href')[0]
            final_final_url = final_url.split('?utm', 1)[0]
            return final_final_url

        for item in items:
            id = None
            data_aid = item.xpath(".//@data-aid").get()
            redirect = item.xpath(".//h2/a/@href").get()
            url = get_redirect(redirect)
            url_header = item.xpath(".//h2/a/strong/text()").get()
            if item.xpath(".//p[@class='as']/@data-company-name").get() is None:
                company = item.xpath(".//p[@class='as']/text()").get()
            else:
                company = item.xpath(".//p[@class='as']/@data-company-name").get()
            loc = item.xpath(".//p/span[@class='loc']/text()").get()
            text = item.xpath(".//p[@class='at']/span[@class='at_tr']/text()").get()
            salary = item.xpath(".//p[@class='at']/span[@class='at_sl']/text()").get()

            job_items['id'] = id
            job_items['data_aid'] = data_aid
            job_items['url'] = url
            job_items['url_header'] = url_header
            job_items['company'] = company
            job_items['loc'] = loc
            job_items['text'] = text
            job_items['salary'] = salary
            yield job_items

        # Follow pagination.
        next_page = response.css("table.pg td:last-child ::attr('href')").get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
Answer 0 (score: 1):

Use Request with meta instead of urllib. The synchronous urllib.request.urlopen call inside get_redirect stalls the whole crawl for every single listing; yield a scrapy.Request for the redirect URL instead, passing the partially-filled item along in meta so a second callback can complete it once the redirect page arrives.
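A minimal sketch of what that rewrite could look like (the parse_redirect callback and the 'job' meta key are names I've made up for illustration; the XPaths are the ones from the original spider):

import scrapy
from ..items import JobItem


class AdzunaSpider(scrapy.Spider):
    name = "adzuna"
    start_urls = ['https://www.adzuna.ru/search?loc=136073&pp=10']

    def parse(self, response):
        for item in response.xpath("//div[@class='sr']/div[@class='a']"):
            job = JobItem()  # fresh item per listing, not one shared instance
            job['data_aid'] = item.xpath(".//@data-aid").get()
            job['url_header'] = item.xpath(".//h2/a/strong/text()").get()
            # ... fill company, loc, text, salary the same way as before ...
            redirect = item.xpath(".//h2/a/@href").get()
            # Hand the redirect page to Scrapy's downloader instead of
            # blocking on urllib; the half-built item travels in meta.
            yield response.follow(redirect, callback=self.parse_redirect,
                                  meta={'job': job})

        next_page = response.css("table.pg td:last-child ::attr('href')").get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

    def parse_redirect(self, response):
        job = response.meta['job']
        final_url = response.xpath('//p/a/@href').get()
        job['url'] = final_url.split('?utm', 1)[0] if final_url else None
        yield job

This way the redirect pages are fetched concurrently through Scrapy's own downloader (up to CONCURRENT_REQUESTS at a time) instead of one blocking urlopen per listing; creating a fresh JobItem per listing also avoids every yielded item pointing at the same object.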
Also check these settings:

CONCURRENT_ITEMS=100 (setting it to a higher value will degrade performance)
AUTOTHROTTLE_ENABLED=False
TELNETCONSOLE_ENABLED=False
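For reference, a sketch of how these could go into the project's settings.py (values taken from the suggestions above; whether they actually help depends on the target site and your hardware, so treat them as a starting point):

# settings.py -- sketch using the values suggested above
CONCURRENT_ITEMS = 100         # max items processed in parallel per response in the pipelines
AUTOTHROTTLE_ENABLED = False   # don't let the AutoThrottle extension slow requests down
TELNETCONSOLE_ENABLED = False  # disable the telnet console extension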