I am using Scrapy and Splash to scrape this link: job search
but I can't extract any data.
My code:
import scrapy
from scrapy_splash import SplashRequest


class ManuPySpider(scrapy.Spider):
    name = 'manulife'

    def start_requests(self):
        yield SplashRequest(
            url='https://manulife.taleo.net/careersection/external_global/jobsearch.ftl?lang=en&location=1038',
            callback=self.parse,
        )

    def parse(self, response):
        yield {
            'demo': response.css('div.absolute > span > a::text').extract()
        }
settings.py:
BOT_NAME = 'manulife'
SPIDER_MODULES = ['manulife.spiders']
NEWSPIDER_MODULE = 'manulife.spiders'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPLASH_URL = 'http://192.168.99.100:8050'
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
I checked that my Splash instance is up and running. What could be the problem?
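Roughly how I verified Splash is reachable (a quick sketch; it assumes Splash's /_ping health endpoint and the SPLASH_URL from settings.py):

# Quick reachability check for the Splash instance configured in settings.py.
# Assumption: Splash exposes a /_ping health endpoint at the configured host/port.
import requests

resp = requests.get('http://192.168.99.100:8050/_ping', timeout=5)
print(resp.status_code, resp.text)  # expect 200 and a short "ok" status payload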
Thanks
Answer 0: (score: 2)
When I tried to render the page through the Splash console (on port 8050) with the default settings, the result did not contain the desired data (i.e. the search results table was empty). However, once I increased the wait parameter, it worked. So try increasing that parameter:
yield SplashRequest(
    url='https://manulife.taleo.net/careersection/external_global/jobsearch.ftl?lang=en&location=1038',
    callback=self.parse,
    args={'wait': 5},
)
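You can also reproduce that console check from a script against Splash's render.html HTTP endpoint and apply the same CSS selector to the rendered HTML. A rough sketch, with the host/port taken from the SPLASH_URL in your settings.py:

# Sketch: fetch the rendered HTML straight from Splash's render.html endpoint
# to confirm that a larger "wait" makes the results table appear.
import requests
from parsel import Selector  # parsel ships as a Scrapy dependency

params = {
    'url': 'https://manulife.taleo.net/careersection/external_global/jobsearch.ftl?lang=en&location=1038',
    'wait': 5,      # seconds to wait after page load, same knob as args={'wait': 5}
    'timeout': 30,  # overall render timeout on the Splash side
}
html = requests.get('http://192.168.99.100:8050/render.html', params=params, timeout=60).text
print(Selector(text=html).css('div.absolute > span > a::text').getall()[:5])

If the selector prints job titles here but the spider still yields nothing, try increasing wait a bit further or raising the Splash-side timeout.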