I am trying to scrape this page, which (according to the Chrome inspector) includes the following HTML:
<p class="title">
Orange Paired
</p>
Here is my spider:
import scrapy
from scrapy_splash import SplashRequest


class MySpider(scrapy.Spider):
    name = "splash"
    allowed_domains = ["phillips.com"]
    start_urls = ["https://www.phillips.com/detail/BRIDGET-RILEY/UK010417/19"]

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(
                url,
                self.parse,
                endpoint='render.json',
                args={'har': 1, 'html': 1}
            )

    def parse(self, response):
        print("1. PARSED", response.real_url, response.url)
        print("2. ", response.css("title").extract())
        print("3. ", response.data["har"]["log"]["pages"])
        print("4. ", response.headers.get('Content-Type'))
        print("5. ", response.xpath('//p[@class="title"]/text()').extract())
Here is the output of scrapy runspider spiders/splash_spider.py:
2017-08-31 09:48:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
1. PARSED http://localhost:8050/render.json https://www.phillips.com/detail/BRIDGET-RILEY/UK010417/19
2. ['<title>PHILLIPS : Bridget Riley, Orange Paired</title>', '<title>Page 1</title>']
3. [{'title': 'PHILLIPS : Bridget Riley, Orange Paired', 'pageTimings': {'onContentLoad': 3832, '_onStarted': 1, '_onIframesRendered': 4667, 'onLoad': 4664, '_onPrepareStart': 4664}, 'id': '1', 'startedDateTime': '2017-08-31T07:48:18.986240Z'}]
4. b'text/html; charset=utf-8'
5. []
2017-08-31 09:48:23 [scrapy.core.engine] INFO: Closing spider (finished)
Why is the output for 5 empty?
Answer 0 (score: 1)
A good starting point in cases like this is the FAQ section of the Splash documentation. It turns out that here you need to disable private mode for Splash, either via Docker's --disable-private-mode startup option, or by setting splash.private_mode_enabled = false in your Lua script.

With private mode disabled, the page renders correctly.
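To illustrate the Lua-script option, here is a minimal sketch of a script that disables private mode before rendering, together with the args dict the spider would pass to Splash's execute endpoint. The script body and the wait time are assumptions for illustration, not taken from the original post:

```python
# Lua script run by Splash's 'execute' endpoint. It switches off
# private mode before navigating, so site content renders normally.
LUA_SCRIPT = """
function main(splash, args)
  splash.private_mode_enabled = false
  assert(splash:go(args.url))
  assert(splash:wait(1))
  return {html = splash:html()}
end
"""

# Arguments the spider would pass, e.g.:
# yield SplashRequest(url, self.parse, endpoint='execute', args=splash_args)
splash_args = {"lua_source": LUA_SCRIPT, "wait": 1}
```

With this approach, the spider's start_requests would yield SplashRequest with endpoint='execute' and args=splash_args instead of the render.json endpoint shown above; the per-request script avoids having to restart the Docker container with --disable-private-mode.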