Hello, and thanks in advance for any help or guidance. Here is my scraper:
import scrapy

class RakutenSpider(scrapy.Spider):
    name = "rak"
    allowed_domains = ["rakuten.com"]
    start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']

    def parse(self, response):
        for sel in response.xpath('//div[@class="page-bottom"]/div'):
            yield {
                'titles': sel.xpath("//div[@class='slider-prod-title']").extract_first(),
                'prices': sel.xpath("//span[@class='price-bold']").extract_first(),
                'images': sel.xpath("//div[@class='deal-img']/img").extract_first()
            }
Here is part of my settings.py:

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 5
# Obey robots.txt rules
ROBOTSTXT_OBEY = 'False'
And here is part of the log:
DEBUG: Crawled (403) <GET https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore> (referer: None)
I have tried almost every solution I found on S/O.

Log file: this is the new log, after installing the Firefox driver. Now I get the error: Error downloading <https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore>

2017-11-17 00:38:45 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-11-17 00:38:45 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'deals.spiders', 'CONCURRENT_REQUESTS': 1, 'SPIDER_MODULES': ['deals.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36', 'TELNETCONSOLE_ENABLED': False, 'DOWNLOAD_DELAY': 5}
2017-11-17 00:38:45 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named cryptography.x509'. Please install it from <https://pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.corestats.CoreStats']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
['deals.middlewares.JSMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Spider opened
2017-11-17 00:38:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-11-17 00:38:45 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/middleware.py", line 37, in process_request
    response = yield method(request=request, spider=spider)
  File "/home/seealldeals/tmp/scrapy/deals/deals/middlewares.py", line 63, in process_request
    driver = webdriver.Firefox()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/webdriver.py", line 144, in __init__
    self.service.start()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/service.py", line 74, in start
    stdout=self.log_file, stderr=self.log_file)
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 8] Exec format error
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Closing spider (finished)
2017-11-17 00:38:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
'downloader/exception_type_count/exceptions.OSError': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 11, 17, 5, 38, 45, 328366),
'log_count/ERROR': 1,
'log_count/INFO': 7,
'log_count/WARNING': 1,
'memusage/max': 33509376,
'memusage/startup': 33509376,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 11, 17, 5, 38, 45, 112667)}
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Spider closed (finished)
Answer 0 (score: 0):
There is a problem with your settings. It should be:

ROBOTSTXT_OBEY = False

The ROBOTSTXT_OBEY setting expects a boolean, but you set it with a string. You can check in the log that the crawl hits the robots.txt request first.
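For what it's worth, the truthiness trap is easy to see in a plain Python shell (this shows ordinary Python semantics; exactly how Scrapy coerces string settings can vary by version):

>>> bool('False')  # any non-empty string is truthy in Python
True
>>> bool(False)
False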
Answer 1 (score: 0):
rakuten.com is integrated with Google Analytics, which has an anti-spider feature. If you don't execute rakuten.com's analytics.js, you will be blocked from the site with a 403 error code. Use a JavaScript rendering technique.

Solution 1: (integrate scrapy with scrapy-splash)

Install scrapy-splash from pypi:

pip install scrapy-splash

Run a scrapy-splash container:

docker run -p 8050:8050 scrapinghub/splash

Add SPLASH_URL to settings.py:

SPLASH_URL = 'http://192.168.59.103:8050'

Append the splash downloader middlewares to settings.py:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

Change your spider code to:
import scrapy
from scrapy_splash import SplashRequest

class RakutenSpider(scrapy.Spider):
    name = "rak"
    allowed_domains = ["rakuten.com"]
    start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse, args={'wait': 0.5})

    def parse(self, response):
        for sel in response.xpath('//div[@class="page-bottom"]/div'):
            yield {
                'titles': sel.xpath("//div[@class='slider-prod-title']").extract_first(),
                'prices': sel.xpath("//span[@class='price-bold']").extract_first(),
                'images': sel.xpath("//div[@class='deal-img']/img").extract_first()
            }
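Before wiring Splash into the crawl, it can help to confirm the container renders the page at all. A minimal check against Splash's render.html HTTP endpoint (an illustrative sketch, assuming Splash is listening on localhost:8050 and Python 2.7, as in the traceback above):

# Fetch fully rendered HTML straight from the Splash container.
import urllib
import urllib2

params = urllib.urlencode({'url': 'https://www.rakuten.com/deals', 'wait': 0.5})
rendered = urllib2.urlopen('http://localhost:8050/render.html?' + params).read()
print len(rendered)  # substantial markup here suggests rendering works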
Solution 2: (integrate scrapy with selenium webdriver as a middleware)

Install Selenium from pypi:

pip install selenium
If you want to use the Firefox browser, install Firefox's Geckodriver to your PATH.

If you want to use the Chrome browser, install Chromedriver to your PATH.

If you want to use the PhantomJS browser, install phantomJS from Homebrew:

brew install phantomjs
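As an aside on the OSError: [Errno 8] Exec format error in the log above: errno 8 (ENOEXEC) usually means the driver binary on PATH is not executable on the current platform, e.g. a driver built for the wrong OS or architecture, or a still-compressed download. Two quick shell checks (assuming a Linux machine, as the traceback paths suggest):

which geckodriver            # is the driver on PATH at all?
file "$(which geckodriver)"  # should report a native executable, e.g. "ELF 64-bit"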
Add a JSMiddleware class to middlewares.py:

from scrapy.http import HtmlResponse
from selenium import webdriver

class JSMiddleware(object):
    def process_request(self, request, spider):
        driver = webdriver.Firefox()
        driver.get(request.url)
        body = driver.page_source
        return HtmlResponse(driver.current_url, body=body, encoding='utf-8', request=request)
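One caveat with JSMiddleware as written: it launches a new Firefox for every request and never calls driver.quit(), so browsers pile up on a long crawl. A variation that reuses a single driver and closes it when the spider finishes (a rough sketch using Scrapy's signals API):

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium import webdriver

class JSMiddleware(object):
    def __init__(self):
        # One shared browser for the whole crawl instead of one per request.
        self.driver = webdriver.Firefox()

    @classmethod
    def from_crawler(cls, crawler):
        middleware = cls()
        # Ask Scrapy to notify us when the spider closes so we can clean up.
        crawler.signals.connect(middleware.spider_closed, signal=signals.spider_closed)
        return middleware

    def process_request(self, request, spider):
        self.driver.get(request.url)
        return HtmlResponse(self.driver.current_url,
                            body=self.driver.page_source,
                            encoding='utf-8', request=request)

    def spider_closed(self, spider):
        self.driver.quit()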
Append the selenium downloader middleware to settings.py:
DOWNLOADER_MIDDLEWARES = {
    'youproject.middlewares.JSMiddleware': 200
}

Use your original spider's code:
import scrapy

class RakutenSpider(scrapy.Spider):
    name = "rak"
    allowed_domains = ["rakuten.com"]
    start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']

    def parse(self, response):
        for sel in response.xpath('//div[@class="page-bottom"]/div'):
            yield {
                'titles': sel.xpath("//div[@class='slider-prod-title']").extract_first(),
                'prices': sel.xpath("//span[@class='price-bold']").extract_first(),
                'images': sel.xpath("//div[@class='deal-img']/img").extract_first()
            }
If you want to use the Chrome browser in headless mode, check this tutorial.
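For reference, starting a headless Chrome driver with Selenium looks roughly like this (a sketch; the chrome_options parameter matches Selenium versions of this era, and the flags assume a Chrome build with headless support):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')     # no visible browser window
options.add_argument('--disable-gpu')  # commonly advised on some platforms
driver = webdriver.Chrome(chrome_options=options)
driver.get('https://www.rakuten.com/deals')
print driver.title  # Python 2 print, to match the rest of this thread
driver.quit()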