Scrapy bypassing start_urls

Posted: 2016-05-08 18:35:31

Tags: python python-2.7 scrapy

When I run this spider, Scrapy tells me the page being scraped is 'http://192.168.59.103:8050/render.html' (the Splash render endpoint defined in the "meta" argument in start_requests). That is, of course, the URL I want to pass my start_urls through, not the URL I actually want to scrape. I'm guessing the problem is in how I pass the URLs from start_urls into start_requests for parsing, but I can't pin down what it is.

The settings file is here as well.

Thanks in advance.

# -*- coding: utf-8 -*-
#scrapy crawl ia_checkr -o IA_OUT.csv -t csv

import scrapy
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider, Rule

from ia_check.items import Check_Item

from datetime import datetime
import ia_check

class CheckSpider(CrawlSpider):
    name = "ia_check"
    handle_httpstatus_list = [404,429,503]

    start_urls = [
    "http://www.amazon.com/Easy-Smart-Touch-Action-Games/dp/B00PRH5UJW",
    "http://www.amazon.com/mobile9-LAZYtube-MP4-Video-Downloader/dp/B00KFITEV8",
    "http://www.amazon.com/Forgress-Storyteller-Audiobook-Pro/dp/B00J0T73XO",
    "http://www.amazon.com/cgt-MP3-Downloader/dp/B00O65Z0RS",
    "http://www.amazon.com/DoomsDayBunny-Squelch-Free-Music-Downloader/dp/B00N3DDDRI"
    ]

    def start_requests(self):
        for url in self.start_urls:
            # Route each start URL through the Splash render.html endpoint
            # by passing the raw 'splash' meta dict.
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 1}
                }
            })

    def parse(self, response):
        ResultsDict = Check_Item()
        # Shorthand for running XPath queries against the response
        Select = Selector(response).xpath

        ResultsDict['title'] = Select(".//*[@class='h1']/text()|.//*[@id='btAsinTitle']/text()").extract()
        ResultsDict['application_url'] = response.url
        return ResultsDict
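
To illustrate the symptom, logging response.url inside parse shows the Splash endpoint rather than the product page. A minimal sketch of such a debug method, dropped into the spider above (the endpoint address is the one reported in this question):

    def parse(self, response):
        # With the raw 'splash' meta above, this logs the Splash endpoint,
        # e.g. http://192.168.59.103:8050/render.html, instead of the
        # Amazon product URL that was actually rendered.
        self.logger.debug("parse() got response.url=%s", response.url)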

1 Answer:

Answer 0 (score: 1):

I would suggest upgrading to the latest scrapy-splash plugin (formerly known as scrapyjs).

It has a handy scrapy_splash.SplashRequest utility that "fixes" the URL to point at the original remote host rather than the Splash endpoint.

Here's an example spider similar to yours:

import scrapy
from scrapy_splash import SplashRequest


class CheckSpider(scrapy.Spider):
    name = "scrapy-splash-example"
    handle_httpstatus_list = [404,429,503]

    start_urls = [
        "http://rads.stackoverflow.com/amzn/click/B00PRH5UJW",
        "http://rads.stackoverflow.com/amzn/click/B00KFITEV8",
        "http://rads.stackoverflow.com/amzn/click/B00J0T73XO",
        "http://rads.stackoverflow.com/amzn/click/B00O65Z0RS",
        "http://rads.stackoverflow.com/amzn/click/B00N3DDDRI"
    ]

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url,
                                callback=self.parse,
                                args={
                                    'wait': 1,
                                })

    def parse(self, response):
        self.logger.debug("Response: status=%d; url=%s" % (response.status, response.url))
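
If you need the item extraction from the original spider, the parse body carries over almost unchanged, since response.url now reports the remote host. A sketch, assuming the Check_Item definition and XPath from the question:

    # assumes: from ia_check.items import Check_Item (as in the question)
    def parse(self, response):
        item = Check_Item()
        # With SplashRequest, response.url is the original Amazon URL,
        # not the Splash endpoint, so storing it works as intended.
        item['title'] = response.xpath(
            ".//*[@class='h1']/text()|.//*[@id='btAsinTitle']/text()").extract()
        item['application_url'] = response.url
        return item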

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for splashtst project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'splashtst'

SPIDER_MODULES = ['splashtst.spiders']
NEWSPIDER_MODULE = 'splashtst.spiders'

# Splash stuff
SPLASH_URL = 'http://localhost:8050'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
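
As a side note, the SPLASH_URL above assumes a Splash instance listening on localhost:8050; if you run Splash with Docker (the usual setup), that corresponds to starting the container with something like:

$ docker run -p 8050:8050 scrapinghub/splash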

Check the console log you get when running it, paying particular attention to the URLs:

$ scrapy crawl scrapy-splash-example
2016-05-09 12:46:05 [scrapy] INFO: Scrapy 1.0.6 started (bot: splashtst)
2016-05-09 12:46:05 [scrapy] INFO: Optional features available: ssl, http11
2016-05-09 12:46:05 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'splashtst.spiders', 'SPIDER_MODULES': ['splashtst.spiders'], 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage', 'BOT_NAME': 'splashtst'}
2016-05-09 12:46:05 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-05-09 12:46:05 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, RedirectMiddleware, CookiesMiddleware, SplashCookiesMiddleware, SplashMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-05-09 12:46:05 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, SplashDeduplicateArgsMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-05-09 12:46:05 [scrapy] INFO: Enabled item pipelines: 
2016-05-09 12:46:05 [scrapy] INFO: Spider opened
2016-05-09 12:46:05 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-05-09 12:46:05 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-05-09 12:46:07 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00O65Z0RS via http://localhost:8050/render.html> (referer: None)
2016-05-09 12:46:07 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00O65Z0RS
2016-05-09 12:46:12 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00KFITEV8 via http://localhost:8050/render.html> (referer: None)
2016-05-09 12:46:12 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00KFITEV8
2016-05-09 12:46:12 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00PRH5UJW via http://localhost:8050/render.html> (referer: None)
2016-05-09 12:46:13 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00PRH5UJW
2016-05-09 12:46:16 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00N3DDDRI via http://localhost:8050/render.html> (referer: None)
2016-05-09 12:46:17 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00N3DDDRI
2016-05-09 12:46:18 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00J0T73XO via http://localhost:8050/render.html> (referer: None)
2016-05-09 12:46:18 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00J0T73XO
2016-05-09 12:46:18 [scrapy] INFO: Closing spider (finished)
2016-05-09 12:46:18 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 2690,
 'downloader/request_count': 5,
 'downloader/request_method_count/POST': 5,
 'downloader/response_bytes': 1794947,
 'downloader/response_count': 5,
 'downloader/response_status_count/200': 5,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 5, 9, 10, 46, 18, 631501),
 'log_count/DEBUG': 11,
 'log_count/INFO': 7,
 'response_received_count': 5,
 'scheduler/dequeued': 10,
 'scheduler/dequeued/memory': 10,
 'scheduler/enqueued': 10,
 'scheduler/enqueued/memory': 10,
 'splash/render.html/request_count': 5,
 'splash/render.html/response_count/200': 5,
 'start_time': datetime.datetime(2016, 5, 9, 10, 46, 5, 368693)}
2016-05-09 12:46:18 [scrapy] INFO: Spider closed (finished)