Passing arguments to allowed_domains in Scrapy

Asked: 2017-04-11 02:10:28

Tags: python scrapy scrapy-spider

I am creating a scraping tool that takes user input and crawls all of the links on a site. However, I need to limit the crawling and link extraction to links within that domain only, never external domains. I have it to where I need it as far as the crawler itself goes. My problem is that I can't seem to pass the Scrapy option entered on the command line through to my allowed_domains function. Below is the first script that runs:

# First Script
import os

def userInput():
    user_input = raw_input("Please enter URL. Please do not include http://: ")
    os.system("scrapy runspider -a user_input='http://" + user_input + "' crawler_prod.py")

userInput()

The script it runs is the crawler, which crawls the given domain. Here is the crawler code:

#Crawler
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import Request
from scrapy.http import Request

class InputSpider(CrawlSpider):
        name = "Input"
        #allowed_domains = ["example.com"]

        def allowed_domains(self):
            self.allowed_domains = user_input

        def start_requests(self):
            yield Request(url=self.user_input)

        rules = [
        Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
        ]

        def parse_item(self, response):
            x = HtmlXPathSelector(response)
            filename = "output.txt"
            open(filename, 'ab').write(response.url + "\n")

I have tried taking out the request sent by the terminal command, but that crashes the crawler. The way I have it now also crashes the crawler. I have also tried putting in allowed_domains=[user_input], and it reports back that it is not defined. I have been playing around with Scrapy's Request to make it work, with no luck. Is there a better way to restrict crawling outside of the given domain?

EDIT:

Here is my new code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spiders import BaseSpider
from scrapy import Request
from scrapy.http import Request
from scrapy.utils.httpobj import urlparse
#from run_first import *

class InputSpider(CrawlSpider):
        name = "Input"
        #allowed_domains = ["example.com"]

        #def allowed_domains(self):
            #self.allowed_domains = user_input

        #def start_requests(self):
            #yield Request(url=self.user_input)

        def __init__(self, *args, **kwargs):
            inputs = kwargs.get('urls', '').split(',') or []
            self.allowed_domains = [urlparse(d).netloc for d in inputs]
            # self.start_urls = [urlparse(c).netloc for c in inputs] # For start_urls

        rules = [
        Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
        ]

        def parse_item(self, response):
            x = HtmlXPathSelector(response)
            filename = "output.txt"
            open(filename, 'ab').write(response.url + "\n")

Here is the output log from the new code:

2017-04-18 18:18:01 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2017-04-18 18:18:01 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-04-18 18:18:01 [scrapy] INFO: Overridden settings: {'LOG_FILE': 'output.log'}
2017-04-18 18:18:43 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2017-04-18 18:18:43 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-04-18 18:18:43 [scrapy] INFO: Overridden settings: {'LOG_FILE': 'output.log'}
2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:1: ScrapyDeprecationWarning: Module `scrapy.contrib.spiders` is deprecated, use `scrapy.spiders` instead
  from scrapy.contrib.spiders import CrawlSpider, Rule

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:2: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors` is deprecated, use `scrapy.linkextractors` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:2: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors.sgml` is deprecated, use `scrapy.linkextractors.sgml` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:27: ScrapyDeprecationWarning: SgmlLinkExtractor is deprecated and will be removed in future releases. Please use scrapy.linkextractors.LinkExtractor
  Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')

2017-04-18 18:18:43 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2017-04-18 18:18:43 [boto] DEBUG: Retrieving credentials from metadata server.
2017-04-18 18:18:44 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/usr/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2017-04-18 18:18:44 [boto] ERROR: Unable to read instance data, giving up
2017-04-18 18:18:44 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2017-04-18 18:18:44 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2017-04-18 18:18:44 [scrapy] INFO: Enabled item pipelines: 
2017-04-18 18:18:44 [scrapy] INFO: Spider opened
2017-04-18 18:18:44 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-18 18:18:44 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-18 18:18:44 [scrapy] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/scrapy/core/engine.py", line 110, in _next_request
    request = next(slot.start_requests)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 70, in start_requests
    yield self.make_requests_from_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 73, in make_requests_from_url
    return Request(url, dont_filter=True)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 24, in __init__
    self._set_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 59, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: 
2017-04-18 18:18:44 [scrapy] INFO: Closing spider (finished)
2017-04-18 18:18:44 [scrapy] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 18, 22, 18, 44, 794155),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 3,
 'log_count/INFO': 7,
 'start_time': datetime.datetime(2017, 4, 18, 22, 18, 44, 790331)}
2017-04-18 18:18:44 [scrapy] INFO: Spider closed (finished)

EDIT:

By looking at the answer and re-reading the documentation, I was able to figure out my problem. Here is what I added to the crawler script to make it work:

def __init__(self, url=None, *args, **kwargs):
    super(InputSpider, self).__init__(*args, **kwargs)
    self.allowed_domains = [url]
    self.start_urls = ["http://" + url]

1 Answer:

Answer (score: 2):

You are missing a few things here.

  1. The first requests made from start_urls are not filtered.
  2. You cannot override allowed_domains after the crawl has started.
  3. To deal with these, you need to write your own offsite middleware, or at least modify the existing one with the changes you need.

    Once the spider is opened, the OffsiteMiddleware that handles allowed_domains compiles the allowed_domains value into a regular-expression string, and after that the attribute is never consulted again.

    Add the following to your middlewares.py:

    from scrapy.spidermiddlewares.offsite import OffsiteMiddleware
    from scrapy.utils.httpobj import urlparse_cached
    class MyOffsiteMiddleware(OffsiteMiddleware):
    
        def should_follow(self, request, spider):
            """Return bool whether to follow a request"""
            # hostname can be None for wrong urls (like javascript links)
            host = urlparse_cached(request).hostname or ''
            if host in spider.allowed_domains:
                return True
            return False
    

    Activate it in your settings.py:
    SPIDER_MIDDLEWARES = {
        # enable our middleware
        'myspider.middlewares.MyOffsiteMiddleware': 500,
        # disable old middleware
        'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
    }
    

    Now your spider should only follow whatever is in allowed_domains, even if you modify it mid-crawl.

    EDIT: For your case:

    from scrapy.utils.httpobj import urlparse
    class MySpider(Spider):
        def __init__(self, *args, **kwargs):
            input = kwargs.get('urls', '').split(',') or []
            self.allowed_domains = [urlparse(d).netloc for d in input]
    

    Now you can run:

    scrapy crawl myspider -a "urls=foo.com,bar.com"
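
    For completeness, a minimal sketch of the spider side (assuming the urls argument may be passed with or without a scheme; it uses scrapy.linkextractors.LinkExtractor, which the deprecation warnings in your log point to, and the class/argument names are just placeholders):

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor
    from scrapy.utils.httpobj import urlparse

    class MySpider(CrawlSpider):
        name = "myspider"

        rules = [
            Rule(LinkExtractor(allow=()), follow=True, callback='parse_item'),
        ]

        def __init__(self, *args, **kwargs):
            super(MySpider, self).__init__(*args, **kwargs)
            # Accept -a "urls=foo.com,bar.com" as well as full URLs with a scheme.
            raw = [u for u in kwargs.get('urls', '').split(',') if u]
            urls = [u if u.startswith('http') else 'http://' + u for u in raw]
            # Hosts that the custom offsite middleware will allow.
            self.allowed_domains = [urlparse(u).netloc for u in urls]
            # start_urls must include a scheme, otherwise Scrapy raises
            # "Missing scheme in request url".
            self.start_urls = urls

        def parse_item(self, response):
            self.logger.info("Visited %s", response.url)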