Getting all URLs of an entire website with Scrapy

Date: 2017-10-11 13:42:13

Tags: python web-scraping scrapy web-crawler

Folks! I'm trying to get all the internal URLs of an entire website for SEO purposes, and I recently discovered Scrapy to help me with this task. But my code always returns an error:

2017-10-11 10:32:00 [scrapy.core.engine] INFO: Spider opened
2017-10-11 10:32:00 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-11 10:32:00 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-10-11 10:32:01 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.**test**.com/> from <GET http://www.**test**.com/robots.txt>
2017-10-11 10:32:02 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.**test**.com/> (referer: None)
2017-10-11 10:32:03 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.**test**.com/> from <GET http://www.**test**.com>
2017-10-11 10:32:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.**test**.com/> (referer: None)
2017-10-11 10:32:03 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.**test**.com/> (referer: None)
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "c:\python27\lib\site-packages\scrapy\spiders\__init__.py", line 90, in parse
    raise NotImplementedError
NotImplementedError

I have changed the original URLs.

Here is the code I'm running:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class TestSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["http://www.test.com"]
    start_urls = ["http://www.test.com"]

    rules = [Rule (LinkExtractor(allow=['.*']))]

Thanks!

Edit:

This worked for me:

rules = (
    Rule(LinkExtractor(), callback='parse_item', follow=True),
)

def parse_item(self, response):
    # append the URL of every crawled page to file.txt
    filename = response.url
    arquivo = open("file.txt", "a")
    arquivo.write(filename + '\n')
    arquivo.close()
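
The same callback can also be written with a with block, so the file is closed even if the write fails (a sketch with the same output):

def parse_item(self, response):
    # append the URL of every visited page to file.txt, one per line
    with open("file.txt", "a") as arquivo:
        arquivo.write(response.url + '\n')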

=D

1 Answer:

Answer 0 (score: 1):

The error you're getting is caused by the fact that your spider does not define a parse method, which is mandatory if you base your spider on the scrapy.Spider class.
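
As a minimal illustration, a sketch of a spider that defines it (keeping the name and URL from the question; yielding the URL as a one-field item is just a placeholder):

import scrapy


class TestSpider(scrapy.Spider):
    name = "test"
    start_urls = ["http://www.test.com"]

    # Scrapy calls parse() on every response downloaded from start_urls;
    # leaving it undefined is what raises the NotImplementedError above.
    def parse(self, response):
        yield {"url": response.url}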

For your purpose (i.e. crawling a whole website), it's better to base your spider on the scrapy.CrawlSpider class. Also, in the Rule, you have to define the callback attribute as the method that will parse every page you visit. One last cosmetic change: in the LinkExtractor, you can omit allow=['.*'] if you want to visit every page, because its default value is an empty tuple, which means it will match all links found.
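
Putting that together, a sketch of the question's spider rebuilt as a CrawlSpider (the domain is the question's placeholder, and the yielded one-field item is an assumed way to collect the results):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class TestSpider(CrawlSpider):
    name = "test"
    # allowed_domains expects bare domain names, not URLs with a scheme
    allowed_domains = ["test.com"]
    start_urls = ["http://www.test.com"]

    # An empty LinkExtractor matches every link on a page;
    # follow=True keeps the crawl going from each extracted page.
    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    # CrawlSpider reserves parse() for its own logic,
    # so the Rule callback gets a different name.
    def parse_item(self, response):
        yield {"url": response.url}

Run as scrapy crawl test -o urls.csv to write every crawled URL to a file.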

For concrete code, check the CrawlSpider example in the Scrapy documentation.