Scrapy error: Spider error processing

Posted: 2018-11-14 11:05:00

Tags: python web-scraping scrapy web-crawler scrapy-spider

I'm a beginner and I'm getting the error below when I run my code.

My code:

import urlparse

from scrapy.http import Request
from scrapy.spiders import BaseSpider
class legco(BaseSpider):
name = "sec_gov"

allowed_domains = ["www.sec.gov", "search.usa.gov", "secsearch.sec.gov"]
start_urls = ["https://www.sec.gov/cgi-bin/browse-edgar?company=&match=&CIK=&filenum=&State=&Country=&SIC=2834&owner=exclude&Find=Find+Companies&action=getcompany"]

#extract home page search results
def parse(self, response):
for link in response.xpath('//div[@id="seriesDiv"]//table[@class="tableFile2"]/a/@href').extract():
    req = Request(url = link, callback = self.parse_page)
    print link
    yield req

#extract second link search results
def parse_second(self, response):
for link in response.xpath('//div[@id="seriesDiv"]//table[@class="tableFile2"]//*[@id="documentsbutton"]/a/@href').extract():
    req = Request(url = link, callback = self.parse_page)
    print link
    yield req

As soon as I try to run this code with scrapy crawl sec_gov, I get this error:

2018-11-14 15:37:26 [scrapy.core.engine] INFO: Spider opened
2018-11-14 15:37:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-11-14 15:37:26 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-11-14 15:37:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.sec.gov/cgi-bin/browse-edgar?company=&match=&CIK=&filenum=&State=&Country=&SIC=2834&owner=exclude&Find=Find+Companies&action=getcompany> (referer: None)
2018-11-14 15:37:27 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.sec.gov/cgi-bin/browse-edgar?company=&match=&CIK=&filenum=&State=&Country=&SIC=2834&owner=exclude&Find=Find+Companies&action=getcompany> (referer: None)
Traceback (most recent call last):
File "/home/surukam/.local/lib/python2.7/site-packages/twisted/internet/defer.py", line 654, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/home/surukam/.local/lib/python2.7/site-packages/scrapy/spiders/__init__.py", line 90, in parse
raise NotImplementedError('{}.parse callback is not defined'.format(self.__class__.__name__))
NotImplementedError: legco.parse callback is not defined
2018-11-14 15:37:27 [scrapy.core.engine] INFO: Closing spider (finished)

Can anyone help me? Thanks in advance.

1 answer:

Answer 0 (score: 0):

Your code shouldn't run at all; several things need fixing before the script will work. Where did you find self.parse_page? You pass it as a callback in both loops, but the only second method you define is parse_second, so parse_page exists nowhere in your script. Your indentation is also badly broken: because parse is not indented under the class, Scrapy never sees a parse method on your spider and falls back to the base Spider.parse, which is exactly the NotImplementedError in your traceback. I've fixed the script (switching from the deprecated BaseSpider to scrapy.Spider along the way); it now follows every URL on the landing page and yields the related document links from the inner pages. Try it and see what you get.

import scrapy

class legco(scrapy.Spider):
    name = "sec_gov"

    start_urls = ["https://www.sec.gov/cgi-bin/browse-edgar?company=&match=&CIK=&filenum=&State=&Country=&SIC=2834&owner=exclude&Find=Find+Companies&action=getcompany"]

    def parse(self, response):
        for link in response.xpath('//table[@summary="Results"]//td[@scope="row"]/a/@href').extract():
            absoluteLink = response.urljoin(link)
            yield scrapy.Request(url = absoluteLink, callback = self.parse_page)

    def parse_page(self, response):
        for links in response.xpath('//table[@summary="Results"]//a[@id="documentsbutton"]/@href').extract():
            targetLink = response.urljoin(links)
            yield {"links":targetLink}