Getting an error in a Scrapy web crawler

Date: 2013-04-22 05:12:50

Tags: python web-scraping scrapy web-crawler scrapy-spider

Hello, I am trying to implement this feature in my code, but I get the following error: exceptions.NameError: global name 'Request' is not defined

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector 
from bs4 import BeautifulSoup

class spider_aicte(BaseSpider):
    name = "Indian_Colleges"
    allowed_domains = ["http://www.domain.org"]
    start_urls = [
        "http://www.domain.org/appwebsite.html",
        ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        soup = BeautifulSoup(response.body)
        for link in soup.find_all('a'):
            download_link = link.get('href')
            if '.pdf' in download_link:
                pdf_link = "http://www.domain.org" + download_link
                print pdf_link
                class FileSpider(BaseSpider):
                    name = "fspider"
                    allowed_domains = ["www.domain.org"]
                    start_urls = [
                            pdf_link
                            ]
        for url in pdf_link:
            yield Request(url, callback=self.save_pdf)

    def save_pdf(self, response):
         path = self.get_path(response.url)
         with open(path, "wb") as f:
            f.write(response.body)

1 answer:

Answer 0 (score: 9)

You should import Request before using it:

from scrapy.http import Request

Alternatively, there is a "shortcut" import:

from scrapy import Request

Or, if you already have an import scrapy line, use scrapy.Request.
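With the import in place, the loop in the question still has a separate bug: for url in pdf_link iterates over the characters of the last URL string, not over a list of URLs. Collecting the links in a list before yielding requests fixes that. Here is a minimal sketch of that collection step, written in modern Python 3 and using the standard library's html.parser in place of BeautifulSoup so it runs standalone; the domain and markup below are placeholders taken from the question, not a real site:

```python
from html.parser import HTMLParser

BASE = "http://www.domain.org"  # placeholder domain from the question


class PdfLinkCollector(HTMLParser):
    """Collect absolute URLs of <a href=...> links that point at PDFs."""

    def __init__(self):
        super().__init__()
        self.pdf_links = []  # a list, so iterating yields whole URLs

    def handle_starttag(self, tag, attrs):
        if tag != 'a':
            return
        href = dict(attrs).get('href')
        if href and '.pdf' in href:
            self.pdf_links.append(BASE + href)


collector = PdfLinkCollector()
collector.feed('<a href="/a.pdf">A</a><a href="/b.html">B</a>')
print(collector.pdf_links)  # ['http://www.domain.org/a.pdf']
```

In the spider, the parse method would then do `for url in collector.pdf_links: yield Request(url, callback=self.save_pdf)`, iterating full URLs rather than single characters.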