Scrapy only outputs a single opening bracket

Date: 2015-06-18 21:30:27

Tags: python url web-crawler scrapy scrape

I'm trying to scrape the titles and URLs of all Khan Academy pages under the math/science/economics sections. At the moment, however, it only outputs a single opening bracket, and before that it would only scrape the start URL.

from openbar_index.items import OpenBarIndexItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class OpenBarSpider(CrawlSpider):
    """
    scrapes website URLs from educational websites and commits urls/webpage names/text to a document
    """

    name = 'openbar'
    allowed_domains = 'khanacademy.org'
    start_urls = [

        "https://www.khanacademy.org"

    ]

    rules = [
        Rule(SgmlLinkExtractor(allow=['/math/']), callback='parse_item', follow=True),
        Rule(SgmlLinkExtractor(allow=['/science/']), callback='parse_item', follow=True),
        Rule(SgmlLinkExtractor(allow=['/economics-finance-domain/']), callback='parse_item', follow=True)
    ]

    def parse_item(self, response):
        item = OpenBarIndexItem()
        url = response.url
        item['url'] = url
        item['title'] = response.xpath('/html/head/title/text()').extract()
        yield item
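
For context, the OpenBarIndexItem imported at the top isn't shown in the question; it would presumably be a plain Scrapy Item declaring the two fields that parse_item assigns. A minimal sketch of what openbar_index/items.py might contain (hypothetical, not part of the original post):

from scrapy.item import Item, Field


class OpenBarIndexItem(Item):
    # the two fields populated by the spider's parse_item()
    url = Field()
    title = Field()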

Does anyone know why this is happening, or have any tips on how to fix it?

1 Answer:

Answer 0 (score: 0)

The problem is the assignment to allowed_domains. According to the Scrapy documentation, this must be a list, not a string. When a string is used, Scrapy filters the extracted links out as offsite requests, because no valid domain is recognized.

So adding square brackets on that line should fix it:

    allowed_domains = ['khanacademy.org']
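
For clarity, a minimal sketch of the spider's head with that one-line change applied (the rules and parse_item from the question stay exactly as they are):

class OpenBarSpider(CrawlSpider):

    name = 'openbar'
    # a list, not a bare string, so Scrapy's offsite filtering
    # can match extracted links against khanacademy.org
    allowed_domains = ['khanacademy.org']
    start_urls = [
        "https://www.khanacademy.org"
    ]

With the string version, every link the rules extract is treated as offsite and dropped, which is why only the start URL was ever scraped.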