Scrapy: follow internal URLs only, but extract all links found

Asked: 2015-01-15 13:22:03

Tags: python scrapy web-crawler scrape scrapy-spider

I want to get all external links from a given website using Scrapy. With the following code, the spider crawls the external links as well:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from myproject.items import someItem

class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']

    # no allow/deny restriction on the extractor, so every link is followed
    rules = (Rule(LinkExtractor(), callback="parse_obj", follow=True),)

    def parse_obj(self, response):
        item = someItem()
        item['url'] = response.url
        return item

What am I missing? Doesn't allowed_domains prevent the external links from being crawled? If I set allow_domains on the LinkExtractor, it does not extract the external links. Just to clarify: I want to crawl internal links but extract external links. Any help appreciated!

4 Answers:

Answer 0 (score: 11):

You can also use a link extractor to pull all the links once you are parsing each page.

The link extractor will filter the links for you. In this example, the link extractor denies links in the allowed domain, so it only gets the external links.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.lxmlhtml import LxmlLinkExtractor
from myproject.items import someItem

class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']

    # follow every internal link; allowed_domains keeps the crawl on-site
    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        # deny the allowed domains here, so only external links survive
        for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item = someItem()
            item['url'] = link.url
            yield item
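
Note that the deny parameter of the link extractor is matched as a regular expression against the absolute URL, so the plain domain strings above only work because they happen to match as substrings. If a domain contains regex metacharacters, escaping it first is safer. A minimal sketch of the loop inside parse_obj, assuming the same spider as above:

import re

# escape the domains so '.' is not treated as a regex wildcard
deny_patterns = [re.escape(domain) for domain in self.allowed_domains]
for link in LxmlLinkExtractor(allow=(), deny=deny_patterns).extract_links(response):
    item = someItem()
    item['url'] = link.url
    yield item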

Answer 1 (score: 4):

Updated code based on 12Ryan12's answer:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from scrapy.item import Item, Field

class MyItem(Item):
    url = Field()


class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']

    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        # collect all external links found on this page into a single item
        item = MyItem()
        item['url'] = []
        for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item['url'].append(link.url)
        return item
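
With this version, each crawled page returns one item carrying a list of external URLs, so Scrapy's standard feed export can collect everything into a single file; the output filename below is just an example:

scrapy crawl crawltest -o external_links.json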

Answer 2 (score: 3):

A solution is to use the process_links argument of the Rule, here combined with an SgmlLinkExtractor. Documentation: http://doc.scrapy.org/en/latest/topics/link-extractors.html

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class testSpider(CrawlSpider):
    name = "test"
    bot_name = 'test'
    allowed_domains = ["news.google.com"]
    start_urls = ["https://news.google.com/"]

    rules = (
        Rule(SgmlLinkExtractor(allow_domains=()), callback='parse_items',
             process_links="filter_links", follow=True),
    )

    def filter_links(self, links):
        # print every link pointing outside the allowed domain, then pass
        # the full list on so the crawl itself is unaffected
        for link in links:
            if self.allowed_domains[0] not in link.url:
                print(link.url)

        return links

    def parse_items(self, response):
        ### ...
        pass
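
Since whatever filter_links returns is what the crawler goes on to follow, the same hook can also split the links explicitly: follow the internal ones, record the external ones. A sketch along those lines (untested; it uses the Python 2 standard-library URL parser and does an exact host match only, so subdomains are not covered):

from urlparse import urlparse  # use urllib.parse on Python 3

def filter_links(self, links):
    internal = []
    for link in links:
        # exact netloc match against allowed_domains; subdomains need extra handling
        if urlparse(link.url).netloc in self.allowed_domains:
            internal.append(link)
        else:
            print(link.url)  # external link found
    return internal  # follow only the internal links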

Answer 3 (score: -5):

pip install -U scrapy solved my problem :)