Why doesn't my web crawling method find all the links?

Asked: 2015-04-11 15:36:41

Tags: ruby web-crawler nokogiri

I'm trying to build a simple web crawler, so I wrote this:

(The get_links method takes the parent link that we will crawl from:)

require 'nokogiri'
require 'open-uri'

def get_links(link)
    link = "http://#{link}"
    doc = Nokogiri::HTML(open(link))
    links = doc.css('a')
    hrefs = links.map {|link| link.attribute('href').to_s}.uniq.delete_if {|href| href.empty?}
    array = hrefs.select {|i| i[0] == "/"}
    host = URI.parse(link).host
    links_list = array.map {|a| "#{host}#{a}"}
end

(The search_links method takes the array from the get_links method and searches within that array:)

def search_links(urls)
    urls = get_links(link)
    urls.uniq.each do |url|
        begin
            links = get_links(url)
            compare = urls & links
            urls << links - compare
            urls.flatten!
        rescue OpenURI::HTTPError
            warn "Skipping invalid link #{url}"
        end
    end
    return urls
end

This method finds most of the links on a website, but not all of them.

What am I doing wrong? Which algorithm should I use?

2 Answers:

Answer 0 (score: 0)

Some comments about your code:

  def get_links(link)
    link = "http://#{link}"
    # You're assuming the protocol is always http.
    # This isn't the only protocol on used on the web.

    doc = Nokogiri::HTML(open(link))

    links = doc.css('a')
    hrefs = links.map {|link| link.attribute('href').to_s}.uniq.delete_if {|href| href.empty?}
    # You can write these two lines more compactly as
    #   hrefs = doc.xpath('//a/@href').map(&:to_s).uniq.delete_if(&:empty?)

    array = hrefs.select {|i| i[0] == "/"}
    # I guess you want to handle URLs that are relative to the host.
    # However, URLs relative to the protocol (starting with '//')
    # will also be selected by this condition.

    host = URI.parse(link).host
    links_list = array.map {|a| "#{host}#{a}"}
    # The value assigned to links_list will implicitly be returned.
    # (The assignment itself is futile; the right-hand part alone would
    # suffice.) Because this builds on `array`, all absolute URLs will be
    # missing from the return value. (A sketch addressing this follows below.)
  end

An explanation of
hrefs = doc.xpath('//a/@href').map(&:to_s).uniq.delete_if(&:empty?)
  • .xpath('//a/@href') uses XPath's attribute syntax to go directly to the href attributes of the a elements
  • .map(&:to_s) is shorthand notation for .map { |item| item.to_s }
  • .delete_if(&:empty?) uses the same shorthand notation
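As a minimal sketch of how get_links could cover those cases: resolve every href against the page's own URL with URI#merge, so that absolute, protocol-relative ('//...') and host-relative ('/...') links are all handled, and the scheme is no longer assumed to be http. The filter down to http/https links is an assumption here; adjust it to your needs.

require 'nokogiri'
require 'open-uri'
require 'uri'

def get_links(link)
  base = URI.parse(link)                 # assumes `link` already carries a scheme
  doc  = Nokogiri::HTML(URI.open(link))  # plain `open` on Rubies before 2.7
  doc.xpath('//a/@href')
     .map(&:to_s)
     .reject(&:empty?)
     .map { |href| base.merge(href).to_s rescue nil } # nil for malformed hrefs
     .compact
     .select { |url| url.start_with?('http://', 'https://') } # drop mailto:, javascript:, ...
     .uniq
end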

Comments on the second function:

def search_links(urls)
    urls = get_links(link)
    urls.uniq.each do |url|
      begin
        links = get_links(url)


        compare = urls & links
        urls << links - compare
        urls.flatten!
        # How about using a Set instead of an Array and
        # thus have the collection provide uniqueness of
        # its items, so that you don't have to? (A sketch follows below.)


      rescue OpenURI::HTTPError
         warn "Skipping invalid link #{url}"
      end
    end
    return urls
    # This function isn't recursive, it just calls `get_links` on two
    # 'levels'. Thus you search only two levels deep and return findings
    # from the first and second level combined. (Without the "zero'th"
    # level - the URL passed into `search_links`. Unless of course it
    # also occurred on the first or second level.)
    #
    # Is this what you intended?
  end
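Taking both of those points together, here is a sketch (not your original algorithm) of a crawl that uses a Set for deduplication and an explicit queue as the work list, so it keeps fetching until no new URLs turn up instead of stopping after two levels. It assumes a get_links that returns absolute URLs, like the sketch above:

require 'set'

def search_links(start_url)
  seen  = Set.new([start_url])
  queue = [start_url]
  until queue.empty?
    url = queue.shift
    begin
      get_links(url).each do |found|
        # Set#add? returns nil when the element is already present,
        # so every URL is enqueued (and fetched) at most once.
        queue << found if seen.add?(found)
      end
    rescue OpenURI::HTTPError
      warn "Skipping invalid link #{url}"
    end
  end
  seen.to_a
end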

Answer 1 (score: 0)

You should use Mechanize:

require 'mechanize'
agent = Mechanize.new
page = agent.get url
links = page.search('a[href]').map{|a| page.uri.merge(a[:href]).to_s}
# if you want to remove links with a different host (hyperlinks?)
links.reject!{|l| URI.parse(l).host != page.uri.host}

Otherwise, you won't correctly convert relative URLs into absolute URLs.
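For illustration, URI#merge (which is what page.uri.merge does above) resolves a relative reference against the base URL, including protocol-relative ones; the example.com URLs are placeholders:

require 'uri'

base = URI.parse('http://example.com/articles/index.html')
base.merge('/about').to_s                  #=> "http://example.com/about"
base.merge('../contact').to_s              #=> "http://example.com/contact"
base.merge('//cdn.example.com/x.js').to_s  #=> "http://cdn.example.com/x.js"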