Sidekiq and Mechanize: instance being overwritten

Date: 2016-06-13 22:11:13

Tags: ruby web-scraping nokogiri mechanize sidekiq

I am building a simple web spider using Sidekiq and Mechanize.

It works fine when I run it for one domain. When I run it for multiple domains, it fails. I believe the reason is that web_page gets overwritten when another Sidekiq worker instantiates it, but I am not sure whether that is actually the case or how to fix it.

# my scrape_search controller's create action searches on google.
def create
  @scrape = ScrapeSearch.build(keywords: params[:keywords], profession: params[:profession])
  agent = Mechanize.new
  scrape_search = agent.get('http://google.com/') do |page|
    search_result = page.form...
    search_result.css("h3.r").map do |link|
      result = link.at_css('a')['href'] # Narrowing down to real search results
      @domain = Domain.new(some params)
      ScrapeDomainWorker.perform_async(@domain.url, @domain.id, remaining_keywords)
    end
  end
end

I am creating one Sidekiq job per domain. Most of the domains I am after should only contain a few pages, so I don't need sub-jobs per page.
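As I understand it, Sidekiq serializes the arguments passed to perform_async into JSON before pushing them to Redis, which is why I pass the plain URL string and record id and re-load the Domain inside the worker. Roughly (the values here are made up for illustration):

ScrapeDomainWorker.perform_async('example.com', 42, ['keyword one', 'keyword two'])
# Inside perform, domain_id comes back as an Integer and keywords as a plain Array
# of Strings; richer Ruby objects would not survive the JSON round-trip.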

Here is my worker:

class ScrapeDomainWorker
  include Sidekiq::Worker
  ...

  def perform(domain_url, domain_id, keywords)
    @domain       = Domain.find(domain_id)
    @domain_link  = @domain.protocol + '://' + domain_url
    @keywords     = keywords

    # First we scrape the homepage and get the first links
    @domain.to_parse = ['/']  # to_parse is an array of PATHS to parse for the domain
    mechanize_path('/')
    @domain.verified << '/' # verified is an Array field containing valid domain paths
    get_paths(@web_page) # Now we should have to_parse populated with homepage links

    @domain.scraped = 1 # Loop counter
    while @domain.scraped < 100
      @domain.to_parse.each do |path|
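        # (Side note: deleting from to_parse while iterating over it can make
        # #each skip entries; iterating over a copy, e.g. to_parse.dup, avoids that.)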
        @domain.to_parse.delete(path)
        @domain.scraped += 1
        mechanize_path(path) # We create a Nokogiri HTML doc with mechanize for the valid path
        ...
        get_paths(@web_page) # Fire this to repopulate to_parse !!!
      end
    end
    @domain.save
  end

  def mechanize_path(path)
    agent = Mechanize.new
    begin
      @web_page = agent.get(@domain_link + path)
    rescue Exception => e
      puts "Mechanize Exception for #{path} :: #{e.message}"
    end
  end

  def get_paths(web_page)
    paths = web_page.links.map {|link| link.href.gsub((@domain.protocol + '://' + @domain.url), "") } ## This works when I scrape a single domain, but fails with ".gsub for nil" when I scrape a few domains.
    paths.uniq.each do |path|
      @domain.to_parse << path
    end  
  end

end 

This works when I scrape a single domain, but fails with ".gsub for nil" when I scrape several domains.

1 Answer:

Answer 0 (score: 1):

You can wrap your code in another class and then create an object of that class inside your worker:

class ScrapeDomainWrapper
  def initialize(domain_url, domain_id, keywords)
    # ...
  end

  def mechanize_path(path)
    # ...
  end

  def get_paths(web_page)
    # ...
  end
end

And your worker:

class ScrapeDomainWorker
  include Sidekiq::Worker

  def perform(domain_url, domain_id, keywords)
    ScrapeDomainWrapper.new(domain_url, domain_id, keywords)
  end
end

Also, bear in mind that Mechanize::Page#links may be nil.
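For example, a rough sketch of how get_paths could guard against nil values, reusing the names from your question (illustration only, not a drop-in fix):

def get_paths(web_page)
  return if web_page.nil? # mechanize_path may have rescued an error and left it unset

  base  = @domain.protocol + '://' + @domain.url
  links = web_page.links || []
  paths = links.map(&:href).compact.map { |href| href.gsub(base, '') }
  paths.uniq.each { |path| @domain.to_parse << path }
end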