I want to pass the array of URLs returned by the first function to the second function, but I'm not sure how to do that.
require 'open-uri'
require 'nokogiri'
require 'byebug'
def fetch_recipe_urls
  base_url = 'https://cooking.nytimes.com'
  easy_recipe_url = 'https://cooking.nytimes.com/search?q=easy'
  easy_searchpage = Nokogiri::HTML(open(easy_recipe_url))
  recipes = easy_searchpage.search('//article[@class="card recipe-card"]/@data-url')
  recipes_url_array = recipes.map do |recipe|
    uri = URI.parse(recipe.text)
    uri.scheme = "http"
    uri.host = "cooking.nytimes.com"
    uri.query = nil
    uri.to_s
  end
end
def scraper(url)
  html_file = open(url).read
  html_doc = Nokogiri::HTML(html_file)
  recipes = Array.new
  recipe = {
    title: html_doc.css('h1.recipe-title').text.strip,
    time: html_doc.css('span.recipe-yield-value').text.split("servings")[1],
    steps: html_doc.css('ol.recipe-steps').text.split.join(" "),
    ingredients: html_doc.css('ul.recipe-ingredients').text.split.join(" ")
  }
  recipes << recipe
end
Answer 0 (score: 1)
Since fetch_recipe_urls returns an array, you can iterate over it and call scraper for each URL inside:
def scraper(url)
  html_file = open(url).read
  html_doc = Nokogiri::HTML(html_file)
  {
    title: html_doc.css('h1.recipe-title').text.strip,
    time: html_doc.css('span.recipe-yield-value').text.split("servings")[1],
    steps: html_doc.css('ol.recipe-steps').text.split.join(" "),
    ingredients: html_doc.css('ul.recipe-ingredients').text.split.join(" ")
  }
end
fetch_recipe_urls.map { |url| scraper(url) }
However, I would actually structure the code like this:
BASE_URL = 'https://cooking.nytimes.com/'

def fetch_recipe_urls
  page = Nokogiri::HTML(open(BASE_URL + 'search?q=easy'))
  recipes = page.search('//article[@class="card recipe-card"]/@data-url')
  recipes.map { |recipe_node| BASE_URL + URI.parse(recipe_node.text).to_s }
end

def scrape(url)
  html_doc = Nokogiri::HTML(open(url).read)
  {
    title: html_doc.css('h1.recipe-title').text.strip,
    time: html_doc.css('span.recipe-yield-value').text.split("servings")[1],
    steps: html_doc.css('ol.recipe-steps').text.split.join(" "),
    ingredients: html_doc.css('ul.recipe-ingredients').text.split.join(" ")
  }
end
fetch_recipe_urls.map { |url| scrape(url) }
You could also call scrape / scraper from inside fetch_recipe_urls, but I'd recommend keeping each method to a single responsibility. An even better idea would be to make this object-oriented and build a Scraper class and a CookingRecipe class to make it more idiomatic.
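As a rough illustration of that OOP direction (the class layout, method names, and use of open-uri's URI.open below are my own assumptions rather than part of the answer, and the selectors are simply copied from the question, so they may not match the live site):

require 'open-uri'
require 'nokogiri'

# Simple value object holding one recipe's data.
CookingRecipe = Struct.new(:title, :time, :steps, :ingredients)

class Scraper
  BASE_URL = 'https://cooking.nytimes.com'

  # Fetches the "easy" search results and returns an array of CookingRecipe.
  def easy_recipes
    recipe_urls.map { |url| scrape(url) }
  end

  private

  # Collects recipe URLs from the search page (data-url holds a path).
  def recipe_urls
    page = Nokogiri::HTML(URI.open("#{BASE_URL}/search?q=easy"))
    page.search('//article[@class="card recipe-card"]/@data-url')
        .map { |node| BASE_URL + node.text }
  end

  # Scrapes a single recipe page into a CookingRecipe.
  def scrape(url)
    doc = Nokogiri::HTML(URI.open(url))
    CookingRecipe.new(
      doc.css('h1.recipe-title').text.strip,
      doc.css('span.recipe-yield-value').text,
      doc.css('ol.recipe-steps').text.split.join(' '),
      doc.css('ul.recipe-ingredients').text.split.join(' ')
    )
  end
end

# Usage: Scraper.new.easy_recipes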
Answer 1 (score: 0)
Or, if you want to pass the array to the scraper...
def fetch_recipe_urls
  ...
  recipes = scraper(recipes_url_array)
end

def scraper(urls)
  recipes = []
  urls.each do |url|
    html_file = open(url).read
    html_doc = Nokogiri::HTML(html_file)
    recipe = {
      title: html_doc.css('h1.recipe-title').text.strip,
      time: html_doc.css('span.recipe-yield-value').text.split("servings")[1],
      steps: html_doc.css('ol.recipe-steps').text.split.join(" "),
      ingredients: html_doc.css('ul.recipe-ingredients').text.split.join(" ")
    }
    recipes << recipe
  end
  recipes
end
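With this version the caller only invokes the first method and gets the scraped recipes back. A minimal usage sketch (assuming the elided lines still build recipes_url_array as in the question):

all_recipes = fetch_recipe_urls
all_recipes.each { |recipe| puts recipe[:title] }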