Nothing is returned when I try to scrape mlb.com transactions with rvest

Asked: 2019-05-24 15:58:15

Tags: r function dataframe web-scraping rvest

I have been trying to scrape the MLB transactions page (http://mlb.mlb.com/mlb/transactions/index.jsp#month=5&year=2019) to get the date and text of each transaction, but with no luck. Using rvest and SelectorGadget I wrote a short function that should give me every displayed table from the first available month in 2001 through March 2019.

All I get is the series of errors below, and nothing happens.

Here is my code for scraping data from the given site.

library(tidyverse)
library(rvest)


# breaking the URL into the start and end for easy pasting to fit timespan
url_start = "http://mlb.mlb.com/mlb/transactions/index.jsp#month="
url_end = "&year="

# function which scrapes data

mlb_transactions = function(month, year){

  url = paste0(url_start, month, url_end, year)

  payload = read_html(url) %>%
              html_nodes("td") %>%
                html_table() %>%
                  as.data.frame()

  payload

}

# function run on appropriate dates

mlb_transactions(month = 1:12, year = 2001:2019)

Here is the error I get:

 Error in doc_parse_file(con, encoding = encoding, as_html = as_html, options = options) : 
  Expecting a single string value: [type=character; extent=19]. 

Here is the traceback:

12.
stop(structure(list(message = "Expecting a single string value: [type=character; extent=19].", 
    call = doc_parse_file(con, encoding = encoding, as_html = as_html, 
        options = options), cppstack = NULL), class = c("Rcpp::not_compatible", 
"C++Error", "error", "condition"))) 
11.
doc_parse_file(con, encoding = encoding, as_html = as_html, options = options) 
10.
read_xml.character(x, encoding = encoding, ..., as_html = TRUE, 
    options = options) 
9.
read_xml(x, encoding = encoding, ..., as_html = TRUE, options = options) 
8.
withCallingHandlers(expr, warning = function(w) invokeRestart("muffleWarning")) 
7.
suppressWarnings(read_xml(x, encoding = encoding, ..., as_html = TRUE, 
    options = options)) 
6.
read_html.default(url) 
5.
read_html(url) 
4.
eval(lhs, parent, parent) 
3.
eval(lhs, parent, parent) 
2.
read_html(url) %>% html_nodes("td") %>% html_table() %>% as.data.frame() 
1.
mlb_transactions(month = 1:12, year = 2001:2019) 

One last note: my plan (though I do not yet know how to do it) concerns the fact that on the transactions table not every transaction has a date directly to its left. Once a date span is loaded, I would like every empty date cell to be filled with the value from the populated cell directly above it. Does this call for some kind of loop, or is there a better way to load the dates from the start?
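The fill-down described above can be sketched as follows (Python/pandas here purely for illustration; the data frame and its values are made-up stand-ins — in R, tidyr::fill() performs the same operation):

```python
import pandas as pd

# Hypothetical scraped table: blank date cells should inherit the date above them
df = pd.DataFrame({
    "date": ["May 24", None, None, "May 23", None],
    "transaction": ["trade A", "trade B", "trade C", "trade D", "trade E"],
})

# Forward-fill: each empty date takes the last populated value above it
df["date"] = df["date"].ffill()
print(df["date"].tolist())  # ['May 24', 'May 24', 'May 24', 'May 23', 'May 23']
```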

2 Answers:

Answer 0 (score: 2)

Pseudocode (language agnostic)

There is an alternative URL construction that returns JSON via a query string. The query string takes a start and end date.

http://lookup-service-prod.mlb.com/json/named.transaction_all.bam?start_date=20010101&end_date=20031231&sport_code=%27mlb%27

Testing with Python (so R mileage may vary — I hope to add an R example later) shows you can issue requests two years* at a time and get a JSON response containing rows of data.

*This was the more reliable timespan.

You can build this in a loop over 2001 to 2018 in steps of 2.

Time intervals:

['2001-2002', '2003-2004', '2005-2006', '2007-2008', '2009-2010', '2011-2012', '2013-2014', '2015-2016', '2017-2018']

Then parse the json response for the data of interest. Example json response here.

Example row from the json:

{"trans_date_cd":"D","from_team_id":"","orig_asset":"Player","final_asset_type":"","player":"Rafael Roque","resolution_cd":"FIN","final_asset":"","name_display_first_last":"Rafael Roque","type_cd":"REL","name_sort":"ROQUE, RAFAEL","resolution_date":"2001-03-14T00:00:00","conditional_sw":"","team":"Milwaukee Brewers","type":"Released","name_display_last_first":"Roque, Rafael","transaction_id":"94126","trans_date":"2001-03-14T00:00:00","effective_date":"2001-03-14T00:00:00","player_id":"136305","orig_asset_type":"PL","from_team":"","team_id":"158","note":"Milwaukee Brewers released LHP Rafael Roque."}
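Once parsed, each row is a plain dict, so pulling out the date and transaction text (what the rvest approach was after) is direct. A minimal sketch using a few fields from the sample row above (truncated to only the keys used here):

```python
import json

# A truncated copy of the sample row shown above
row_json = ('{"player": "Rafael Roque", "team": "Milwaukee Brewers", '
            '"type": "Released", "trans_date": "2001-03-14T00:00:00", '
            '"note": "Milwaukee Brewers released LHP Rafael Roque."}')

row = json.loads(row_json)

# The date portion of the ISO timestamp, plus the free-text description
date = row["trans_date"][:10]
text = row["note"]
print(date, "-", text)  # 2001-03-14 - Milwaukee Brewers released LHP Rafael Roque.
```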

Note:

Non-bulk use of the material is allowed, but bulk use requires prior consent.


Python example:

import requests

for year in range(2001, 2018, 2):       
    r = requests.get('http://lookup-service-prod.mlb.com/json/named.transaction_all.bam?start_date={0}0101&end_date={1}1231&sport_code=%27mlb%27'.format(year,year + 1)).json()
    print(len(r['transaction_all']['queryResults']['row'])) # just to demonstrate response content

len(r['transaction_all']['queryResults']['row'])

gives the number of rows of data (transactions) per request, i.e. per two-year span.

This yields the following transaction counts:

[163, 153, 277, 306, 16362, 19986, 20960, 23352, 24732]

Answer 1 (score: 2)

Here is an alternative in R, similar to @QHarr's solution. The function get_data below takes year as an argument and pulls the data between year and year+1 as the start and end dates.

get_data <- function (year) {
  # Build the query: a two-year window from Jan 1 of `year` to Dec 31 of `year + 1`
  root_url <- 'http://lookup-service-prod.mlb.com'
  params_dates <- sprintf('start_date=%s0101&end_date=%s1231', year, year + 1)
  params <- paste('/json/named.transaction_all.bam?&sport_code=%27mlb%27', params_dates, sep = '&')
  # Parse the JSON response into an R list
  js <- jsonlite::fromJSON(paste0(root_url, params))
  return (js)
}
get_processed_data <- function (year) get_data(year=year)$transaction_all$queryResults$row

The output js is of class list, and the data is stored in $transaction_all$queryResults$row.

Finally, the same loop as in the other solution prints the number of rows returned per request:

for (year in seq(2001, 2018, 2)) print(nrow(get_data(year)$transaction_all$queryResults$row))
# [1] 163
# [1] 153
# [1] 277
# [1] 306
# [1] 16362
# [1] 19986
# [1] 20960
# [1] 23352
# [1] 24732
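Beyond counting rows, the per-request batches can be stacked into one table. A sketch in Python/pandas with made-up stand-in rows in place of the network calls (the R equivalent would be binding each get_data(year)$transaction_all$queryResults$row with rbind or purrr::map_dfr):

```python
import pandas as pd

# Stand-ins for the parsed rows returned by each two-year request
batches = [
    [{"player": "A", "trans_date": "2001-03-14T00:00:00"}],
    [{"player": "B", "trans_date": "2003-07-01T00:00:00"},
     {"player": "C", "trans_date": "2004-08-02T00:00:00"}],
]

# Concatenate every batch into a single DataFrame
all_rows = pd.concat([pd.DataFrame(b) for b in batches], ignore_index=True)
print(len(all_rows))  # 3
```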