Nested for loop scrape behaving strangely in R

Time: 2019-04-18 03:04:20

Tags: r rvest

I am trying to scrape hockey data for the 2000-2001, 2001-2002 and 2002-2003 seasons, where each season's tables are spread across many pages. Here is my scraping function (ushl_scrape):

library(rvest)  # read_html(), html_node(), html_table()
library(dplyr)  # filter(), mutate(), bind_rows(), %>%

ushl_scrape <- function(season, page) {

  # Set url of webpage
  custom_url <- paste0("https://www.eliteprospects.com/league/ushl/stats/", season, "?sort=ppg&page=", page)

  # Scrape
  url <- read_html(custom_url)

  ushl <- url %>% 
    html_node(xpath = "/html/body/section[2]/div/div[1]/div[4]/div[3]/div[1]/div/div[4]/table") %>% 
    html_table() %>% 
    filter(Player != "") %>% 
    mutate(season = season)

  # Return table
  ushl
}

I then run ushl_scrape over the three seasons with this for loop. To explain the loop: since I don't know how many pages of data each season has, I scrape pages 1:10, and when I hit a page with 0 rows I move on to the next season.

# Total years
total_years <- paste0(2000:2002, "-", 2001:2003)

# Page
page_num <- c(1:10)

final_list <- vector("list", length = length(total_years))
by_year <- vector("list")


for (ii in seq_along(total_years)) {

  # Sleep for 2 seconds to not bombard server
  Sys.sleep(2)

  for (jj in seq_along(page_num)) {

    Sys.sleep(2)

    # Scrape season[ii] and page_num[jj]
    scraped_table <- ushl_scrape(season = total_years[ii], page = page_num[jj])

    # If scraped table has no rows, exit for loop!
    if (nrow(scraped_table) == 0) {
      break
    } else{
      by_year[[jj]] <- scraped_table
    }
  }

  # Store final_df inside final_list
  final_df <- bind_rows(by_year)
  final_list[[ii]] <- final_df

}

# Finally, bind rows all the elements in list
scraped_df <- bind_rows(final_list)

In scraped_df I see data for all three seasons, but at the end the 2001-2002 season data is duplicated...

  1. Why does my for loop append the 2001-2002 season data again at the end?
  2. How can I fix it?

1 Answer:

Answer 0 (score: 0)

Yes, some rows are duplicated. Running your code as-is yields 46 duplicated rows.

sum(duplicated(scraped_df))
#[1] 46

The problem is that you have to initialize by_year for each total_year inside the outer for loop. Since you don't, the by_year values from the previous iteration are never cleared, and those stale elements end up in the result as duplicates.
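To see why the stale elements survive, here is a minimal sketch, independent of the scraping code and using made-up "season/page" strings, of a list that is reused across outer-loop iterations without being re-initialized:

```r
out <- vector("list")
for (ii in 1:2) {
  n_pages <- c(3, 1)[ii]  # pretend season 1 has 3 pages, season 2 has only 1
  for (jj in 1:n_pages) {
    out[[jj]] <- paste("season", ii, "page", jj)
  }
  print(unlist(out))
}
# In the second iteration only out[[1]] is overwritten; out[[2]] and
# out[[3]] still hold "season 1 page 2" and "season 1 page 3".
```

Because your 2001-2002 season has more pages than 2002-2003, its trailing pages are still sitting in by_year when the last season's rows are bound, which is exactly the duplication you observed.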

for (ii in seq_along(total_years)) {

  # Sleep for 2 seconds to not bombard server
  Sys.sleep(2)

  by_year <- vector("list") # <- Added this line

  for (jj in seq_along(page_num)) {

    Sys.sleep(2)

    # Scrape season[ii] and page_num[jj]
    scraped_table <- ushl_scrape(season = total_years[ii], page = page_num[jj])

    # If scraped table has no rows, exit for loop!
    if (nrow(scraped_table) == 0) {
      break
    } else {
      by_year[[jj]] <- scraped_table
    }
  }

  # Store final_df inside final_list
  final_df <- bind_rows(by_year)
  final_list[[ii]] <- final_df

}

scraped_df <- bind_rows(final_list)

We can now check for duplicated rows:

sum(duplicated(scraped_df))
#[1] 0
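As an alternative sketch that avoids the shared by_year variable entirely, each season's pages can be accumulated in a local list inside lapply, so nothing can leak between seasons (this assumes the ushl_scrape function and total_years / page_num objects defined above):

```r
final_list <- lapply(total_years, function(season) {
  pages <- list()  # local to this season; no state carries over
  for (jj in page_num) {
    Sys.sleep(2)
    tab <- ushl_scrape(season = season, page = jj)
    if (nrow(tab) == 0) break  # empty page: this season is done
    pages[[jj]] <- tab
  }
  bind_rows(pages)
})

scraped_df <- bind_rows(final_list)
```

The design point is scoping: because pages is created fresh inside the function for every season, forgetting to re-initialize it is no longer possible.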