Using R

Time: 2017-11-20 20:43:47

Tags: r web-scraping finance stock

I am trying to scrape some key stock statistics from Finviz. I applied the code from the original question: Web scraping of key stats in Yahoo! Finance with R. To collect statistics for as many stocks as possible, I created a list of ticker symbols and descriptions, like this:

Symbol Description
A      Agilent Technologies
AAA    Alcoa Corp
AAC    Aac Holdings Inc
BABA   Alibaba Group Holding Ltd
CRM    Salesforce.Com Inc
...

I selected the first column and stored it as a character vector in R, which I called stocks. As a rough sketch of that step (assuming the list is saved as a CSV; the file name here is hypothetical):
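
symbols <- read.csv("symbols.csv", stringsAsFactors = FALSE)  # hypothetical file name
stocks <- as.character(symbols$Symbol)

I then applied this code: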

library(XML)

for (s in stocks) {
  url <- paste0("http://finviz.com/quote.ashx?t=", s)
  webpage <- readLines(url)
  html <- htmlTreeParse(webpage, useInternalNodes = TRUE, asText = TRUE)
  tableNodes <- getNodeSet(html, "//table")

  # ASSIGN TO STOCK NAMED DFS
  assign(s, readHTMLTable(tableNodes[[9]],
                          header = c("data1", "data2", "data3", "data4", "data5", "data6",
                                     "data7", "data8", "data9", "data10", "data11", "data12")))

  # ADD COLUMN TO IDENTIFY STOCK
  df <- get(s)
  df['stock'] <- s
  assign(s, df)
}

# COMBINE ALL STOCK DATA
stockdatalist <- cbind(mget(stocks))
stockdata <- do.call(rbind, stockdatalist)
# MOVE STOCK ID TO FIRST COLUMN
stockdata <- stockdata[, c(ncol(stockdata), 1:(ncol(stockdata) - 1))]

However, for some stocks Finviz has no page at all, and I get an error message like this:

Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") :
  cannot open URL 'http://finviz.com/quote.ashx?t=AGM.A': HTTP status was '404 Not Found'

This is the case for many stocks, so I cannot remove them from my list manually. Is there a way to skip the pages for these stocks? Thanks in advance!

1 Answer:

Answer 0 (score: 1)

Maybe something along these lines? Try filtering the stocks before running your for loop.

library(tidyverse)

# AGM.A should produce an error
stocks <- c("AXP", "BA", "CAT", "AGM.A")
urls <- paste0("http://finviz.com/quote.ashx?t=", stocks)

# Test the urls with possibly() first and find out which ones are NA
temp_ind <- map(urls, possibly(readLines, otherwise = NA_real_))
ind <- map_lgl(map(temp_ind, c(1)), is.na)
ind <- which(ind == TRUE)
# guard against the case where every url works and ind is empty
filter.stocks <- if (length(ind) > 0) stocks[-ind] else stocks

# AGM.A is removed, so you can feed the stocks that work into your for loop.
filter.stocks
[1] "AXP" "BA"  "CAT"

As statxiong pointed out, here is a simpler version using url.exists:

library(RCurl)
library(tidyverse)

# urls as defined above
stocks[map_lgl(urls, url.exists)]
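
For completeness, a minimal end-to-end sketch of how this filter might slot into the question's workflow (the column naming and stock-id steps from the question are omitted for brevity):

library(RCurl)
library(tidyverse)
library(XML)

stocks <- c("AXP", "BA", "CAT", "AGM.A")
urls <- paste0("http://finviz.com/quote.ashx?t=", stocks)

# keep only the tickers whose Finviz page actually responds
valid.stocks <- stocks[map_lgl(urls, url.exists)]

for (s in valid.stocks) {
  url <- paste0("http://finviz.com/quote.ashx?t=", s)
  webpage <- readLines(url)
  html <- htmlTreeParse(webpage, useInternalNodes = TRUE, asText = TRUE)
  tableNodes <- getNodeSet(html, "//table")
  assign(s, readHTMLTable(tableNodes[[9]]))
}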