I've done some research on this and tried to get it working on my own, but to no avail. I'm trying to concatenate several strings together so that I can download CSV files from the web.
This little script works for a single stock:
read.csv("http://financials.morningstar.com/ajax/exportKR2CSV.html?&t=AAPL",header=T,stringsAsFactors = F,skip = 2)[,-c(12)]->spreadsheet
I'm trying to concatenate the strings, but things aren't working out for me.
stocks <- c("AXP","BA","CAT","CSCO")
for (s in stocks)
{
paste("read.csv("http://financials.morningstar.com/ajax/exportKR2CSV.html?&t=",s,header=T,stringsAsFactors = F,skip = 2)[,-c(12)]->spreadsheet)
paste("write.table(stockdata, "C:/Users/rshuell001/Desktop/files/",s,".csv", sep=",", row.names=FALSE, col.names=FALSE))
}
Or.....
stocks <- c("AXP","BA","CAT","CSCO")
for (s in stocks)
{
cat("read.csv("http://financials.morningstar.com/ajax/exportKR2CSV.html?&t=",s,header=T,stringsAsFactors = F,skip = 2)[,-c(12)]->spreadsheet)
cat("write.table(stockdata, "C:/Users/rshuell001/Desktop/files/",s,".csv", sep=",", row.names=FALSE, col.names=FALSE))
}
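For reference, the calls above don't need to be built as strings at all; only the URL and the output file name need concatenation. A minimal corrected sketch of what the loop seems to be attempting (the directory path is the asker's own; the helper names are illustrative, not from the original):

```r
stocks <- c("AXP", "BA", "CAT", "CSCO")

# Build the download URL for one ticker; paste0 concatenates with no separator
make_url <- function(s) {
  paste0("http://financials.morningstar.com/ajax/exportKR2CSV.html?&t=", s)
}

# The corrected loop, wrapped in a function (not run here; it hits the network)
download_all <- function(stocks, dir) {
  for (s in stocks) {
    # read the CSV for this ticker and drop column 12, as in the one-stock script
    spreadsheet <- read.csv(make_url(s), header = TRUE,
                            stringsAsFactors = FALSE, skip = 2)[, -12]
    # write it out to <dir>/<ticker>.csv
    write.table(spreadsheet, file.path(dir, paste0(s, ".csv")),
                sep = ",", row.names = FALSE, col.names = FALSE)
  }
}
```

Usage would then be something like `download_all(stocks, "C:/Users/rshuell001/Desktop/files")`.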
Answer 0 (score: 2)
We can use sprintf to create a vector of urls:
urls <- sprintf("http://financials.morningstar.com/ajax/exportKR2CSV.html?&t=%s", stocks)
Then loop through the urls and read them in:
lst <- lapply(urls, read.csv, header=TRUE, stringsAsFactors=FALSE, skip=2)
lst1 <- lapply(lst, `[`, -12)
Then we can write the files out by looping through the list. Or, as @Richard Scriven mentioned, fread from data.table would be an option, since it has a drop argument to remove columns.
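A sketch of that write-out step (the directory path is the asker's; `lst` is the list produced by the lapply calls above, and the helper name is illustrative):

```r
stocks <- c("AXP", "BA", "CAT", "CSCO")

# Write each data frame in lst to its own CSV file, named after its ticker
write_all <- function(lst, stocks, dir) {
  for (i in seq_along(lst)) {
    write.csv(lst[[i]],
              file = file.path(dir, paste0(stocks[i], ".csv")),
              row.names = FALSE)
  }
}

# The data.table alternative mentioned above would read and drop column 12
# in one step (assuming data.table is installed):
# lst <- lapply(urls, data.table::fread, skip = 2, drop = 12)
```

For example, `write_all(lst, stocks, "C:/Users/rshuell001/Desktop/files")`.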