I have the following list:
library(rjson)
j <- fromJSON(file='https://esgf-data.dkrz.de/esg-search/search/?offset=0&limit=1000&type=Dataset&replica=false&latest=true&project=CORDEX&domain=EUR-11&experiment=rcp85&time_frequency=day&facets=rcm_name%2Cproject%2Cproduct%2Cdomain%2Cinstitute%2Cdriving_model%2Cexperiment%2Cexperiment_family%2Censemble%2Crcm_version%2Ctime_frequency%2Cvariable%2Cvariable_long_name%2Ccf_standard_name%2Cdata_node&format=application%2Fsolr%2Bjson')
I am interested in extracting the data from the component j$response$docs, which is a list of lists. The "inner" lists should all have the same names. I would like to save the output as a data.frame() or tibble().
The following works for a few selected variables and produces the desired output:
nmod <- length(j$response$docs)
for (i in 1:nmod) {
  # select one list at a time
  j1 <- j$response$docs[[i]]
  tmp <- data.frame(variable = j1$variable,
                    variable_long_name = j1$variable_long_name,
                    rcm_name = j1$rcm_name,
                    driving_model = j1$driving_model,
                    cf_standard_name = j1$cf_standard_name)
  # join them
  if (i == 1) {
    d <- tmp
  } else {
    d <- rbind(d, tmp)
  }
}
However, I wonder whether there is a more elegant and efficient way, possibly using tidyr, dplyr, or purrr, that would also let me select all of the columns rather than just the few chosen there.
Answer 0 (score: 2)
You can do this with the help of the purrr package. I thought at_depth might work here, but I ended up using a nested map_df.
library(purrr)
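For reference, here is a minimal, untested sketch of the at_depth route (at_depth was later renamed modify_depth in newer purrr releases): it collapses every depth-2 element to length 1, then row-binds the resulting named lists.
# Untested sketch: collapse each depth-2 element (each field vector inside
# each doc) to a single string, then bind the named lists into one tibble.
library(dplyr)  # for bind_rows and the pipe
d0 <- j$response$docs %>%
  at_depth(2, paste, collapse = ",") %>%
  bind_rows()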
Your variables have different lengths, so the first step is to make sure each variable has length 1. This can be done by collapsing each element of the inner lists with paste; I used a comma as the separator. Doing this via map_df returns a one-row tibble.
Here is an example for the first inner list.
map_df(j$response$docs[[1]], paste, collapse = ",")
Now we can loop over the outer list, creating a one-row tibble for each inner list, and use the outer map_df to bind them all together. The output is an 832-row tibble, one row per inner list. I use the .id argument to add a grouping variable to the result, which may not be needed.
d1 = map_df(j$response$docs, ~map_df(.x, paste, collapse = ","), .id = "group")
d1
# A tibble: 832 × 45
group id version
<chr> <chr> <chr>
1 1 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.clh.v20131119|cordexesg.dmi.dk 20131119
2 2 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.clivi.v20131119|cordexesg.dmi.dk 20131119
3 3 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.rsds.v20131119|cordexesg.dmi.dk 20131119
4 4 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.rlds.v20131119|cordexesg.dmi.dk 20131119
5 5 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.rsus.v20131119|cordexesg.dmi.dk 20131119
6 6 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.rlus.v20131119|cordexesg.dmi.dk 20131119
7 7 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.rsdt.v20131119|cordexesg.dmi.dk 20131119
8 8 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.rsut.v20131119|cordexesg.dmi.dk 20131119
9 9 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.rlut.v20131119|cordexesg.dmi.dk 20131119
10 10 cordex.output.EUR-11.DMI.ICHEC-EC-EARTH.rcp85.r3i1p1.HIRHAM5.v1.day.psl.v20131119|cordexesg.dmi.dk 20131119
# ... with 822 more rows, and 42 more variables
If you want multiple rows for variables with length greater than 1, such as access and experiment_family, you can use tidyr::separate_rows to split the data into multiple rows.
tidyr::separate_rows(d1, experiment_family)
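If several collapsed columns need expanding, the calls can be chained (an untested sketch; note that each separate_rows() call multiplies the rows for that column's values):
# Untested sketch: expand two comma-collapsed columns in sequence.
library(tidyr)
library(dplyr)
d1 %>%
  separate_rows(experiment_family) %>%
  separate_rows(access)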
Answer 1 (score: 0)
Instead of rjson, use this:
library(jsonlite)
j <- jsonlite::fromJSON('https://esgf-data.dkrz.de/esg-search/search/?offset=0&limit=1000&type=Dataset&replica=false&latest=true&project=CORDEX&domain=EUR-11&experiment=rcp85&time_frequency=day&facets=rcm_name%2Cproject%2Cproduct%2Cdomain%2Cinstitute%2Cdriving_model%2Cexperiment%2Cexperiment_family%2Censemble%2Crcm_version%2Ctime_frequency%2Cvariable%2Cvariable_long_name%2Ccf_standard_name%2Cdata_node&format=application%2Fsolr%2Bjson')
# The names you want to find in the nested returned data
look_for <- c('variable', 'variable_long_name',
              'rcm_name', 'driving_model',
              'cf_standard_name')
# jsonlite simplifies j$response$docs to a data.frame, so [[i]]
# extracts the column named i; unlist() flattens its entries
new_df <- as.data.frame(sapply(look_for, function(i){
  unlist(j$response$docs[[i]])
}))
str(new_df)
'data.frame': 832 obs. of 5 variables:
$ variable : chr "clh" "clivi" "rsds" "rlds" ...
$ variable_long_name: chr "High Level Cloud Fraction" "Ice Water Path" "Surface Downwelling Shortwave Radiation" "Surface Downwelling Longwave Radiation" ...
$ rcm_name : chr "HIRHAM5" "HIRHAM5" "HIRHAM5" "HIRHAM5" ...
$ driving_model : chr "ICHEC-EC-EARTH" "ICHEC-EC-EARTH" "ICHEC-EC-EARTH" "ICHEC-EC-EARTH" ...
$ cf_standard_name : chr "cloud_area_fraction_in_atmosphere_layer" "atmosphere_cloud_ice_content" "surface_downwelling_shortwave_flux_in_air" "surface_downwelling_longwave_flux_in_air" ...
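The question also asked for all columns, not just a selected few. Since jsonlite simplifies j$response$docs to a data.frame in which multi-valued fields become list-columns, one hypothetical extension (untested sketch) is to collapse every list-column to a comma-separated string:
# Untested sketch: keep every column, collapsing list-columns
# (multi-valued fields) to comma-separated strings.
docs <- j$response$docs
all_df <- as.data.frame(
  lapply(docs, function(col) {
    if (is.list(col)) sapply(col, paste, collapse = ",") else col
  }),
  stringsAsFactors = FALSE
)
str(all_df)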