I use R and RStudio to analyse GTFS public-transport feeds and create timetable span plots with ggplot2. The code currently works, but it is slow, which becomes a problem with very large CSVs, and that is usually the case.
The slowest part of the code is the following (with some context): a for loop that iterates over the data frame, subsetting each unique trip into a temporary data frame, from which the extreme arrival and departure values (the first and last rows) are extracted:
# Creates an empty df to contain trip_id, trip start and trip end times
Trip_Times <- data.frame(Trip_ID = character(), Departure = character(), Arrival = character(), stringsAsFactors = FALSE)
# Creates a vector containing all trips of the analysed day
unique_trips <- unique(stop_times$trip_id)
# Iterates through stop_times for each unique trip_id and populates previously created data frame
for (i in seq_along(unique_trips)) {
  temp_df <- subset(stop_times, trip_id == unique_trips[i])
  Trip_Times[nrow(Trip_Times) + 1, ] <- c(temp_df$trip_id[[1]], temp_df$departure_time[[1]], temp_df$arrival_time[[nrow(temp_df)]])
}
The stop_times df looks like this; some feeds with more than 2.5 million rows yield around 200k unique trips, hence 200k loop iterations...
head(stop_times)
trip_id arrival_time departure_time stop_sequence
1 011_0840101_A14 7:15:00 7:15:00 1
2 011_0840101_A14 7:16:00 7:16:00 2
3 011_0840101_A14 7:17:00 7:17:00 3
4 011_0840101_A14 7:18:00 7:18:00 4
5 011_0840101_A14 7:19:00 7:19:00 5
6 011_0840101_A14 7:20:00 7:20:00 6
Would anyone be able to tell me how I can optimise this code to get faster results? I don't believe apply can be used here, but I may be wrong.
Answer 0 (score: 2)
Use dplyr:
...
library(dplyr)

Trip_Times <- stop_times %>%
  group_by(trip_id) %>%
  summarise(departure_time = first(departure_time),
            arrival_time = last(arrival_time))
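One caveat worth noting: `first()` and `last()` operate in row order, so this answer assumes `stop_times` is already sorted by `stop_sequence` within each trip (as in the sample data). If the feed does not guarantee that, a sketch with an explicit sort guards against it (the `arrange()` step is an addition, not part of the original answer):

```r
library(dplyr)

# Sketch: same summary, but sorting by stop_sequence within each trip first,
# in case the feed rows are not already in stop order.
Trip_Times <- stop_times %>%
  arrange(trip_id, stop_sequence) %>%
  group_by(trip_id) %>%
  summarise(departure_time = first(departure_time),
            arrival_time = last(arrival_time))
```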
Answer 1 (score: 0)
We can use data.table.
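The answer body is truncated, but a minimal data.table sketch of the same per-trip aggregation would look like this (assuming, as with the dplyr answer, that rows are ordered by `stop_sequence` within each trip):

```r
library(data.table)

# Convert the data frame to a data.table in place.
setDT(stop_times)

# For each trip, take the first departure and the last arrival;
# .N is the number of rows in the current group.
Trip_Times <- stop_times[, .(departure_time = departure_time[1],
                             arrival_time = arrival_time[.N]),
                         by = trip_id]
```

Grouped indexing like this avoids both the repeated `subset()` scans and the row-by-row growth of `Trip_Times`, which are the two main costs of the original loop.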