I'm building a tool to help automate a weekly review of data from several laboratory setups. A tab-delimited text file is generated each day. Each row represents data acquired every 2 seconds, so there are 43,200 rows and many columns (each file is 75 MB).
I'm loading seven of the text files using pandas.read_csv and extracting only the three columns I need into a pandas dataframe. This is slower than I'd like, but acceptable. I then plot the data with offline Plotly to view an interactive chart. This is a scheduled task set to run once a week.
The data is plotted against date and time. The test setups frequently go offline for a while, leaving gaps in the data. Unfortunately, when the chart is plotted, all the data points are connected by lines, even when the test was offline for hours or days.
The only way to prevent this is to insert a row between the two real data points whose timestamp falls in the gap and whose data columns are all NaN. I've implemented this easily for missing data files, but I'd like to generalize it to any gap in the data larger than a given time period. I came up with a solution that seems to work, but it's really slow:
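For reference, a minimal sketch of the loading step (file contents, column names, and the `io.StringIO` stand-in for a real file path are all made up for illustration). Passing usecols to pandas.read_csv makes it parse only the needed columns, which cuts both time and memory on wide files like these:

```python
import io
import pandas as pd

# Simulated tab-separated file contents; in the real script this would be
# one of the seven daily file paths, and the column names would differ.
raw = (
    "datetime\ttemp\tpressure\tflow\textra\n"
    "2018-11-06 00:00:00\t1.0\t2.0\t3.0\t9\n"
    "2018-11-06 00:00:02\t1.1\t2.1\t3.1\t9\n"
)

# usecols restricts parsing to the three data columns plus the timestamp;
# parse_dates avoids a separate to_datetime pass afterwards.
df = pd.read_csv(
    io.StringIO(raw),
    sep="\t",
    usecols=["datetime", "temp", "pressure", "flow"],
    parse_dates=["datetime"],
)
print(df.shape)  # (2, 4)
```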
# alldata is a pandas dataframe with 302,000 rows and 4 columns:
# one datetime column and three float32 columns
alldata_gaps = pandas.DataFrame()  # new dataframe with gaps in it

# iterate over all rows. If the datetime difference between
# two consecutive rows is more than one minute, insert a gap row.
for i in range(len(alldata) - 1):
    alldata_gaps = alldata_gaps.append(alldata.iloc[i])
    if alldata.iloc[i + 1, 0] - alldata.iloc[i, 0] > datetime.timedelta(minutes=1):
        Series = pandas.Series({'datetime': alldata.iloc[i, 0]
                                + datetime.timedelta(seconds=3)})
        alldata_gaps = alldata_gaps.append(Series)
        print(Series)
alldata_gaps = alldata_gaps.append(alldata.iloc[-1])  # keep the last row
Does anyone have a suggestion for how I can speed up this operation so it doesn't take so long?
Here's a dropbox link to an example data file with only 100 lines
Here's a link to my current script without adding the gap rows
Answer 0 (score: 2)
My general idea is the same as in jpp's answer: rather than iterating over the dataframe (which is slow for the amount of data you have), you should identify only the rows of interest and work with them. The main differences are: 1) turning multiple columns to NA, and 2) adjusting the NA rows' timestamps to fall halfway between the surrounding times.
I've added explanations throughout as comments...
# after you read in your data, make sure the time column is actually a datetime
df['datetime'] = pd.to_datetime(df['datetime'])
# calculate the (time) difference between a row and the previous row
df['time_diff'] = df['datetime'].diff()
# create a subset of your df where the time difference is greater than
# some threshold. This will be a dataframe of your empty/NA rows.
# I've set a 2 second threshold here because of the sample data you provided,
# but could be any number of seconds
empty = df[df['time_diff'].dt.total_seconds() > 2].copy()
# calculate the correct timestamp for each NA row
# (halfway into the gap that precedes it)
empty['datetime'] = empty['datetime'] - (empty['time_diff'] / 2)
# set all the columns to NA apart from the datetime column
empty.loc[:, ~empty.columns.isin(['datetime'])] = np.nan
# append this NA/empty dataframe to your original data, and sort by time
df = df.append(empty, ignore_index=True)
df = df.sort_values('datetime').reset_index(drop=True)
# optionally, remove the time_diff column we created at the beginning
df.drop('time_diff', inplace=True, axis=1)
That will give you something like this:
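A minimal, self-contained run of the steps above, using made-up sample timestamps and the same 2-second threshold as the comments (pd.concat stands in for append, which was removed in pandas 2.0):

```python
import numpy as np
import pandas as pd

# Hypothetical sample: readings every 2 seconds with one 8-second gap.
df = pd.DataFrame({
    "datetime": pd.to_datetime([
        "2018-11-06 00:00:00", "2018-11-06 00:00:02",
        "2018-11-06 00:00:10", "2018-11-06 00:00:12",
    ]),
    "value": [1.0, 2.0, 3.0, 4.0],
})

# time difference between each row and the previous row
df["time_diff"] = df["datetime"].diff()

# rows whose preceding gap exceeds the 2-second threshold
empty = df[df["time_diff"].dt.total_seconds() > 2].copy()

# place each NA row halfway into its gap, blank out the data columns
empty["datetime"] = empty["datetime"] - (empty["time_diff"] / 2)
empty.loc[:, ~empty.columns.isin(["datetime"])] = np.nan

# merge back, sort by time, and drop the helper column
df = pd.concat([df, empty], ignore_index=True)
df = df.sort_values("datetime").reset_index(drop=True)
df = df.drop("time_diff", axis=1)
print(df)
```

The 8-second gap between 00:00:02 and 00:00:10 gets a single NaN row at 00:00:06, which is what breaks the connecting line in Plotly.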
Answer 1 (score: 1)
Almost certainly, your bottleneck comes from pd.DataFrame.append:
alldata_gaps = alldata_gaps.append(alldata.iloc[i])
alldata_gaps = alldata_gaps.append(Series)
As an aside, you have confusingly given a variable the same name as the pandas object pd.Series. It's good practice to avoid that kind of ambiguity.
A much more efficient solution is to identify the gap rows with vectorized Boolean indexing, construct the extra rows in one step, and append only once.
So let's take a stab at it with an example dataframe:
# example dataframe setup
df = pd.DataFrame({'Date': ['00:10:15', '00:15:20', '00:15:40', '00:16:50', '00:17:55',
                            '00:19:00', '00:19:10', '00:19:15', '00:19:55', '00:20:58'],
                   'Value': list(range(10))})
df['Date'] = pd.to_datetime('2018-11-06-' + df['Date'])
# find gaps greater than 1 minute
bools = (df['Date'].diff().dt.seconds > 60).shift(-1).fillna(False)
idx = bools[bools].index
# Int64Index([0, 2, 3, 4, 8], dtype='int64')
# construct dataframe to append
df_extra = df.loc[idx].copy().assign(Value=np.nan)
# add 3 seconds
df_extra['Date'] = df_extra['Date'] + pd.to_timedelta('3 seconds')
# append to original
res = df.append(df_extra).sort_values('Date')
Result:
print(res)
Date Value
0 2018-11-06 00:10:15 0.0
0 2018-11-06 00:10:18 NaN
1 2018-11-06 00:15:20 1.0
2 2018-11-06 00:15:40 2.0
2 2018-11-06 00:15:43 NaN
3 2018-11-06 00:16:50 3.0
3 2018-11-06 00:16:53 NaN
4 2018-11-06 00:17:55 4.0
4 2018-11-06 00:17:58 NaN
5 2018-11-06 00:19:00 5.0
6 2018-11-06 00:19:10 6.0
7 2018-11-06 00:19:15 7.0
8 2018-11-06 00:19:55 8.0
8 2018-11-06 00:19:58 NaN
9 2018-11-06 00:20:58 9.0