My problem is splitting a dataframe into multiple dataframes. The original dataframe is shown in [FIGURE_1]. It should be split at a certain value, e.g. NaN [FIGURE_2].
My full dataframe has more than a million rows and 16 columns, so I need a performance-optimized solution.
I need this split urgently so that I can process the parts afterwards.
FIGURE_1 Current dataframe
PacketID TraceTime Size
0 0.3948 -- --
1 0.3949 01.01.1970 00:12:39.298 77
2 0.3950 01.01.1970 00:12:39.298 80
3 0.3951 01.01.1970 00:12:39.315 81
4 0.3952 01.01.1970 00:12:39.335 78
5 0.3953 01.01.1970 00:12:39.335 71
. . . . .
. . . . .
395926 7.11074 01.01.1970 00:48:42.829 1666
395927 7.11075 01.01.1970 00:48:42.829 57
395928 7.11076 01.01.1970 00:48:42.851 57
395929 #----- END: log_0000.log: session #0
395930 #----- BEGIN: log_0000.log: session #1
395931 PacketID TraceTime Size
395932 7.14891 -- --
395933 7.14892 01.01.1970 00:00:19.313 80
395934 7.14893 01.01.1970 00:00:19.313 61
. . . . .
. . . . .
753533 13.19876 01.01.1970 00:31:56.374 60
753534 13.19877 01.01.1970 00:31:56.380 57
753535 13.19878 01.01.1970 00:31:56.380 57
753536 #----- END: log_0000.log: session #1
753537 #----- BEGIN: log_0000.log: session #2
753538 PacketID TraceTime Size
753539 13.23802 -- --
753540 13.23803 01.01.1970 00:00:48.777 17
753541 13.23804 01.01.1970 00:00:48.802 1
and so on...
FIGURE_2 Desired dataframes
df_1 =
PacketID TraceTime Size
0 0.3948 -- --
1 0.3949 01.01.1970 00:12:39.298 77
2 0.3950 01.01.1970 00:12:39.298 80
. . . . .
. . . . .
395919 7.11067 01.01.1970 00:48:42.602 38
395920 7.11068 01.01.1970 00:48:42.602 54
395921 7.11069 01.01.1970 00:48:42.602 38
395922 7.11070 01.01.1970 00:48:42.629 57
df_2 =
395931 PacketID TraceTime Size
395932 7.14891 -- --
395933 7.14892 01.01.1970 00:00:19.313 80
395934 7.14893 01.01.1970 00:00:19.313 61
395935 7.14894 01.01.1970 00:00:19.313 110
. . . . .
. . . . .
753532 13.19875 01.01.1970 00:31:56.374 63
753533 13.19876 01.01.1970 00:31:56.374 60
753534 13.19877 01.01.1970 00:31:56.380 57
753535 13.19878 01.01.1970 00:31:56.380 57
df_3 =
753538 PacketID TraceTime Size
753539 13.23802 -- --
753540 13.23803 01.01.1970 00:00:48.777 17
753541 13.23804 01.01.1970 00:00:48.802 1
and so on...
I already have one option [FIGURE_3], but it is deprecated and will be removed in a future version.
FIGURE_3 Current (deprecated) approach
Python:
dense_ts = df['TraceTime']
sparse_ts = dense_ts.to_sparse()
block_locs = zip(sparse_ts.sp_index.blocs, sparse_ts.sp_index.blengths)
blocks = [dense_ts.iloc[start:(start + length - 1)] for (start, length) in block_locs]
Warning:
C:\Users\andre\Anaconda3\lib\site-packages\ipykernel_launcher.py:15: FutureWarning: Series.to_sparse is deprecated and will be removed in a future version
  from ipykernel import kernelapp as app
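Since Series.to_sparse is going away, the same block boundaries can be recovered with plain boolean masks. The sketch below is only an assumption about the intended behaviour: like the deprecated snippet, it splits df['TraceTime'] into contiguous runs of non-NaN values, with df assumed to be the parsed dataframe from FIGURE_1.
Python:
dense_ts = df['TraceTime']
notna = dense_ts.notna()
# a new run starts wherever the mask flips from NaN to non-NaN
run_id = (notna & ~notna.shift(fill_value=False)).cumsum()
# keep only the non-NaN rows and collect one Series per contiguous run
blocks = [ts for _, ts in dense_ts[notna].groupby(run_id[notna])]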
Answer 0 (score: 0)
If you need to split the dataframe into groups wherever a row contains only NaN values, try the following approach:
# create a group id that increases by one at every all-NaN separator row
df['group'] = df.isnull().all(axis=1).cumsum()
# use a dictionary comprehension together with loc to select each group;
# start the range at 0 instead of 1 if the rows before the first all-NaN
# separator should be kept as their own group
d = {i: df.loc[df['group'] == i, ['PacketID', 'TraceTime', 'Size']]
     for i in range(1, df['group'].iat[-1] + 1)}
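For a frame with over a million rows, a single groupby pass may also be worth trying, since it produces every sub-dataframe at once instead of re-scanning the group column for each id. This is just a sketch that reuses the group column created above and the same three columns:
Python:
cols = ['PacketID', 'TraceTime', 'Size']
# one pass over df: each group id maps to its own sub-dataframe
dfs = {i: g[cols] for i, g in df.groupby('group')}
# dfs[i] holds the rows of group i; pick the ids that correspond
# to df_1, df_2, ... from FIGURE_2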