I have a .csv dataset of RFID data in which a second person is interacted with:
tag_me is the tag of person 1, tag_them is the tag of the person encountered during that second, and time_local_s is the time (in seconds) at which the interaction took place. The RFID reader started recording at 19:00:00, so the first interaction was recorded at 19:22:36 (19:00:00 + 1356 seconds).
tag_me,tag_them,time_local_s
0x597E5627,0x3C992634,1356
0x597E5627,0x3C992634,1360
0x597E5627,0x3C992634,1361
0x597E5627,0x3C992634,1362
0x597E5627,0x3C992634,1363
0x597E5627,0x7DA8FFB0,1364
0x597E5627,0x3C992634,1365
0x597E5627,0x3C992634,1365
0x597E5627,0x3C992634,1366
0x597E5627,0x7DA8FFB0,1366
0x597E5627,0x36570942,1366
0x597E5627,0x3C3A21AD,1369
0x597E5627,0x06497CA4,1370
0x597E5627,0x06497CA4,1372
0x597E5627,0x06497CA4,1372
0x597E5627,0x06497CA4,1374
0x597E5627,0x06497CA4,1374
0x597E5627,0x064F5882,1379
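As a sanity check on that offset arithmetic, here is a minimal sketch; the 19:00:00 start time is hard-coded from the description above, nothing else is assumed:

```python
from datetime import datetime, timedelta

# Recording started at 19:00:00; time_local_s counts seconds since then.
start = datetime.strptime("19:00:00", "%H:%M:%S")

def clock_time(time_local_s: int) -> str:
    """Convert a time_local_s offset into a wall-clock HH:MM:SS string."""
    return (start + timedelta(seconds=time_local_s)).strftime("%H:%M:%S")

print(clock_time(1356))  # → 19:22:36, the first recorded interaction
```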
I want to group each interaction onto a single row, recording when the interaction started, when it ended, and how long it lasted. That way I can filter on a threshold (two RFID tags seeing each other for only 2 seconds is of course not a real interaction).
tag_me,tag_them,time_start,time_end,total_time
0x597E5627,0x3C992634,1356,1363,7
0x597E5627,0x7DA8FFB0,1364,1364,1
0x597E5627,0x3C992634,1365,1366,1
0x597E5627,0x7DA8FFB0,1366,1366,1
0x597E5627,0x36570942,1366,1366,1
0x597E5627,0x3C3A21AD,1369,1369,1
0x597E5627,0x06497CA4,1370,1374,4
0x597E5627,0x064F5882,1379,1379,1
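To make the target format concrete, here is a pure-Python sketch of the grouping I'm after, using itertools.groupby on a few inlined sample rows (note: computing the duration as end minus start gives 0 for single-second contacts, where my table above shows 1):

```python
from itertools import groupby

# A few sample rows as (tag_me, tag_them, time_local_s) tuples.
rows = [
    ("0x597E5627", "0x3C992634", 1356),
    ("0x597E5627", "0x3C992634", 1360),
    ("0x597E5627", "0x3C992634", 1363),
    ("0x597E5627", "0x7DA8FFB0", 1364),
    ("0x597E5627", "0x3C992634", 1365),
]

def merge_interactions(rows):
    """Collapse consecutive rows with the same (tag_me, tag_them) into one interaction."""
    merged = []
    for (me, them), grp in groupby(rows, key=lambda r: (r[0], r[1])):
        times = [t for _, _, t in grp]
        merged.append((me, them, min(times), max(times), max(times) - min(times)))
    return merged

for row in merge_interactions(rows):
    print(row)
```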
This is what I've tried so far:
data = []
with open('timemerger.csv') as f:
    for line in f:
        data.append(line)

past_interactions = []
interactions = []
now = -1
new_data = []
for line in enumerate(data):
    if line["time_local_s"] > now:
        for tag_them, indices in past_interactions:
            if tag_them not in data:
                interactions.append(entry["tag_them"])
--------------- EDIT 7-5-2018 ----------
import pandas as pd

df = pd.read_csv('filter20seconden1.csv')
cols = df.columns.difference(['time_start', 'time_end'])
grps = df.time_start.sub(df.time_end.shift()).gt(20).cumsum()
gpby = df.groupby(grps)
new = gpby.agg(dict(time_start='min',
                    time_end='max')).join(gpby[cols].sum())
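For reference, what the gt(20).cumsum() line is doing can be seen on a toy frame: the gap between one row's time_start and the previous row's time_end starts a new group number whenever it exceeds 20 seconds (the column names match my file; the values here are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "time_start": [1356, 1365, 1400],
    "time_end":   [1363, 1366, 1410],
})

# Gap between this interaction's start and the previous one's end;
# a gap above 20 s increments the group number via cumsum.
gap = df["time_start"].sub(df["time_end"].shift())
grps = gap.gt(20).cumsum()
print(grps.tolist())  # → [0, 0, 1]: the first two rows merge, the third does not
```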
Answer 0 (score: 1)
Since you mentioned in the comments that you don't mind using pandas, here is a solution. It's a bit long and there may well be a more efficient way, but I think it works:
import pandas as pd

# Read in your csv
df = pd.read_csv('timemerger.csv')

# Create a new column with an "interaction number"
df = df.assign(interaction_num=(df.tag_them != df.tag_them.shift()).cumsum())

# Groupby the interaction number, and extract the min and max times:
gb = (df.groupby('interaction_num')
        .apply(lambda x: pd.Series([x['time_local_s'].min(),
                                    x['time_local_s'].max()]))
        .rename(columns={0: 'time_start', 1: 'time_end'}))

# Merge the min and max times per interaction number with your original dataframe:
df = df.merge(gb, left_on='interaction_num', right_index=True)

# Create a new column for length of time, groupby interaction again, and take first value:
df = (df.assign(total_time=df.time_end - df.time_start)
        .groupby('interaction_num')
        .first()
        .drop('time_local_s', axis=1))

# Finally, save your dataframe:
df.to_csv('output.csv', index=None)
Your new output.csv will look like this:
tag_me,tag_them,time_start,time_end,total_time
0x597E5627,0x3C992634,1356,1363,7
0x597E5627,0x7DA8FFB0,1364,1364,0
0x597E5627,0x3C992634,1365,1366,1
0x597E5627,0x7DA8FFB0,1366,1366,0
0x597E5627,0x36570942,1366,1366,0
0x597E5627,0x3C3A21AD,1369,1369,0
0x597E5627,0x06497CA4,1370,1374,4
0x597E5627,0x064F5882,1379,1379,0
Note that there is a zero where an interaction starts and ends in the same second, whereas your desired result shows 1 there. That is easy to change with df.replace({'total_time': {0: 1}}, inplace=True) before the to_csv call (I left it as is because I think your data would otherwise lose the distinction between a 0-second and a 1-second interaction).
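If you would rather count durations inclusively, so that a contact seen during only one second lasts 1 s, the alternative to the replace is adding 1 across the board; a quick sketch on invented values:

```python
import pandas as pd

df = pd.DataFrame({"time_start": [1356, 1364], "time_end": [1363, 1364]})

# Inclusive duration: a contact seen only at one second counts as 1, not 0.
df["total_time"] = df["time_end"] - df["time_start"] + 1
print(df["total_time"].tolist())  # → [8, 1]
```

Note that this also turns the first interaction's 7 into 8, which is exactly the inconsistency in the desired table, so pick one convention and apply it everywhere.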
Breakdown:
The first assign() with .shift() creates a column numbering the separate interactions:
  tag_me      tag_them    time_local_s  interaction_num
...
3 0x597E5627  0x3C992634  1362          1
4 0x597E5627  0x3C992634  1363          1
5 0x597E5627  0x7DA8FFB0  1364          2
6 0x597E5627  0x3C992634  1365          3
7 0x597E5627  0x3C992634  1365          3
...
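The change-detection trick itself is easiest to see on a bare Series (the letters stand in for tag_them values):

```python
import pandas as pd

s = pd.Series(["A", "A", "B", "A", "A"])

# True wherever the value differs from the previous row (the first row
# always differs from the shifted NaN); cumsum turns those change points
# into a running group number.
interaction_num = (s != s.shift()).cumsum()
print(interaction_num.tolist())  # → [1, 1, 2, 3, 3]
```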
Then, the .groupby and lambda function get the min and max times of each interaction, renamed to time_start and time_end:

                 time_start  time_end
interaction_num
1                      1356      1363
2                      1364      1364
3                      1365      1366
4                      1366      1366
...
Then you merge the result of the groupby with the original dataframe, matching interaction_num against the index, which produces:
...
3  0x597E5627  0x3C992634  1362  1  1356  1363
4  0x597E5627  0x3C992634  1363  1  1356  1363
5  0x597E5627  0x7DA8FFB0  1364  2  1364  1364
6  0x597E5627  0x3C992634  1365  3  1365  1366
7  0x597E5627  0x3C992634  1365  3  1365  1366
...
Finally, you use assign and groupby again to create the time-difference column, take the first row per interaction, and drop the now-unneeded time_local_s column, giving the final dataframe.
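As an aside, the apply(lambda ...) step in the breakdown can also be written with named aggregation, which avoids building a Series per group; a sketch on invented values (named aggregation requires pandas >= 0.25):

```python
import pandas as pd

# Toy frame with the same column names as above (values invented).
df = pd.DataFrame({
    "tag_me": ["0x597E5627"] * 4,
    "tag_them": ["0x3C992634", "0x3C992634", "0x7DA8FFB0", "0x3C992634"],
    "time_local_s": [1356, 1363, 1364, 1365],
})
df["interaction_num"] = (df.tag_them != df.tag_them.shift()).cumsum()

# Named aggregation replaces the apply(lambda ...) step in one call.
gb = (df.groupby("interaction_num")["time_local_s"]
        .agg(time_start="min", time_end="max"))
print(gb)
```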
Answer 1 (score: 0)
Try using dataframe shift and groupby as below:
import pandas as pd

df = pd.read_csv('timemerger.csv')
# Start a new group each time tag_them changes from the previous row
g = (df['tag_them'] != df['tag_them'].shift().fillna(method='bfill')).cumsum().rename('group')
print(df.groupby(['tag_them', 'tag_me', g])['time_local_s']
        .agg(['min', 'max'])
        .reset_index()
        .rename(columns={'min': 'time_start', 'max': 'time_end'})
        .drop('group', axis=1))