I have a dataframe with the following structure:
event_timestamp message_number an_robot check
2015-04-15 12:09:39 10125 robot_7 False
2015-04-15 12:09:41 10053 robot_4 True
2015-04-15 12:09:44 10156_ad robot_7 True
2015-04-15 12:09:47 20205 robot_108 False
2015-04-15 12:09:51 10010 robot_38 True
2015-04-15 12:09:54 10012 robot_65 True
2015-04-15 12:09:59 10011 robot_39 True
2015-04-15 12:10:01 87954 robot_2 False
......etc
The check column indicates whether a row should be merged, and the merge should work like this:
event_timestamp: first
message_number: combine (e.g., 10053,10156)
an_robot: combine (e.g., robot_4, robot_7)
check: can be removed after the operation.
So far I have managed to use groupby to get the correct values for the True and False values in the check column:
df.groupby(by='check').agg({'event_timestamp':'first',
                            'message_number':lambda x: ','.join(x),
                            'an_robot':lambda x: ','.join(x)}).reset_index()
Output:
check event_timestamp message_number an_robot
0 False 2015-04-15 12:09:39 10125,10053,..,87954 robot_7,robot_4, ... etc
1 True 2015-04-15 12:09:51 10010,10012 robot_38,robot_65
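For reference, a common trick (a minimal sketch, not from the original question) is to group on consecutive runs of the check value instead of on the value itself, so each run of True rows stays a separate group:

# sketch only: the run id increases every time the check value flips,
# so consecutive rows with the same check value share one group
run_id = (df['check'] != df['check'].shift()).cumsum()
merged = (df.groupby(run_id)
            .agg({'event_timestamp': 'first',
                  'message_number': ','.join,
                  'an_robot': ','.join})
            .reset_index(drop=True))

This does not yet take the rule sequences from the edit further down into account.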
However, the ideal end result is shown below: the 10053 and 10156_ad rows are merged, and the 10010, 10012 and 10011 rows are merged. In the full dataframe the maximum length of a sequence is 5. I have a separate dataframe that contains these rules (for example the 10010, 10012, 10011 rule).
event_timestamp message_number an_robot
2015-04-15 12:09:39 10125 robot_7
2015-04-15 12:09:41 10053,10156_ad robot_4,robot_7
2015-04-15 12:09:47 20205 robot_108
2015-04-15 12:09:51 10010,10012,10011 robot_38,robot_65,robot_39
2015-04-15 12:10:01 87954 robot_2
How can I achieve this?
-- EDIT --
The dataframe with the separate rules looks like this:
sequence support
10053,10156,20205 0.94783
10010,10012 0.93322
10010,10033 0.93211
10053,10032 0.92222
etc....
The code that determines when the check column is True or False:
import numpy as np
import pandas as pd

def find_drops(seq, df):
    if seq:
        # rows where the full sequence starts
        m = np.logical_and.reduce([df.message_number.shift(-i).eq(seq[i]) for i in range(len(seq))])
        if len(seq) == 1:
            return pd.Series(m, index=df.index)
        else:
            # extend each match forward so every row of the sequence is flagged
            return pd.Series(m, index=df.index).replace({False: np.NaN}).ffill(limit=len(seq)-1).fillna(False)
    else:
        return pd.Series(False, index=df.index)
If I then run df['check'] = find_drops(['10010', '10012', '10011'], df), I get a True check column for those rows. It would be great if this code could be run for every rule in the rules dataframe and the result then merged with the code above.
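A minimal sketch of that idea (assuming the rules dataframe is called df1, as in the code below): run find_drops once per rule and OR the boolean masks together, so a row is True if it belongs to any rule's sequence.

# sketch only, not the final solution: builds the check column for all rules,
# but does not yet group the rows per matched rule
patterns = df1['sequence'].str.split(',')
df['check'] = np.logical_or.reduce([find_drops(seq, df) for seq in patterns])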
-- NEW CODE 4-17-2019 --
import io
import numpy as np
import pandas as pd

df = """event_timestamp|message_number|an_robot
2015-04-15 12:09:39|10125|robot_7
2015-04-15 12:09:41|10053|robot_4
2015-04-15 12:09:44|10156_ad|robot_7
2015-04-15 12:09:47|20205|robot_108
2015-04-15 12:09:48|45689|robot_23
2015-04-15 12:09:51|10010|robot_38
2015-04-15 12:09:54|10012|robot_65
2015-04-15 12:09:58|98765|robot_99
2015-04-15 12:09:59|10011|robot_39
2015-04-15 12:10:01|87954|robot_2"""
df = pd.read_csv(io.StringIO(df), sep='|')

df1 = """sequence|support
10053,10156_ad,20205|0.94783
10010,10012|0.93322
10011,87954|0.92222
"""
df1 = pd.read_csv(io.StringIO(df1), sep='|')

patterns = df1['sequence'].str.split(',')
used_idx = []
c = ['event_timestamp','message_number','an_robot']

def find_drops(seq):
    if seq:
        # rows where the full sequence starts
        m = np.logical_and.reduce([df.message_number.shift(-i).eq(seq[i]) for i in range(len(seq))])
        if len(seq) == 1:
            df2 = df.loc[m, c].assign(g = df.index[m])
            used_idx.extend(df2.index.tolist())
            return df2
        else:
            # flag every row of the sequence, then use the start index as group id
            m1 = (pd.Series(m, index=df.index).replace({False: np.NaN})
                    .ffill(limit=len(seq)-1)
                    .fillna(False))
            df2 = df.loc[m1, c]
            used_idx.extend(df2.index.tolist())
            df2['g'] = np.where(df2.index.isin(df.index[m]), df2.index, np.nan)
            return df2

out = (pd.concat([find_drops(x) for x in patterns])
         .assign(g = lambda x: x['g'].ffill())
         .groupby(by=['g']).agg({'event_timestamp':'first',
                                 'message_number':','.join,
                                 'an_robot':','.join})
         .reset_index(drop=True))

# all rows not used by any pattern are combined into a single aggregated row
c = ['event_timestamp','message_number','an_robot']
df2 = df[~df.index.isin(used_idx)]
df2 = pd.DataFrame([[df2['event_timestamp'].iat[0],
                     ','.join(df2['message_number']),
                     ','.join(df2['an_robot'])]], columns=c)

fin = pd.concat([out, df2], ignore_index=True)
fin.event_timestamp = pd.to_datetime(fin.event_timestamp)
fin = fin.sort_values('event_timestamp')
fin
The output is:
event_timestamp message_number an_robot
2015-04-15 12:09:39 10125,45689,98765,12345 robot_7,robot_23,robot_99
2015-04-15 12:09:41 10053,10156_ad,20205 robot_4,robot_7,robot_108
2015-04-15 12:09:51 10010,10012 robot_38,robot_65
2015-04-15 12:09:59 10011,87954 robot_39,robot_2
It should be:
event_timestamp message_number an_robot
2015-04-15 12:09:39 10125 robot_7
2015-04-15 12:09:41 10053,10156_ad,20205 robot_4,robot_7,robot_108
2015-04-15 12:09:48 45689 robot_23
2015-04-15 12:09:51 10010,10012 robot_38,robot_65
2015-04-15 12:09:58 98765 robot_99
2015-04-15 12:09:59 10011,87954 robot_39,robot_2
2015-04-15 12:10:03 12345 robot_1
Answer 0 (score: 1)
You could classify the message numbers before grouping them. It is probably best to put the classification rules in a dataframe, one class per number:
class_df = pd.DataFrame(data={'message_number': ['10010', '10012', '10011', '10053', '10156_ad'],
                              'class': ['a', 'a', 'a', 'b', 'b']})
Then you can merge them:
results = pd.merge(df, class_df, on=['message_number'], how='left')
Then you can group by check and class:
results.groupby(by=['check', 'class']).agg({'event_timestamp':'first',
                                            'message_number':lambda x: ','.join(x),
                                            'an_robot':lambda x: ','.join(x)}).reset_index()
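A runnable sketch of this approach (assuming df is the first version with the check column; the fillna step is an addition, not part of the original answer, and gives every unclassified message number its own group key so those rows are not dropped or lumped together):

import pandas as pd

results = pd.merge(df, class_df, on=['message_number'], how='left')
# assumption: unclassified rows get a unique key so each stays its own group
results['class'] = results['class'].fillna(results.index.to_series().astype(str))
merged = (results.groupby(by=['check', 'class'], sort=False)
                 .agg({'event_timestamp': 'first',
                       'message_number': lambda x: ','.join(x),
                       'an_robot': lambda x: ','.join(x)})
                 .reset_index())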
Answer 1 (score: 1)
The question has become more complex, so the answer has been reworked comprehensively.
The first step is preprocessing - keep only the values that occur in the rule sequences, using Series.isin and boolean indexing:
patterns = df1['sequence'].str.split(',')
print (patterns)
#flatten lists to sets
flatten = set([y for x in patterns for y in x])
#print (flatten)
df1 = df[df['message_number'].isin(flatten)]
#print (df1)
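To make the effect concrete (an illustration, not output from the original answer): once rows whose message number does not occur in any rule are filtered out, the members of a rule become adjacent again, which is what the shift and rolling-window comparisons below rely on:

# hypothetical mini example: an unrelated message sits between two rule members
s = pd.Series(['10010', '98765', '10012'])
print(s[s.isin({'10010', '10012'})].tolist())   # ['10010', '10012'] - adjacent after filtering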
The first solution is adapted from this answer - a groupby is added for sequences longer than 1, the function is called for each pattern, and the pieces are joined together with concat at the end:
def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    c = np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
    return c
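For illustration (not part of the original answer), rolling_window turns a 1-D array into a view of all overlapping windows of a given length, which is what the sequence comparison in agg_pattern relies on:

arr = np.array([1, 2, 3, 4, 5])
print(rolling_window(arr, 3))
# [[1 2 3]
#  [2 3 4]
#  [3 4 5]]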
used_idx = []

def agg_pattern(seq):
    if seq:
        N = len(seq)
        arr = df1['message_number'].values
        b = np.all(rolling_window(arr, N) == seq, axis=1)
        c = np.mgrid[0:len(b)][b]
        d = [i for x in c for i in range(x, x+N)]
        used_idx.extend(df1.index.values[d])
        m = np.in1d(np.arange(len(arr)), d)
        di = {'event_timestamp':'first','message_number':','.join, 'an_robot':','.join}
        if len(seq) == 1:
            return df1.loc[m, ['event_timestamp','message_number','an_robot']]
        else:
            df2 = df1[m]
            return df2.groupby(np.arange(len(df2)) // N).agg(di)
out = pd.concat([agg_pattern(x) for x in patterns], ignore_index=True)
Your solution should be changed to create a helper column g, which is used for grouping in the last step:
used_idx = []
c = ['event_timestamp','message_number','an_robot']

def find_drops(seq):
    if seq:
        m = np.logical_and.reduce([df1.message_number.shift(-i).eq(seq[i]) for i in range(len(seq))])
        if len(seq) == 1:
            df2 = df1.loc[m, c].assign(g = df1.index[m])
            used_idx.extend(df2.index.tolist())
            return df2
        else:
            m1 = (pd.Series(m, index=df1.index).replace({False: np.NaN})
                    .ffill(limit=len(seq)-1)
                    .fillna(False))
            df2 = df1.loc[m1, c]
            used_idx.extend(df2.index.tolist())
            df2['g'] = np.where(df2.index.isin(df1.index[m]), df2.index, np.nan)
            return df2
out = (pd.concat([find_drops(x) for x in patterns])
         .assign(g = lambda x: x['g'].ffill())
         .groupby(by=['g']).agg({'event_timestamp':'first',
                                 'message_number':','.join,
                                 'an_robot':','.join})
         .reset_index(drop=True))

print (used_idx)
Finally, create a new DataFrame from the rows that were not matched (the False values) and join it to the output:
print (out)
event_timestamp message_number an_robot
0 2015-04-15 12:09:41 10053,10156_ad,20205 robot_4,robot_7,robot_108
1 2015-04-15 12:09:51 10010,10012 robot_38,robot_65
2 2015-04-15 12:09:59 10011,87954 robot_39,robot_2
c = ['event_timestamp','message_number','an_robot']
df2 = pd.concat([out, df[~df.index.isin(used_idx)]]).sort_values('event_timestamp')
print(df2)
event_timestamp message_number an_robot
0 2015-04-15 12:09:39 10125 robot_7
0 2015-04-15 12:09:41 10053,10156_ad,20205 robot_4,robot_7,robot_108
4 2015-04-15 12:09:48 45689 robot_23
1 2015-04-15 12:09:51 10010,10012 robot_38,robot_65
7 2015-04-15 12:09:58 98765 robot_99
2 2015-04-15 12:09:59 10011,87954 robot_39,robot_2
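As an optional follow-up (not in the original answer), continuing from df2 above, the timestamp column can be converted back to datetime and the index reset, mirroring the post-processing in the question's own code:

# restore a datetime dtype and a clean 0..n index
df2['event_timestamp'] = pd.to_datetime(df2['event_timestamp'])
df2 = df2.reset_index(drop=True)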