Pandas: fill in missing values based on when the event occurred

Time: 2018-08-31 10:56:39

Tags: pandas function pandas-groupby missing-data

I have already asked a similar question (see here), but unfortunately it was not clear enough, so I decided it was better to create a new example with a better dataset and a new explanation of the desired output - an edit would have been a really major change. So, I have the dataset described below (it is already ordered by date and player).


These are my three columns:

  1. 'player' - dtype = object.
  2. 'id' (object): the session id. Each session id groups together a set of actions (i.e. rows of the dataset) that the player carried out online.
  3. 'date' (datetime object): tells us the time at which each action took place.

The problem with this dataset is that every action has a timestamp, but some actions are missing their session id. What I want to do is the following: for each player, I want to give the missing values an id label based on the timeline. An action with a missing id can be labelled if it falls within the time range (first action - last action) of a session.

OK, so here is my data with the missing values:

import numpy as np
import pandas as pd

d = {'player': ['1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '3', '3', '3', '3', '3', '3'],
'date': ['2018-01-01 00:17:01', '2018-01-01 00:17:05','2018-01-01 00:19:05', '2018-01-01 00:21:07', '2018-01-01 00:22:09', 
         '2018-01-01 00:22:17', '2018-01-01 00:25:09', '2018-01-01 00:25:11', '2018-01-01 00:27:28', '2018-01-01 00:29:29',
          '2018-01-01 00:30:35',  '2018-02-01 00:31:16', '2018-02-01 00:35:22', '2018-02-01 00:38:16', 
         '2018-02-01 00:38:20', '2018-02-01 00:55:15', '2018-01-03 00:55:22', 
         '2018-01-03 00:58:16', '2018-01-03 00:58:21', '2018-03-01 01:00:35', '2018-03-01 01:20:16', '2018-03-01 01:31:16'], 
'id': [np.nan, np.nan, 'a', 'a', 'b', np.nan, 'b', 'c', 'c', 'c', 'c', 'd', 'd', 'e', 'e', np.nan, 'f', 'f', 
       'g', np.nan, 'f', 'g']}

#create dataframe
df = pd.DataFrame(data=d)
#change date to datetime
df['date'] =  pd.to_datetime(df['date']) 
df

   player      date         id
0   1   2018-01-01 00:17:01 NaN
1   1   2018-01-01 00:17:05 NaN
2   1   2018-01-01 00:19:05 a
3   1   2018-01-01 00:21:07 a
4   1   2018-01-01 00:22:09 b
5   1   2018-01-01 00:22:17 NaN
6   1   2018-01-01 00:25:09 b
7   1   2018-01-01 00:25:11 c
8   1   2018-01-01 00:27:28 c
9   1   2018-01-01 00:29:29 c
10  1   2018-01-01 00:30:35 c
11  2   2018-02-01 00:31:16 d
12  2   2018-02-01 00:35:22 d
13  2   2018-02-01 00:38:16 e
14  2   2018-02-01 00:38:20 e
15  2   2018-02-01 00:55:15 NaN
16  3   2018-01-03 00:55:22 f
17  3   2018-01-03 00:58:16 f
18  3   2018-01-03 00:58:21 g
19  3   2018-03-01 01:00:35 NaN
20  3   2018-03-01 01:20:16 f
21  3   2018-03-01 01:31:16 g

Note that I have the player code for every action: what I am missing is only the session code. So I want to compare the timestamp of each missing value with the session timestamps of the corresponding player. I was thinking of computing, with a groupby, the first and the last action of each session for every player (but I do not know whether this is the best approach).

df.loc[df.id.isnull(),'date']
0     2018-01-01 00:17:01
1     2018-01-01 00:17:05
5     2018-01-01 00:22:07
15    2018-02-01 00:55:15
19    2018-03-01 01:00:35

Then I want to match the NaNs by player id and compare the timestamp of each missing value with the ranges of every session of that player.

In the dataset I tried to represent the three possible scenarios I am interested in:

  1. The action happens between the first and the last date of a certain session. In this case I want to fill the missing value with the id of that session, since it clearly belongs to it. Therefore row 5 of the dataset should be labelled 'b', because it falls within the range of b.
  2. I would label as '0' the actions that happen outside the range of any session, like the first two NaNs and row 15.
  3. Finally, I would label an action as '-99' when it is impossible to associate it with a single session, because it falls within the time ranges of different sessions. This is the case of row 19 (the last NaN).

Desired output: to sum up, the result should be the same dataframe, with every missing id replaced either by the id of the matching session, by '0', or by '-99', following the three cases above. As a first step, this is the groupby I mentioned, giving the first and the last action of each session for every player:

my_agg = df.groupby(['player', 'id']).date.agg([min, max])
my_agg
                  min                      max
player  id      
1       a   2018-01-01 00:19:05   2018-01-01 00:21:07
        b   2018-01-01 00:22:09   2018-01-01 00:25:09
        c   2018-01-01 00:25:11   2018-01-01 00:30:35
2       d   2018-02-01 00:31:16   2018-02-01 00:35:22
        e   2018-02-01 00:38:16   2018-02-01 00:38:20
3       f   2018-01-03 00:55:22   2018-03-01 01:20:16
        g   2018-01-03 00:58:21   2018-03-01 01:31:16
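
Just to make the idea concrete, here is a minimal sketch (the helper name label_timestamp is mine, not part of the original question) of how such a range table can classify a single missing timestamp: count how many of the player's sessions contain it and map that count to a session id, 0 or -99:

#sketch: classify one missing timestamp against the per-session ranges
#(assumes my_agg as computed above, indexed by (player, id))
def label_timestamp(ranges, player, ts):
    sessions = ranges.loc[player]                                     #sessions of this player
    matches = sessions[(sessions['min'] < ts) & (sessions['max'] > ts)]
    if len(matches) == 0:
        return 0                                                      #outside every session
    if len(matches) > 1:
        return -99                                                    #inside several sessions: ambiguous
    return matches.index[0]                                           #exactly one session: its id

label_timestamp(my_agg, '1', pd.Timestamp('2018-01-01 00:22:17'))     # -> 'b'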

2 answers:

Answer 0: (score: 0)

Probably not the best approach, but it does work. Basically, I create a few helper columns with shift and then apply the conditions you described in the question through np.select:

#helper columns holding the previous and the next id
df['shift'] = df['id'].shift(1)
df['shift-1'] = df['id'].shift(-1)
df['merge'] = df[['shift','shift-1']].values.tolist()
df.drop(columns=['shift','shift-1'], inplace=True)

#map the session ids to integers so their order can be compared
#(np.nan works as a dict key here because shift gives back the same nan object)
alpha = {np.nan:0,'a':1,'b':2,'c':3,'d':4,'e':5,'f':6,'g':7,'h':8}
diff = []
for i in range(len(df)):
    #difference between the next id and the previous id
    diff.append(alpha[df['merge'][i][1]] - alpha[df['merge'][i][0]])

df['diff'] = diff

#1) previous and next id are the same session of the same player -> fill with that id
#2) they differ, a neighbouring row belongs to the same player and the session
#   letters do not go backwards -> the action lies outside any session, label 0
#3) same as 2) but the letters go backwards (overlapping sessions) -> ambiguous, label -99
conditions = [(df['id'].shift(1).eq(df['id'].shift(-1)) & (df['id'].isna()) & (df['player'].shift(1).eq(df['player'].shift(-1)))),

              (~df['id'].shift(1).eq(df['id'].shift(-1)) & (df['id'].isna()) & (df['player'].shift(1).eq(df['player']) | 
                                                                                df['player'].shift(-1).eq(df['player'])) &
              (df['diff'] >= 0)),

              (~df['id'].shift(1).eq(df['id'].shift(-1)) & (df['id'].isna()) & (df['player'].shift(1).eq(df['player']) | 
                                                                                df['player'].shift(-1).eq(df['player'])) &
              (df['diff'] < 0)),
             ]
choices = [df['id'].ffill(),
           0,
           -99
          ]
df['id'] = np.select(conditions, choices, default = df['id'])
df.drop(columns=['merge','diff'], inplace=True)
df
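
For readers less familiar with it, np.select goes through the list of conditions in order and returns, element-wise, the choice of the first condition that is True, falling back to default when none matches. A tiny standalone illustration with toy data of my own:

import numpy as np
import pandas as pd

s = pd.Series([-2, 0, 3])
print(np.select([s < 0, s == 0], ['negative', 'zero'], default='positive'))
# ['negative' 'zero' 'positive']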

Answer 1: (score: 0)

In my solution I basically just had to take some care in correctly applying the function that @ysearka wrote for my previous stackoverflow question - see here. The basic challenge was to apply his function player by player.

#define a function to impute the missing values (ysearka's function from the previous stackoverflow question)
def my_custom_function(time):
    #compare every date event with the range of the sessions. 
    current_sessions = my_agg.loc[(my_agg['min']<time) & (my_agg['max']>time)]
    #store length, that is the number of matches. 
    count = len(current_sessions)
    #How many matches are there for any missing id value?
    # if 0 it means that no matches are found: the event lies outside all the possible ranges
    if count == 0:
        return 0
    #if more than one, it is impossible to say to which session the event belongs
    if count > 1:
        return -99
    #equivalent to if count == 1 return: in this case the event belongs clearly to just one session
    return current_sessions.index[0][1]


#create a list storing all the player ids
plist = list(df.player.unique())

#ignore settingcopywarning: https://stackoverflow.com/questions/20625582/how-to-deal-with-settingwithcopywarning-in-pandas
pd.options.mode.chained_assignment = None

# create an empty new dataframe, where to store the results
final = pd.DataFrame()
#with this for loop iterate over the part of the dataset corresponding to one player at a time
for i in plist:
    #slice the dataset by player
    players = df.loc[df['player'] == i]
    #for every player, take the dates where we are missing the id
    mv_per_player = players.loc[players.id.isnull(),'date']
    #for every player, group by session id and compute the first and last event of each session
    my_agg = players.groupby(['player', 'id']).date.agg([min, max])
    #apply the function to each chunk of the dataset. You obtain a series, with all the imputed values for the Nan
    ema = mv_per_player.apply(my_custom_function)    
    #now we can substitute the missing ids with the new imputed values...
    players.loc[players.id.isnull(),'id'] = ema.values    
    #append the new values stored in players to the new dataframe
    #(pd.concat is used because DataFrame.append was removed in recent pandas versions)
    final = pd.concat([final, players])

#...and check the new dataset
final

player  date    id
0   1   2018-01-01 00:17:01 0
1   1   2018-01-01 00:17:05 0
2   1   2018-01-01 00:19:05 a
3   1   2018-01-01 00:21:07 a
4   1   2018-01-01 00:22:09 b
5   1   2018-01-01 00:22:17 b
6   1   2018-01-01 00:25:09 b
7   1   2018-01-01 00:25:11 c
8   1   2018-01-01 00:27:28 c
9   1   2018-01-01 00:29:29 c
10  1   2018-01-01 00:30:35 c
11  2   2018-02-01 00:31:16 d
12  2   2018-02-01 00:35:22 d
13  2   2018-02-01 00:38:16 e
14  2   2018-02-01 00:38:20 e
15  2   2018-02-01 00:55:15 0
16  3   2018-01-03 00:55:22 f
17  3   2018-01-03 00:58:16 f
18  3   2018-01-03 00:58:21 g
19  3   2018-03-01 01:00:35 -99
20  3   2018-03-01 01:20:16 f
21  3   2018-03-01 01:31:16 g

I do not think my solution is the best one, and I would still appreciate other ideas, especially if they scale more easily (I have a very large dataset).
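
One direction that might scale better - only a sketch under the assumption that every player has a moderate number of sessions, and with names (ranges, cand, counts, filled) that are mine - is to drop the per-player Python loop and the apply, and instead merge the missing rows once against all the session ranges of the same player, then count the matches per row:

#sketch of a merge-based alternative (not benchmarked on large data)
#assumes the original df from the question, with the ids still missing
#first and last action of every session, as plain columns
ranges = (df.dropna(subset=['id'])
            .groupby(['player', 'id'])['date'].agg(['min', 'max'])
            .reset_index())

#keep the original row index of every missing action
missing = df[df['id'].isna()].reset_index()[['index', 'player', 'date']]
#pair every missing action with all the sessions of the same player
cand = missing.merge(ranges, on='player', how='left')
cand = cand[(cand['date'] > cand['min']) & (cand['date'] < cand['max'])]

#how many distinct sessions contain each missing timestamp?
counts = cand.groupby('index')['id'].agg(['nunique', 'first'])

filled = df.copy()
filled.loc[missing['index'], 'id'] = 0                      #default: no matching session
one = counts[counts['nunique'] == 1].index
many = counts[counts['nunique'] > 1].index
filled.loc[one, 'id'] = counts.loc[one, 'first']            #unique match: use that session id
filled.loc[many, 'id'] = -99                                 #several matches: ambiguous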