Pandas: distinguishing between rows by a value in the row and values in other columns

Asked: 2018-11-09 14:30:32

Tags: python pandas performance loops key

I have a historical dataframe of contracts with employees. An employee may appear in the records multiple times. The target documents are of 3 types. The goal is to calculate how long a particular employee has worked at the company. I found a solution, but the code takes almost 2 hours to run. Is there a faster and more convenient way to do this?

The original table has over 200,000 rows.

Here is an example of its structure:

import pandas as pd

df = pd.DataFrame({
                    'name': ['John Johnson', 'John Johnson', 'John Johnson', 'John Johnson', 'Tom Thompson', 'Tom Thompson',
                            'Steve Stevens', 'Steve Stevens', 'Steve Stevens', 'Steve Stevens', 'Steve Stevens', 
                            'Tom Thompson', 'Tom Thompson', 'Tom Thompson', 'Tom Thompson'], 
                   'doc_type': ['opening_document','any_other_document','any_other_document','closing_document2','opening_document','any_other_document',
                                'opening_document','any_other_document','closing_document1','opening_document','closing_document2',
                               'any_other_document','closing_document1','any_other_document','opening_document'], 
                   'date': pd.to_datetime(['2017-1-1', '2017-1-2', '2017-1-10', '2017-1-15', '2017-1-16', '2017-1-17',
                                '2018-1-2', '2018-1-10', '2018-1-15', '2018-1-16', '2018-1-30',
                                '2017-2-1', '2017-2-4', '2017-3-10', '2017-5-15'])
                  })

# sort by date
df = df.sort_values(by='date').reset_index(drop=True)

Output:

+----+---------------+--------------------+---------------------+
|    |     name      |      doc_type      |        date         |
|----+---------------+--------------------+---------------------|
|  0 | John Johnson  |  opening_document  | 2017-01-01 00:00:00 |
|  1 | John Johnson  | any_other_document | 2017-01-02 00:00:00 |
|  2 | John Johnson  | any_other_document | 2017-01-10 00:00:00 |
|  3 | John Johnson  | closing_document2  | 2017-01-15 00:00:00 |
|  4 | Tom Thompson  |  opening_document  | 2017-01-16 00:00:00 |
|  5 | Tom Thompson  | any_other_document | 2017-01-17 00:00:00 |
|  6 | Tom Thompson  | any_other_document | 2017-02-01 00:00:00 |
|  7 | Tom Thompson  | closing_document1  | 2017-02-04 00:00:00 |
|  8 | Tom Thompson  | any_other_document | 2017-03-10 00:00:00 |
|  9 | Tom Thompson  |  opening_document  | 2017-05-15 00:00:00 |
| 10 | Steve Stevens |  opening_document  | 2018-01-02 00:00:00 |
| 11 | Steve Stevens | any_other_document | 2018-01-10 00:00:00 |
| 12 | Steve Stevens | closing_document1  | 2018-01-15 00:00:00 |
| 13 | Steve Stevens |  opening_document  | 2018-01-16 00:00:00 |
| 14 | Steve Stevens | closing_document2  | 2018-01-30 00:00:00 |
+----+---------------+--------------------+---------------------+

I need to calculate the time difference between opening_document and (closing_document1, closing_document2). All documents (not only the target ones) appear as rows.

My script produces the correct output:

%%time

import sys
import numpy as np

# since name alone is not enough for a correct JOIN, we need to make a new unique key;
# the logic relies on the fact that before a closing doc_type there is always an
# opening type (because you can't lay off someone you haven't hired yet)

df['key'] = np.nan                   # create new empty column

count_key = 0                        # key counter
df['key'][count_key] = count_key     # assign key 0 for row 0

for i in range(1, len(df)):          # start with row 1
    store = df['doc_type'][i]
    if store != 'opening_document':
        df['key'][i] = count_key     # if row is NOT 'opening_document' then keep the key the same
    else:
        count_key += 1               # else increment the key
        df['key'][i] = count_key     # and assign it to the current row

    # status bar, just to make sure that something is happening
    sys.stdout.write('\r')
    sys.stdout.write("[%-20s] %d%%" % ('='*round(20*(i/(len(df)-1))), (100/(len(df)-1))*i))
    sys.stdout.flush()

print('\n')

Wall time on the original dataframe: 1h 29min 53s

This gives us an additional key by which it can be unambiguously determined how to join:

+----+---------------+--------------------+---------------------+-------+
|    |     name      |      doc_type      |        date         |   key |
|----+---------------+--------------------+---------------------+-------|
|  0 | John Johnson  |  opening_document  | 2017-01-01 00:00:00 |     0 |
|  1 | John Johnson  | any_other_document | 2017-01-02 00:00:00 |     0 |
|  2 | John Johnson  | any_other_document | 2017-01-10 00:00:00 |     0 |
|  3 | John Johnson  | closing_document2  | 2017-01-15 00:00:00 |     0 |
|  4 | Tom Thompson  |  opening_document  | 2017-01-16 00:00:00 |     1 |
|  5 | Tom Thompson  | any_other_document | 2017-01-17 00:00:00 |     1 |
|  6 | Tom Thompson  | any_other_document | 2017-02-01 00:00:00 |     1 |
|  7 | Tom Thompson  | closing_document1  | 2017-02-04 00:00:00 |     1 |
|  8 | Tom Thompson  | any_other_document | 2017-03-10 00:00:00 |     1 |
|  9 | Tom Thompson  |  opening_document  | 2017-05-15 00:00:00 |     2 |
| 10 | Steve Stevens |  opening_document  | 2018-01-02 00:00:00 |     3 |
| 11 | Steve Stevens | any_other_document | 2018-01-10 00:00:00 |     3 |
| 12 | Steve Stevens | closing_document1  | 2018-01-15 00:00:00 |     3 |
| 13 | Steve Stevens |  opening_document  | 2018-01-16 00:00:00 |     4 |
| 14 | Steve Stevens | closing_document2  | 2018-01-30 00:00:00 |     4 |
+----+---------------+--------------------+---------------------+-------+

Merge to "pivot" the rows into columns by name and the new key, then calculate the difference in days between the opening and closing documents:

df_merged = pd.merge(df.loc[df['doc_type']=='opening_document'],
                     df.loc[df['doc_type'].isin(['closing_document1','closing_document2'])], 
                     on=['name','key'], 
                     how='left')

df_merged['time_diff'] = df_merged['date_y'] - df_merged['date_x']

The final correct output:

    name           doc_type_x        date_x                 key  doc_type_y         date_y               time_diff
--  -------------  ----------------  -------------------  -----  -----------------  -------------------  ----------------
 0  John Johnson   opening_document  2017-01-01 00:00:00      0  closing_document2  2017-01-15 00:00:00  14 days 00:00:00
 1  Tom Thompson   opening_document  2017-01-16 00:00:00      1  closing_document1  2017-02-04 00:00:00  19 days 00:00:00
 2  Tom Thompson   opening_document  2017-05-15 00:00:00      2  nan                NaT                  NaT
 3  Steve Stevens  opening_document  2018-01-02 00:00:00      3  closing_document1  2018-01-15 00:00:00  13 days 00:00:00
 4  Steve Stevens  opening_document  2018-01-16 00:00:00      4  closing_document2  2018-01-30 00:00:00  14 days 00:00:00
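As an aside, if the difference is wanted as a plain number of days rather than a Timedelta (an assumption; the post keeps the Timedelta column), the `.dt.days` accessor does the conversion. A minimal sketch on standalone data:

```python
import pandas as pd

# Timedelta values like those in the time_diff column, including a missing one (NaT)
diffs = pd.Series(pd.to_timedelta(['14 days', '19 days', pd.NaT]))

# .dt.days extracts whole days; NaT becomes NaN, so the result dtype is float
days = diffs.dt.days
```

Rows without a closing document stay missing (NaN) after the conversion.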

The best loop-free approach I found is the diff() method, but it turns out we don't know which "block" we are subtracting from which.

Doing this instead of the loop:

df1 = df.loc[df['doc_type'].isin(['opening_document','closing_document1','closing_document2'])].sort_values(by='date').reset_index(drop=True)
df1['diff'] = df1['date'].diff(-1)*(-1)
df1 = df1[df1['doc_type']=='opening_document'].reset_index(drop=True)

Output:

+----+---------------+------------------+---------------------+-------------------+
|    |     name      |     doc_type     |        date         |       diff        |
|----+---------------+------------------+---------------------+-------------------|
|  0 | John Johnson  | opening_document | 2017-01-01 00:00:00 | 14 days 00:00:00  |
|  1 | Tom Thompson  | opening_document | 2017-01-16 00:00:00 | 19 days 00:00:00  |
|  2 | Tom Thompson  | opening_document | 2017-05-15 00:00:00 | 232 days 00:00:00 |
|  3 | Steve Stevens | opening_document | 2018-01-02 00:00:00 | 13 days 00:00:00  |
|  4 | Steve Stevens | opening_document | 2018-01-16 00:00:00 | 14 days 00:00:00  |
+----+---------------+------------------+---------------------+-------------------+

The value in the row with index 2 is wrong: there is no closing document for it at all.

How can I improve performance while keeping the correct output?

1 Answer:

Answer 0 (score: 2)

To improve on the performance of the for loop, you can use shift on the column 'name' to find where it changes, or where 'opening_document' appears in 'doc_type', and then use cumsum to increment the value, such as:

df['key'] = ((df.name != df.name.shift())|(df.doc_type == 'opening_document')).cumsum()

Then using merge as you do should be efficient enough. If you want the key to start at 0, just add -1 at the end of the code above.

EDIT: since every time the name changes the value in 'doc_type' is 'opening_document', you can probably keep only the second condition, such as:

df['key'] = (df.doc_type == 'opening_document').cumsum()
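A minimal end-to-end sketch of this approach on the question's sample data (already sorted by date), using the simplified key with -1 so that keys start at 0, followed by the question's own merge:

```python
import pandas as pd

# Sample data from the question, in date order
df = pd.DataFrame({
    'name': ['John Johnson'] * 4 + ['Tom Thompson'] * 6 + ['Steve Stevens'] * 5,
    'doc_type': ['opening_document', 'any_other_document', 'any_other_document',
                 'closing_document2', 'opening_document', 'any_other_document',
                 'any_other_document', 'closing_document1', 'any_other_document',
                 'opening_document', 'opening_document', 'any_other_document',
                 'closing_document1', 'opening_document', 'closing_document2'],
    'date': pd.to_datetime(['2017-01-01', '2017-01-02', '2017-01-10', '2017-01-15',
                            '2017-01-16', '2017-01-17', '2017-02-01', '2017-02-04',
                            '2017-03-10', '2017-05-15', '2018-01-02', '2018-01-10',
                            '2018-01-15', '2018-01-16', '2018-01-30'])
})

# Vectorized key: every opening_document starts a new group; -1 makes keys start at 0
df['key'] = (df['doc_type'] == 'opening_document').cumsum() - 1

# Merge openings with closings on name and key, as in the question
df_merged = pd.merge(
    df.loc[df['doc_type'] == 'opening_document'],
    df.loc[df['doc_type'].isin(['closing_document1', 'closing_document2'])],
    on=['name', 'key'], how='left')
df_merged['time_diff'] = df_merged['date_y'] - df_merged['date_x']
```

This reproduces the key column and the time differences from the question (14, 19, NaT, 13 and 14 days) without any Python-level loop, which is what removes the two-hour runtime.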