Pandas: efficient way to get a random subset from each row within a restricted column range

Time: 2019-05-14 10:11:44

Tags: python pandas subset

I have numeric time series of variable length stored in a wide pandas dataframe. Each row corresponds to one series and each column to a measurement time point. Because the series have different lengths, they can have missing values (NA) at their tails: on the left (earliest time points), on the right (latest time points), or on both sides. Every row always contains a contiguous stretch without NA of at least a minimum length.

I need to get a random subset of fixed length from each of those rows, containing no NA. Ideally, I would like to keep the original dataframe intact and report the subsets in a new one.

I managed to produce this output with a very inefficient for loop that goes through the rows one by one, determines a starting position for the crop such that no NA ends up in the output, and copies the cropped result. This works, but it is very slow on large datasets. Here is the code:

import pandas as pd
import numpy as np
from copy import copy

def crop_random(df_in, output_length, ignore_na_tails=True):
    # Initialize the output dataframe with generic column names X_0 ... X_{n-1}
    colnames = ['X_' + str(i) for i in range(output_length)]
    df_crop = pd.DataFrame(index=df_in.index, columns=colnames)
    # Go through all rows one by one
    for irow in range(df_in.shape[0]):
        series = copy(df_in.iloc[irow, :])
        series = np.array(series).astype('float')
        length = len(series)
        if ignore_na_tails:
            # Positions of the non-NA stretch in this row
            pos_non_na = np.where(~np.isnan(series))
            # Range where the subset might start
            lo = pos_non_na[0][0]
            hi = pos_non_na[0][-1]
            # +1 to include the last valid start in randint (exclusive upper bound);
            # +1 because the window spans output_length positions
            left = np.random.randint(lo, hi - output_length + 2)
        else:
            # +1 so the window ending at the last column can also be drawn
            left = np.random.randint(0, length - output_length + 1)
        # Crop the row and copy it into the output
        series = series[left : left + output_length]
        df_crop.iloc[irow, :] = series
    return df_crop

And a toy example:

df = pd.DataFrame.from_dict({'t0': [np.nan, 1, np.nan],
                             't1': [np.nan, 2, np.nan],
                             't2': [np.nan, 3, np.nan],
                             't3': [1, 4, 1],
                             't4': [2, 5, 2],
                             't5': [3, 6, 3],
                             't6': [4, 7, np.nan],
                             't7': [5, 8, np.nan],
                             't8': [6, 9, np.nan]})
#     t0   t1   t2  t3  t4  t5   t6   t7   t8
# 0  NaN  NaN  NaN   1   2   3    4    5    6
# 1    1    2    3   4   5   6    7    8    9
# 2  NaN  NaN  NaN   1   2   3  NaN  NaN  NaN

crop_random(df, 3)
# One possible output:
#    X_0  X_1  X_2
# 0    2    3    4
# 1    7    8    9
# 2    1    2    3

How can I get the same result in a way that scales to large dataframes?

Edit: moved the improved solution to the answer section.

1 answer:

Answer 0 (score: 1)

I managed to speed things up considerably with the following approach:

def crop_random(dataset, output_length, ignore_na_tails=True):
    # Get a random range to crop for each row
    def get_range_crop(series, output_length, ignore_na_tails):
        series = np.array(series).astype('float')
        if ignore_na_tails:
            # Positions of the non-NA stretch in this row
            pos_non_na = np.where(~np.isnan(series))
            start = pos_non_na[0][0]
            end = pos_non_na[0][-1]
            left = np.random.randint(start,
                                     end - output_length + 2)  # +1 to include last in randint; +1 for selection span
        else:
            length = len(series)
            left = np.random.randint(0, length - output_length + 1)  # +1 to include last in randint
        right = left + output_length
        return left, right

    # Crop the rows to random ranges; reset_index to do concat without recreating new columns
    range_subset = dataset.apply(get_range_crop, args=(output_length, ignore_na_tails), axis=1)
    new_rows = [dataset.iloc[irow, range_subset[irow][0]:range_subset[irow][1]]
                for irow in range(dataset.shape[0])]
    for row in new_rows:
        row.reset_index(drop=True, inplace=True)

    # Concatenate all cropped rows back into a single dataframe
    dataset_cropped = pd.concat(new_rows, axis=1).T

    return dataset_cropped
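
For example, running this faster version on the toy dataframe from the question gives the same kind of result as the original loop. The exact values vary from run to run since the start positions are random, and the columns come back labelled 0, 1, 2 rather than X_0, X_1, X_2 because the cropped rows are concatenated directly:

crop_random(df, 3)
# One possible output (values may display as floats when a crop
# overlaps a column that contains NaN in another row; exact
# rendering depends on the pandas version):
#      0    1    2
# 0  2.0  3.0  4.0
# 1  7.0  8.0  9.0
# 2  1.0  2.0  3.0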