Stratified random sampling with population balancing

Date: 2017-11-30 14:00:27

Tags: python pandas scikit-learn statistics

Consider a population with a skewed class distribution, such as

     ErrorType   Samples
        1          XXXXXXXXXXXXXXX
        2          XXXXXXXX
        3          XX
        4          XXX
        5          XXXXXXXXXXXX

I want to randomly sample, say, 20 of them without leaving any of the less-populated classes under-represented. For example, in the case above I would want to sample as follows:

     ErrorType   Samples
        1          XXXXX|XXXXXXXXXX
        2          XXXXX|XXX
        3          XX***|
        4          XXX**|
        5          XXXXX|XXXXXXX

i.e. 5 each from Type-1, Type-2 and Type-5, 2 from Type-3 and 3 from Type-4.

  1. This keeps my sample size close to my target of 20 samples.
  2. No class goes unrepresented, especially class-3 and class-4.
  3. I ended up writing a rather convoluted piece of code, but I believe this can be done more conveniently with pandas methods or some sklearn function (a rough sketch of the allocation idea follows the code below).

     sample_size = 20  # Just for the example
     # Determine the average participation per error type
     avg_items = sample_size / len(df.ErrorType.unique())
     value_counts = df.ErrorType.value_counts()
     # Classes below the average contribute everything they have ...
     less_than_avg = value_counts[value_counts < avg_items]
     # ... and their shortfall is spread over the remaining classes
     offset = avg_items * len(less_than_avg) - sum(less_than_avg)
     offset_per_item = offset / (len(value_counts) - len(less_than_avg))
     adj_avg = int(sample_size / len(value_counts) + offset_per_item)
     df = df.groupby(['ErrorType'],
                     group_keys=False).apply(lambda g: g.sample(min(adj_avg, len(g))))
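
For reference, the allocation described in (1) and (2) can also be computed more directly: hand each class its fair share of whatever budget is still left, smallest classes first, so classes below the average simply contribute everything they have. The allocate helper below is only an illustrative sketch (not a library function), assuming df is the DataFrame above with an ErrorType column:

     import pandas as pd

     def allocate(counts, sample_size):
         """Split sample_size across classes: classes smaller than their fair
         share contribute everything; the rest share the remainder evenly."""
         counts = counts.sort_values()          # handle the smallest classes first
         sizes = {}
         remaining, classes_left = sample_size, len(counts)
         for label, n in counts.items():
             share = remaining // classes_left  # fair share of what is still needed
             sizes[label] = min(n, share)
             remaining -= sizes[label]
             classes_left -= 1
         return pd.Series(sizes)

     sizes = allocate(df.ErrorType.value_counts(), 20)  # e.g. {3: 2, 4: 3, 2: 5, 5: 5, 1: 5}
     sample = df.groupby('ErrorType', group_keys=False).apply(
         lambda g: g.sample(sizes[g.name]))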
    

3 Answers:

Answer 0 (score: 2)

You can use a helper column to find the rows whose Samples are longer than the per-row sample size, and then use pd.Series.sample.

Example:

import numpy as np
import pandas as pd

df = pd.DataFrame({'ErrorType':[1,2,3,4,5],
               'Samples':[np.arange(100),np.arange(10),np.arange(3),np.arange(2),np.arange(100)]})

df['new'] = df['Samples'].str.len().where(df['Samples'].str.len()<5,5)
# this tells us how many samples can be extracted per row
#0    5
#1    5
#2    3
#3    2
#4    5
#Name: new, dtype: int64
# Sampling based on the newly obtained column i.e
df.apply(lambda x : pd.Series(x['Samples']).sample(x['new']).tolist(),1)

0    [52, 81, 43, 60, 46]
1         [8, 7, 0, 9, 1]
2               [2, 1, 0]
3                  [1, 0]
4    [29, 24, 16, 15, 69]
Name: sample2, dtype: object

I wrote a function that returns the sample sizes using a threshold, i.e.

def get_thres_arr(sample_size, sample_length):
    thresh = sample_length.min()
    size = np.array([thresh]*len(sample_length))
    sum_of_size = sum(size)
    while sum_of_size < sample_size:
        # If the length is more than the threshold then allow thresh+1 samples, i.e.
        size = np.where(sample_length>thresh,thresh+1,sample_length)
        sum_of_size = sum(size)
        # increment the threshold
        thresh += 1
    return size

df = pd.DataFrame({'ErrorType':[1,2,3,4,5,1,7,9,4,5],
                   'Samples':[np.arange(100),np.arange(10),np.arange(3),np.arange(2),np.arange(100),np.arange(100),np.arange(10),np.arange(3),np.arange(2),np.arange(100)]})
ndf = pd.DataFrame({'ErrorType':[1,2,3,4,5,6],
                   'Samples':[np.arange(100),np.arange(10),np.arange(3),np.arange(1),np.arange(2),np.arange(100)]})


get_thres_arr(20,ndf['Samples'].str.len())
#array([5, 5, 3, 1, 2, 5])

get_thres_arr(20,df['Samples'].str.len())
#array([2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

Now you can use these sizes:

df['new'] = get_thres_arr(20,df['Samples'].str.len())
df.apply(lambda x : pd.Series(x['Samples']).sample(x['new']).tolist(),1)

0    [64, 89]
1      [4, 0]
2      [0, 1]
3      [1, 0]
4    [41, 80]
5    [25, 84]
6      [4, 0]
7      [2, 0]
8      [1, 0]
9     [34, 1]

Hope it helps.

Answer 1 (score: 1)

Wow, I got properly nerd-sniped by this one. I wrote a function that does what you want in numpy, without any magic numbers.... It's not pretty, but I can't spend all this time writing something and then not post it as an answer. It has two outputs, n_for_each_label and random_idxs, which are the number of selections per class and the randomly selected indices of your data, respectively. I can't think of why you would want random_idxs once you have n_for_each_label, though.

Edit: As far as I know there is no function in scikit-learn that does this; it is not a very common way of subsampling your data for ML, so I doubt there is one.
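
For what it's worth, the closest built-in I know of is sklearn's proportional stratified splitting, which preserves the original class proportions rather than balancing them, so types 3 and 4 would still end up under-represented. A minimal sketch, assuming df is the question's DataFrame with an ErrorType column:

from sklearn.model_selection import train_test_split

# Proportional stratified sampling: every class keeps the share it has in the
# full population, which is not the balanced allocation asked for here.
sample, _ = train_test_split(df, train_size=20, stratify=df.ErrorType,
                             random_state=0)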

import numpy as np

# This is your input: the sample size and your labels
sample_size = 20
# in your case you'd just want y = df.ErrorType
y = np.hstack((np.ones(15), np.ones(8)*2,
               np.ones(2)*3, np.ones(3)*4,
               np.ones(12)*5))
y = y.astype(int)
# y = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2,
#      3, 3, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]

# Below is the function
unique_labels = np.unique(y)
bin_c = np.bincount(y)[unique_labels]
label_mat = np.ones((bin_c.shape[0], bin_c.max()), dtype=int)*-1
for i in range(unique_labels.shape[0]):
    label_loc = np.where(y == unique_labels[i])[0]
    np.random.shuffle(label_loc)
    label_mat[i, :label_loc.shape[0]] = label_loc
random_size = 0
i = 1
while random_size < sample_size:
    i += 1
    random_size = np.sum(label_mat[:, :i] != -1)

if random_size == sample_size:
    random_idxs = label_mat[:, :i]
    n_for_each_label = np.sum(random_idxs != -1, axis=1)
    random_idxs = random_idxs[random_idxs != -1]
else:
    random_idxs = label_mat[:, :i]
    last_idx = np.where(random_idxs[:, -1] != -1)[0]
    n_drop = random_size - sample_size
    # drop without replacement so exactly n_drop samples are removed
    drop_idx = np.random.choice(last_idx, n_drop, replace=False)
    random_idxs[drop_idx, -1] = -1
    n_for_each_label = np.sum(random_idxs != -1, axis=1)
    random_idxs = random_idxs[random_idxs != -1]

Outputs:

n_for_each_label = array([5,5,2,3,5])

These are the numbers to draw from each error type, or, if you want to skip straight to the end:

random_idxs = array([3, 11, 8, 13, 9, 22, 15, 17, 20, 18, 23, 24, 25, 26, 27, 36, 32, 38, 35, 33])
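
As a small usage sketch (assuming y was built as y = df.ErrorType.values, so that positions in y line up with rows of df), the two outputs can be used like this:

import pandas as pd

# pull the chosen rows out by position ...
sample = df.iloc[random_idxs]
# ... or draw n_for_each_label[k] fresh rows from each class with pandas
sample = pd.concat(g.sample(n) for (_, g), n in
                   zip(df.groupby('ErrorType'), n_for_each_label))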

Answer 2 (score: 0)

No magic numbers. Just sample from the whole population, coded in the obvious way.

The first step is to replace each 'X' with the numeric code of the stratum in which it appears. Encoded this way, the entire population is held in a single string called entire_population.

>>> strata = {}
>>> with open('skewed.txt') as skewed:
...     _ = next(skewed)
...     for line in skewed:
...         error_type, samples = line.rstrip().split()
...         strata[error_type] = samples
... 
>>> whole = []
>>> for _ in strata:
...     strata[_] = strata[_].replace('X', _)
...     _, strata[_]
...     whole.append(strata[_])
...     
('3', '33')
('2', '22222222')
('1', '111111111111111')
('5', '555555555555')
('4', '444')
>>> entire_population = ''.join(whole)

Subject to the constraint that sample_size must be 20, sample at random from the entire population to form the complete sample.

>>> sample = []
>>> sample_size = 20
>>> from random import choice
>>> for s in range(sample_size):
...     sample.append(choice(entire_population))
...     
>>> sample
['2', '5', '1', '5', '1', '1', '1', '3', '5', '5', '5', '1', '5', '2', '5', '1', '2', '2', '2', '5']

Finally, describe the sample as a sampling design by counting the representatives of each stratum in it.

>>> from collections import Counter
>>> Counter(sample)
Counter({'5': 8, '1': 6, '2': 5, '3': 1})