Do you know whether there is a way to get Python's random.sample to work with a generator object? I am trying to get a random sample from a very large text corpus. The problem is that random.sample() raises the following error:
TypeError: object of type 'generator' has no len()
I was thinking that maybe there is some way of doing this with something from itertools, but couldn't find anything with a bit of searching.
A somewhat made-up example:
import random

def list_item(ls):
    for item in ls:
        yield item

random.sample(list_item(range(100)), 20)
UPDATE
As requested by Martijn Pieters, I did some timing of the three currently proposed methods. The results are as follows.
Sampling 1000 from 10000
Using iterSample 0.0163 s
Using sample_from_iterable 0.0098 s
Using iter_sample_fast 0.0148 s
Sampling 10000 from 100000
Using iterSample 0.1786 s
Using sample_from_iterable 0.1320 s
Using iter_sample_fast 0.1576 s
Sampling 100000 from 1000000
Using iterSample 3.2740 s
Using sample_from_iterable 1.9860 s
Using iter_sample_fast 1.4586 s
Sampling 200000 from 1000000
Using iterSample 7.6115 s
Using sample_from_iterable 3.0663 s
Using iter_sample_fast 1.4101 s
Sampling 500000 from 1000000
Using iterSample 39.2595 s
Using sample_from_iterable 4.9994 s
Using iter_sample_fast 1.2178 s
Sampling 2000000 from 5000000
Using iterSample 798.8016 s
Using sample_from_iterable 28.6618 s
Using iter_sample_fast 6.6482 s
It turns out that array.insert has a serious drawback when it comes to large sample sizes. The code I used to time the methods:
from heapq import nlargest
import random
import timeit

def iterSample(iterable, samplesize):
    results = []
    for i, v in enumerate(iterable):
        r = random.randint(0, i)
        if r < samplesize:
            if i < samplesize:
                results.insert(r, v)  # add first samplesize items in random order
            else:
                results[r] = v  # at a decreasing rate, replace random items
    if len(results) < samplesize:
        raise ValueError("Sample larger than population.")
    return results

def sample_from_iterable(iterable, samplesize):
    return (x for _, x in nlargest(samplesize, ((random.random(), x) for x in iterable)))

def iter_sample_fast(iterable, samplesize):
    results = []
    iterator = iter(iterable)
    # Fill in the first samplesize elements:
    for _ in xrange(samplesize):
        results.append(iterator.next())
    random.shuffle(results)  # Randomize their positions
    for i, v in enumerate(iterator, samplesize):
        r = random.randint(0, i)
        if r < samplesize:
            results[r] = v  # at a decreasing rate, replace random items
    if len(results) < samplesize:
        raise ValueError("Sample larger than population.")
    return results

if __name__ == '__main__':
    pop_sizes = [int(10e+3), int(10e+4), int(10e+5), int(10e+5), int(10e+5), int(10e+5)*5]
    k_sizes = [int(10e+2), int(10e+3), int(10e+4), int(10e+4)*2, int(10e+4)*5, int(10e+5)*2]
    for pop_size, k_size in zip(pop_sizes, k_sizes):
        pop = xrange(pop_size)
        k = k_size
        t1 = timeit.Timer(stmt='iterSample(pop, %i)' % (k_size), setup='from __main__ import iterSample,pop')
        t2 = timeit.Timer(stmt='sample_from_iterable(pop, %i)' % (k_size), setup='from __main__ import sample_from_iterable,pop')
        t3 = timeit.Timer(stmt='iter_sample_fast(pop, %i)' % (k_size), setup='from __main__ import iter_sample_fast,pop')

        print 'Sampling', k, 'from', pop_size
        print 'Using iterSample', '%1.4f s' % (t1.timeit(number=100) / 100.0)
        print 'Using sample_from_iterable', '%1.4f s' % (t2.timeit(number=100) / 100.0)
        print 'Using iter_sample_fast', '%1.4f s' % (t3.timeit(number=100) / 100.0)
        print ''
I also ran a test to check that all of the methods indeed take an unbiased sample of the generator. So for all methods I sampled 1000 elements from 10000, 100000 times, and computed the average frequency of occurrence of each item in the population, which turns out to be ~.1, as one would expect for all three methods.
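A minimal sketch of such a bias check (an assumed harness, not the original script, written to run against the Python 2 functions above; the trial count is reduced here so it finishes quickly):

from collections import Counter

def check_bias(sampler, pop_size=10000, k=1000, trials=1000):
    """Repeatedly sample k of pop_size items and report the min/max empirical
    frequency of any item; both should hover around k / pop_size (0.1 here)."""
    counts = Counter()
    for _ in range(trials):
        counts.update(sampler(iter(range(pop_size)), k))
    freqs = [counts[i] / float(trials) for i in range(pop_size)]
    print(min(freqs), max(freqs))

check_bias(iter_sample_fast)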
Answer 0 (score: 19)
While Martijn Pieters' answer is correct, it does slow down when samplesize becomes large, because using list.insert in a loop may have quadratic complexity.

Here is an alternative that, in my opinion, preserves the uniformity while increasing performance:
def iter_sample_fast(iterable, samplesize):
    results = []
    iterator = iter(iterable)
    # Fill in the first samplesize elements:
    try:
        for _ in xrange(samplesize):
            results.append(iterator.next())
    except StopIteration:
        raise ValueError("Sample larger than population.")
    random.shuffle(results)  # Randomize their positions
    for i, v in enumerate(iterator, samplesize):
        r = random.randint(0, i)
        if r < samplesize:
            results[r] = v  # at a decreasing rate, replace random items
    return results
The difference starts to show for samplesize values above 10000. Times for calling with (1000000, 100000):
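A minimal usage sketch (Python 2, to match the code above; the generator is just for illustration):

squares = (x * x for x in xrange(1000000))
sample = iter_sample_fast(squares, 100000)  # one pass; only samplesize items are held in memory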
Answer 1 (score: 17)
You can't.
You have two options: read the whole generator into a list, then sample from that list, or use a method that reads the generator one by one and picks the sample from that:
import random

def iterSample(iterable, samplesize):
    results = []
    for i, v in enumerate(iterable):
        r = random.randint(0, i)
        if r < samplesize:
            if i < samplesize:
                results.insert(r, v)  # add first samplesize items in random order
            else:
                results[r] = v  # at a decreasing rate, replace random items
    if len(results) < samplesize:
        raise ValueError("Sample larger than population.")
    return results
This method adjusts the chance that the next item is part of the sample based on the number of items seen in the iterable so far. It does not need to hold more than samplesize items in memory.

The solution isn't mine; it was provided as part of another answer here on SO.
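A minimal usage sketch matching the question's corpus use case (the file name is a hypothetical stand-in for the large corpus):

# Sample 20 lines from a potentially huge file without loading it all into memory.
with open('corpus.txt') as fh:
    lines = iterSample((line for line in fh), 20)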
Answer 2 (score: 7)
Just for the heck of it, here is a one-liner that samples k elements without replacement from the n items generated, in O(n lg k) time:
import random
from heapq import nlargest

def sample_from_iterable(it, k):
    return (x for _, x in nlargest(k, ((random.random(), x) for x in it)))
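A minimal usage sketch (the generator and sizes are just for illustration; the one-liner returns a generator, so materialize it if you need a list):

squares = (i * i for i in range(100))
picked = list(sample_from_iterable(squares, 5))  # 5 distinct squares, chosen without replacement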
Answer 3 (score: 2)
I am trying to get a random sample from a very large text corpus.
Your excellent synthesis answer currently shows iter_sample_fast(gen, pop) winning. However, I tried Katriel's recommendation of random.sample(list(gen), pop), and it is blazingly fast by comparison!
def iter_sample_easy(iterable, samplesize):
    return random.sample(list(iterable), samplesize)
Sampling 1000 from 10000
Using iter_sample_fast 0.0192 s
Using iter_sample_easy 0.0009 s
Sampling 10000 from 100000
Using iter_sample_fast 0.1807 s
Using iter_sample_easy 0.0103 s
Sampling 100000 from 1000000
Using iter_sample_fast 1.8192 s
Using iter_sample_easy 0.2268 s
Sampling 200000 from 1000000
Using iter_sample_fast 1.7467 s
Using iter_sample_easy 0.3297 s
Sampling 500000 from 1000000
Using iter_sample_easy 0.5628 s
Sampling 2000000 from 5000000
Using iter_sample_easy 2.7147 s
Now, as your corpus gets very large, materializing the whole iterable into a list will use prohibitively large amounts of memory. But we can still exploit Python's blazing speed if we can chunk up the problem: basically, we pick a reasonably small CHUNKSIZE, run random.sample on chunks of that size, and then use random.sample again to merge them together. We just have to get the boundary conditions right.
Here is how I would do it if the length of list(iterable) is an exact multiple of CHUNKSIZE and no bigger than samplesize*CHUNKSIZE:
import itertools
import random

def iter_sample_dist_naive(iterable, samplesize):
    CHUNKSIZE = 10000
    samples = []
    it = iter(iterable)
    try:
        while True:
            first = next(it)
            chunk = itertools.chain([first], itertools.islice(it, CHUNKSIZE - 1))
            samples += iter_sample_easy(chunk, samplesize)
    except StopIteration:
        return random.sample(samples, samplesize)
However, the code above produces non-uniform sampling when len(list(iterable)) % CHUNKSIZE != 0, and it runs out of memory as len(list(iterable)) * samplesize / CHUNKSIZE becomes "very large". Fixing these bugs is beyond me, I'm afraid, but a solution is described in this blog post and sounds quite reasonable to me. (Search terms: "distributed random sampling", "distributed reservoir sampling".)
Sampling 1000 from 10000
Using iter_sample_fast 0.0182 s
Using iter_sample_dist_naive 0.0017 s
Using iter_sample_easy 0.0009 s
Sampling 10000 from 100000
Using iter_sample_fast 0.1830 s
Using iter_sample_dist_naive 0.0402 s
Using iter_sample_easy 0.0103 s
Sampling 100000 from 1000000
Using iter_sample_fast 1.7965 s
Using iter_sample_dist_naive 0.6726 s
Using iter_sample_easy 0.2268 s
Sampling 200000 from 1000000
Using iter_sample_fast 1.7467 s
Using iter_sample_dist_naive 0.8209 s
Using iter_sample_easy 0.3297 s
Where we really win big is when samplesize is small relative to len(list(iterable)).
Sampling 20 from 10000
Using iterSample 0.0202 s
Using sample_from_iterable 0.0047 s
Using iter_sample_fast 0.0196 s
Using iter_sample_easy 0.0001 s
Using iter_sample_dist_naive 0.0004 s
Sampling 20 from 100000
Using iterSample 0.2004 s
Using sample_from_iterable 0.0522 s
Using iter_sample_fast 0.1903 s
Using iter_sample_easy 0.0016 s
Using iter_sample_dist_naive 0.0029 s
Sampling 20 from 1000000
Using iterSample 1.9343 s
Using sample_from_iterable 0.4907 s
Using iter_sample_fast 1.9533 s
Using iter_sample_easy 0.0211 s
Using iter_sample_dist_naive 0.0319 s
Sampling 20 from 10000000
Using iterSample 18.6686 s
Using sample_from_iterable 4.8120 s
Using iter_sample_fast 19.3525 s
Using iter_sample_easy 0.3162 s
Using iter_sample_dist_naive 0.3210 s
Sampling 20 from 100000000
Using iter_sample_easy 2.8248 s
Using iter_sample_dist_naive 3.3817 s
Answer 4 (score: 0)
If the number of items in the iterator is known (by counting the items elsewhere), another approach is:
import random

def iter_sample(iterable, iterlen, samplesize):
    if iterlen < samplesize:
        raise ValueError("Sample larger than population.")
    indexes = set()
    while len(indexes) < samplesize:
        # randint is inclusive on both ends, so draw from [0, iterlen - 1]
        indexes.add(random.randint(0, iterlen - 1))
    indexesiter = iter(sorted(indexes))
    current = indexesiter.next()
    ret = []
    for i, item in enumerate(iterable):
        if i == current:
            ret.append(item)
            try:
                current = indexesiter.next()
            except StopIteration:
                break
    random.shuffle(ret)
    return ret
I find this faster, especially when samplesize is small relative to iterlen. When the whole, or close to the whole, sample is requested, however, there are problems.
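A minimal usage sketch (the line count is assumed to be known from elsewhere, and the file name is hypothetical):

n_lines = 1000000  # known population size, e.g. from a previous counting pass
with open('corpus.txt') as fh:
    lines = iter_sample(fh, n_lines, 100)  # one pass over the file, 100 random lines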
iter_sample(iterlen=10000, samplesize=100) time: (1, 'ms')    iter_sample_fast(iterlen=10000, samplesize=100) time: (15, 'ms')
iter_sample(iterlen=1000000, samplesize=100) time: (65, 'ms')    iter_sample_fast(iterlen=1000000, samplesize=100) time: (1477, 'ms')
iter_sample(iterlen=1000000, samplesize=1000) time: (64, 'ms')    iter_sample_fast(iterlen=1000000, samplesize=1000) time: (1459, 'ms')
iter_sample(iterlen=1000000, samplesize=10000) time: (86, 'ms')    iter_sample_fast(iterlen=1000000, samplesize=10000) time: (1480, 'ms')
iter_sample(iterlen=1000000, samplesize=100000) time: (388, 'ms')    iter_sample_fast(iterlen=1000000, samplesize=100000) time: (1521, 'ms')
iter_sample(iterlen=1000000, samplesize=1000000) time: (25359, 'ms')    iter_sample_fast(iterlen=1000000, samplesize=1000000) time: (2178, 'ms')
Answer 5 (score: 0)
The fastest method until proven otherwise, when you have an idea of how long the generator is (and it will be asymptotically uniformly distributed):
import random
import numpy

def gen_sample(generator_list, sample_size, iterlen):
    num = 0
    inds = numpy.random.random(iterlen) <= (sample_size * 1.0 / iterlen)
    results = []
    iterator = iter(generator_list)
    gotten = 0
    while gotten < sample_size:
        try:
            b = iterator.next()
            if inds[num]:
                results.append(b)
                gotten += 1
            num += 1
        except:
            # Ran off the end of the iterator (or the mask): start over and
            # redraw the inclusion mask for the items still needed.
            num = 0
            iterator = iter(generator_list)
            inds = numpy.random.random(iterlen) <= ((sample_size - gotten) * 1.0 / iterlen)
    return results
It is the fastest both on the small iterable and on the huge iterable (and probably everything in between, too).
# Huge
res = gen_sample(xrange(5000000), 200000, 5000000)
timing: 1.22s
# Small
z = gen_sample(xrange(10000), 1000, 10000)
timing: 0.000441
Answer 6 (score: 0)
If the population size n is known, here is some memory-efficient code that loops over a generator, extracting only the target samples:
from random import sample
from itertools import count, compress

targets = set(sample(range(n), k=10))
for selection in compress(pop, map(targets.__contains__, count())):
    print(selection)
This outputs the selections in the order they are produced by the population generator.
The technique is to randomly select the target indices with the standard library's random.sample(). The second line determines whether a given index is among the targets and, if so, yields the corresponding value from the generator.
For example, given targets of {6, 2, 4}:
0 1 2 3 4 5 6 7 8 9 10 ... output of count()
F F T F T F T F F F F ... is the count in targets?
A B C D E F G H I J K ... output of the population generator
- - C - E - G - - - - ... selections emitted by compress
This technique works for a corpus too large to fit in memory (otherwise, you can just use sample() directly on the population).
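A self-contained sketch of the same idea (the helper name, the word generator, and the sizes are assumptions for illustration; n and pop above are placeholders for your known population size and your generator):

import random
from itertools import count, compress

def sample_in_order(population_iter, n, k):
    """Single pass over an iterator of known length n, keeping k randomly
    chosen items in the order the iterator produces them."""
    targets = set(random.sample(range(n), k))
    return compress(population_iter, map(targets.__contains__, count()))

words = ("word%d" % i for i in range(20))
print(list(sample_in_order(words, 20, 3)))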
Answer 7 (score: 0)
Here is a radically different variation that uses a set as a bucket of items. It starts by priming the bucket with pool items, then yields samples from the bucket, replacing them with new items from the iterator, and finally it drains what is left of the bucket.

HashWrapper serves to hide unhashable types from the set.
import random
from typing import Iterator


class HashWrapper(tuple):
    """Wrap unhashable type."""

    def __hash__(self):
        return id(self)


def randomize_iterator(data: Iterator, pool=100) -> Iterator:
    """
    Randomize an iterator.
    """
    bucket = set()
    iterator = iter(data)

    # Prime the bucket
    for _ in range(pool):
        try:
            bucket.add(HashWrapper(next(iterator)))
        except StopIteration:
            # We've drained the iterator
            break

    # Start picking from the bucket and replacing new items from the iterator
    for item in iterator:
        sample, = random.sample(tuple(bucket), 1)  # random.sample needs a sequence on Python 3.11+
        yield sample
        bucket.remove(sample)
        bucket.add(HashWrapper(item))

    # Drain the bucket
    yield from random.sample(tuple(bucket), len(bucket))
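A minimal usage sketch (the record generator and pool size are assumptions for illustration; note that items must themselves be iterable so that HashWrapper, a tuple subclass, can wrap them, and they come back out as HashWrapper tuples):

records = ([i, i * i] for i in range(1000))  # unhashable list records
for rec in randomize_iterator(records, pool=100):
    print(rec)  # roughly shuffled order; memory use is bounded by the pool size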