How to iterate over a dictionary - n key-value pairs at a time

Time: 2015-01-19 10:14:37

Tags: python dictionary

I have a very large dictionary containing thousands of elements. I need to execute a function with this dictionary as an argument. Rather than passing the whole dictionary in a single call, I want to execute the function in batches - with x key-value pairs of the dictionary at a time.

This is what I am doing currently:

mydict = ...  # some large dict
x = ...       # batch size

def some_func(data):
    # do something with data
    ...

temp = {}
for key, value in mydict.iteritems():
    if len(temp) != 0 and len(temp) % x == 0:
        some_func(temp)
        temp = {}
        temp[key] = value
    else:
        temp[key] = value
if temp != {}:
    some_func(temp)

This looks very hacky to me. I would like to know if there is an elegant/better way of doing it.

3 Answers:

Answer 0 (score: 6)

I often use this little utility:

import itertools

def chunked(it, size):
    it = iter(it)
    while True:
        p = tuple(itertools.islice(it, size))
        if not p:
            break
        yield p

For your use case:

for chunk in chunked(big_dict.iteritems(), batch_size):
    func(chunk)
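Note that `iteritems()` is Python 2 only; on Python 3 the same helper works unchanged over the `items()` view. A minimal runnable sketch (the names `big_dict` and `batches` are illustrative, not from the original answer):

```python
import itertools

def chunked(it, size):
    """Yield successive tuples of at most `size` items from any iterable."""
    it = iter(it)
    while True:
        chunk = tuple(itertools.islice(it, size))
        if not chunk:
            return
        yield chunk

# In Python 3, dict.items() returns an iterable view, so pass it directly:
big_dict = {i: i * i for i in range(7)}
batches = list(chunked(big_dict.items(), 3))
# Each batch is a tuple of (key, value) pairs; only the last may be shorter.
```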

Answer 1 (score: 1)

Below are two solutions adapted from earlier answers of mine.

You can get the list of items from the dictionary and create new dicts from slices of that list. This is not optimal, though, because it does a lot of copying of the dictionary.

def chunks(dictionary, size):
    items = dictionary.items()
    return (dict(items[i:i+size]) for i in range(0, len(items), size))

Alternatively, you can use some functions from the itertools module to generate (yield) new sub-dictionaries as you loop. This is similar to @georg's answer, only using a for loop.

from itertools import chain, islice
def chunks(dictionary, size):
    iterator = dictionary.iteritems()
    for first in iterator:
        yield dict(chain([first], islice(iterator, size - 1)))

Example usage, for either version:

mydict = {i+1: chr(i+65) for i in range(26)}
for sub_d in chunks(mydict, 10):
    some_func(sub_d)
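On Python 3, `dict.iteritems()` is gone, so the second solution needs only a one-line change to iterate the `items()` view; a minimal sketch under that assumption, reusing the 26-entry example dictionary:

```python
from itertools import chain, islice

def chunks(dictionary, size):
    # iter() over the items view replaces Python 2's iteritems()
    iterator = iter(dictionary.items())
    for first in iterator:
        # `first` starts each chunk; islice pulls up to size-1 more pairs
        yield dict(chain([first], islice(iterator, size - 1)))

mydict = {i + 1: chr(i + 65) for i in range(26)}
sub_dicts = list(chunks(mydict, 10))
# Three sub-dicts of sizes 10, 10 and 6
```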

Answer 2 (score: 0)

From more-itertools:

from itertools import izip_longest

_marker = object()  # sentinel used to detect padding in the last group

def chunked(iterable, n):
    """Break an iterable into lists of a given length::
        >>> list(chunked([1, 2, 3, 4, 5, 6, 7], 3))
        [[1, 2, 3], [4, 5, 6], [7]]
    If the length of ``iterable`` is not evenly divisible by ``n``, the last
    returned list will be shorter.
    This is useful for splitting up a computation on a large number of keys
    into batches, to be pickled and sent off to worker processes. One example
    is operations on rows in MySQL, which does not implement server-side
    cursors properly and would otherwise load the entire dataset into RAM on
    the client.
    """
    # Doesn't seem to run into any number-of-args limits.
    for group in (list(g) for g in izip_longest(*[iter(iterable)] * n,
                                                fillvalue=_marker)):
        if group[-1] is _marker:
            # If this is the last group, shuck off the padding:
            del group[group.index(_marker):]
        yield group
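The snippet above is Python 2 code (`izip_longest`) excerpted from more-itertools; a self-contained Python 3 sketch of the same pad-and-trim idea, using `itertools.zip_longest`:

```python
from itertools import zip_longest

_marker = object()  # unique sentinel used to spot the padding

def chunked(iterable, n):
    """Break an iterable into lists of length n; the last may be shorter."""
    # zip_longest over n copies of one iterator pads the final group
    for group in (list(g) for g in zip_longest(*[iter(iterable)] * n,
                                               fillvalue=_marker)):
        if group[-1] is _marker:
            # Last group: strip off the sentinel padding
            del group[group.index(_marker):]
        yield group

result = list(chunked([1, 2, 3, 4, 5, 6, 7], 3))
# [[1, 2, 3], [4, 5, 6], [7]]
```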