How do I factorize a list of tuples?

Asked: 2017-05-26 18:20:25

Tags: python pandas numpy

Definition
factorize: Map each unique object to a unique integer. Typically, the integers mapped to range from zero to n - 1, where n is the number of unique objects. Two variations are also typical. Type 1 numbers the objects in the order in which the unique objects are first encountered. Type 2 first sorts the unique objects and then applies the same process as in Type 1.
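
For instance, a small sketch contrasting the two types with pandas and NumPy (the toy list lst is mine, just for illustration, not from the question):

import numpy as np
import pandas as pd

lst = ['b', 'a', 'c', 'a']

# Type 1: labels assigned in order of first appearance.
pd.factorize(lst)[0]                    # array([0, 1, 2, 1])

# Type 2: unique values sorted first, then labelled.
pd.factorize(lst, sort=True)[0]         # array([1, 0, 2, 0])
np.unique(lst, return_inverse=True)[1]  # array([1, 0, 2, 0])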

Setup
Consider the list of tuples tups

tups = [(1, 2), ('a', 'b'), (3, 4), ('c', 5), (6, 'd'), ('a', 'b'), (3, 4)]

I want to factorize it into

[0, 1, 2, 3, 4, 1, 2]

I know of many ways to do this. However, I want to do it as efficiently as possible.

What I've tried

pandas.factorize, and got an error...

pd.factorize(tups)[0]

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-84-c84947ac948c> in <module>()
----> 1 pd.factorize(tups)[0]

//anaconda/envs/3.6/lib/python3.6/site-packages/pandas/core/algorithms.py in factorize(values, sort, order, na_sentinel, size_hint)
    553     uniques = vec_klass()
    554     check_nulls = not is_integer_dtype(original)
--> 555     labels = table.get_labels(values, uniques, 0, na_sentinel, check_nulls)
    556 
    557     labels = _ensure_platform_int(labels)

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_labels (pandas/_libs/hashtable.c:21804)()

ValueError: Buffer has wrong number of dimensions (expected 1, got 2)

numpy.unique, and got the wrong result (without an axis argument it flattens the converted 2D string array before finding uniques)...

np.unique(tups, return_inverse=1)[1]

array([0, 1, 6, 7, 2, 3, 8, 4, 5, 9, 6, 7, 2, 3])

I could use either of these on the hashes of the tuples

pd.factorize([hash(t) for t in tups])[0]

array([0, 1, 2, 3, 4, 1, 2])

Yay! That's what I wanted... so what's the problem?

First problem
Look at the performance drop from this technique

lst = [10, 7, 4, 33, 1005, 7, 4]

%timeit pd.factorize(lst * 1000)[0]
1000 loops, best of 3: 506 µs per loop

%timeit pd.factorize([hash(i) for i in lst * 1000])[0]
1000 loops, best of 3: 937 µs per loop

Second problem
Hashes are not guaranteed to be unique!
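
For instance (a CPython-specific illustration, not from the original post): hash(-1) and hash(-2) are equal, so two distinct tuples built from them collide and the hash-based factorization merges them.

import pandas as pd

# In CPython, hash(-1) == hash(-2) (== -2), so these distinct tuples share a hash.
collide = [(0, -1), (0, -2)]
print(hash(collide[0]) == hash(collide[1]))         # True
print(pd.factorize([hash(t) for t in collide])[0])  # [0 0]  -- wrongly treated as equal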

Question
What is a super fast way to factorize a list of tuples?

Timing
Both axes are in log space

[Timing plot: average seconds per method vs. Size, both axes log-scaled]

code

import numpy as np
import pandas as pd
from timeit import timeit
from itertools import count

def champ(tups):
    # plain dict + itertools.count, labels in order of first appearance
    d = {}
    c = count()
    return np.array(
        [d[tup] if tup in d else d.setdefault(tup, next(c)) for tup in tups]
    )

def root(tups):
    # pandas factorize on a Series of tuples
    return pd.Series(tups).factorize()[0]

def iobe(tups):
    # np.unique row-wise; labels follow sorted order
    return np.unique(tups, return_inverse=True, axis=0)[1]

def get_row_view(a):
    # view each row as a single void scalar so whole rows compare at once
    void_dt = np.dtype((np.void, a.dtype.itemsize * np.prod(a.shape[1:])))
    a = np.ascontiguousarray(a)
    return a.reshape(a.shape[0], -1).view(void_dt).ravel()

def diva(tups):
    # void-view rows + np.unique
    return np.unique(get_row_view(np.array(tups)), return_inverse=1)[1]

def gdib(tups):
    # factorize the string representation of each tuple
    return pd.factorize([str(t) for t in tups])[0]

from string import ascii_letters

def tups_creator_1(size, len_of_str=3, num_ints_to_choose_from=1000, seed=None):
    c = len_of_str
    n = num_ints_to_choose_from
    np.random.seed(seed)
    d = pd.DataFrame(np.random.choice(list(ascii_letters), (size, c))).sum(1).tolist()
    i = np.random.randint(n, size=size)
    return list(zip(d, i))

results = pd.DataFrame(
    index=pd.Index([100, 1000, 5000, 10000, 20000, 30000, 40000, 50000], name='Size'),
    columns=pd.Index('champ root iobe diva gdib'.split(), name='Method')
)

for i in results.index:
    tups = tups_creator_1(i, max(1, int(np.log10(i))), max(10, i // 10))
    for j in results.columns:
        stmt = '{}(tups)'.format(j)
        setup = 'from __main__ import {}, tups'.format(j)
        results.set_value(i, j, timeit(stmt, setup, number=100) / 100)

results.plot(title='Avg Seconds', logx=True, logy=True)

7 Answers:

Answer 0 (score: 7)

A simple way is to use a dict to keep track of previous visits:

>>> d = {}
>>> [d.setdefault(tup, i) for i, tup in enumerate(tups)]
[0, 1, 2, 3, 4, 1, 2]

If you need the numbering to be sequential, then a slight modification:

>>> from itertools import count
>>> c = count()
>>> [d[tup] if tup in d else d.setdefault(tup, next(c)) for tup in tups]
[0, 1, 2, 3, 4, 1, 2]

Or, written as:

>>> [d.get(tup) or d.setdefault(tup, next(c)) for tup in tups]
[0, 1, 2, 3, 4, 1, 2]

Answer 1 (score: 7)

Initialize your list of tuples as a Series, then call factorize:

pd.Series(tups).factorize()[0]

[0 1 2 3 4 1 2]

Answer 2 (score: 3)

@AChampion's use of setdefault made me wonder whether a defaultdict could be used for this problem. So, borrowing freely from AC's answer:

In [189]: tups = [(1, 2), ('a', 'b'), (3, 4), ('c', 5), (6, 'd'), ('a', 'b'), (3, 4)]

In [190]: import collections
In [191]: import itertools
In [192]: cnt = itertools.count()
In [193]: dd = collections.defaultdict(lambda : next(cnt))

In [194]: [dd[t] for t in tups]
Out[194]: [0, 1, 2, 3, 4, 1, 2]

Timings in other SO questions show that defaultdict is a bit slower than using setdefault directly. Still, the terseness of this approach is appealing.

In [196]: dd
Out[196]: 
defaultdict(<function __main__.<lambda>>,
            {(1, 2): 0, (3, 4): 2, ('a', 'b'): 1, (6, 'd'): 4, ('c', 5): 3})
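
To check the setdefault-vs-defaultdict comparison on your own machine, a rough sketch like the following should work (the setup and statement strings are mine; absolute numbers will differ):

from timeit import timeit

setup = """
import collections, itertools
tups = [(1, 2), ('a', 'b'), (3, 4), ('c', 5), (6, 'd'), ('a', 'b'), (3, 4)] * 1000
"""

stmt_setdefault = """
d = {}
c = itertools.count()
[d[t] if t in d else d.setdefault(t, next(c)) for t in tups]
"""

stmt_defaultdict = """
c = itertools.count()
dd = collections.defaultdict(lambda: next(c))
[dd[t] for t in tups]
"""

print('setdefault :', timeit(stmt_setdefault, setup, number=100))
print('defaultdict:', timeit(stmt_defaultdict, setup, number=100))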

Answer 3 (score: 2)

Approach #1

Convert each tuple to a row of a 2D array, view each of those rows as one scalar using the views concept of NumPy ndarrays, and finally factorize with np.unique(..., return_inverse=True) -

np.unique(get_row_view(np.array(tups)), return_inverse=1)[1]

get_row_view is taken from here.

Sample run -

In [23]: tups
Out[23]: [(1, 2), ('a', 'b'), (3, 4), ('c', 5), (6, 'd'), ('a', 'b'), (3, 4)]

In [24]: np.unique(get_row_view(np.array(tups)), return_inverse=1)[1]
Out[24]: array([0, 3, 1, 4, 2, 3, 1])

Approach #2

def argsort_unique(idx):
    # Original idea : https://stackoverflow.com/a/41242285/3293881 
    n = idx.size
    sidx = np.empty(n,dtype=int)
    sidx[idx] = np.arange(n)
    return sidx

def unique_return_inverse_tuples(tups):
    a = np.array(tups)
    sidx = np.lexsort(a.T)            # order that sorts the rows
    b = a[sidx]
    # flag sorted rows that differ from the previous row (start of a new group)
    mask0 = ~((b[1:,0] == b[:-1,0]) & (b[1:,1] == b[:-1,1]))
    ids = np.concatenate(([0], mask0))
    np.cumsum(ids, out=ids)           # group id for each sorted row
    return ids[argsort_unique(sidx)]  # scatter the ids back to the original order

Sample run -

In [69]: tups
Out[69]: [(1, 2), ('a', 'b'), (3, 4), ('c', 5), (6, 'd'), ('a', 'b'), (3, 4)]

In [70]: unique_return_inverse_tuples(tups)
Out[70]: array([0, 3, 1, 2, 4, 3, 1])

Answer 4 (score: 2)

I don't know about the timings, but a simple approach is to use numpy.unique along the appropriate axis:

tups = [(1, 2), ('a', 'b'), (3, 4), ('c', 5), (6, 'd'), ('a', 'b'), (3, 4)]
res = np.unique(tups, return_inverse=1, axis=0)
print(res)

which yields

(array([['1', '2'],
       ['3', '4'],
       ['6', 'd'],
       ['a', 'b'],
       ['c', '5']],
      dtype='|S11'), array([0, 3, 1, 4, 2, 3, 1], dtype=int64))

The array is sorted automatically, but that shouldn't be a problem.
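
If the order of first appearance does matter, one way to relabel the sorted codes is to also ask np.unique for return_index and re-rank them (a sketch; the relabelling step is mine, not from the answer):

import numpy as np

tups = [(1, 2), ('a', 'b'), (3, 4), ('c', 5), (6, 'd'), ('a', 'b'), (3, 4)]
_, first_idx, inv = np.unique(tups, return_index=True, return_inverse=True, axis=0)

# Rank each sorted-unique row by where it first appears in the input,
# then relabel the inverse through that ranking.
order = first_idx.argsort().argsort()
print(order[inv])   # [0 1 2 3 4 1 2]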

Answer 5 (score: 1)

I was going to give this answer:

pd.factorize([str(x) for x in tups])

However, after some testing, it turned out not to be the fastest. Since I've already done the work, here is the comparison:

@AChampion

%timeit [d[tup] if tup in d else d.setdefault(tup, next(c)) for tup in tups]
1000000 loops, best of 3: 1.66 µs per loop

@Divakar

%timeit np.unique(get_row_view(np.array(tups)), return_inverse=1)[1]
# 10000 loops, best of 3: 58.1 µs per loop

@self

%timeit pd.factorize([str(x) for x in tups])
# 10000 loops, best of 3: 65.6 µs per loop

@root

%timeit pd.Series(tups).factorize()[0] 
# 1000 loops, best of 3: 199 µs per loop

Edit

For larger data with 100K entries, we have:

tups = [(np.random.randint(0, 10), np.random.randint(0, 10)) for i in range(100000)]

@root

%timeit pd.Series(tups).factorize()[0] 
100 loops, best of 3: 10.9 ms per loop

@AChampion

%timeit [d[tup] if tup in d else d.setdefault(tup, next(c)) for tup in tups]

# 10 loops, best of 3: 16.9 ms per loop

@Divakar

%timeit np.unique(get_row_view(np.array(tups)), return_inverse=1)[1]
# 10 loops, best of 3: 81 ms per loop

@self

%timeit pd.factorize([str(x) for x in tups])
10 loops, best of 3: 87.5 ms per loop

Answer 6 (score: 0)

You can use SKLearn's MultiLabelBinarizer, which gives you a set of binary encodings:

from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
codes = mlb.fit_transform(np.array(tups)) # Must be passed as an array
>>> codes
array([[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 1, 1, 0, 0],
       [0, 0, 1, 1, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0, 1, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 0, 1, 1, 0, 0],
       [0, 0, 1, 1, 0, 0, 0, 0, 0, 0]])

These can be converted to decimal (if needed) with np.packbits(codes):

array([192,   0, 195,   0,  34,   4,  64, 195,   0], dtype=uint8)