I have two pandas Series of the same length:
S1 =
0 -0.483415
1 -0.514082
2 -0.515724
3 -0.519375
4 -0.505685
...
S2 =
1 -0.961871
2 -0.964762
3 -0.963798
4 -0.962112
5 -0.962028
...
I want to zip them into a numpy ndarray of ndarrays so that it looks like this:
<class 'numpy.ndarray'>
[[-0.483415 -0.961871]
[-0.514082 -0.964762]
[-0.515724 -0.963798]
...
]
If I wanted a list of tuples, I could write:
v = list(zip(S1, S2))
That gives me:
<class 'list'>
[(-0.48341467662344273, -0.961871075696243),
(-0.5140815458448855, -0.9647615371349125),
...
]
How can I perform the same "zip" but get an ndarray of ndarrays instead? I don't want a loop.
Answer 0 (score: 1)
zip is not necessary; for better performance, use numpy or pandas:
arr = np.hstack((S1.values[:, None], S2.values[:, None]))
Or:
arr = np.vstack((S1, S2)).T
Or:
arr = pd.concat([S1.reset_index(drop=True), S2.reset_index(drop=True)], axis=1).values
Or:
arr = np.c_[S1, S2]
print (arr)
[[-0.483415 -0.961871]
[-0.514082 -0.964762]
[-0.515724 -0.963798]
[-0.519375 -0.962112]
[-0.505685 -0.962028]]
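Another equivalent option, not listed in the original answer (np.stack is my own addition here), stacks the two 1-D arrays as columns:

import numpy as np

# .to_numpy() extracts the underlying values, ignoring the Series index;
# axis=1 stacks the two 1-D arrays side by side as columns of an (N, 2) array
arr = np.stack([S1.to_numpy(), S2.to_numpy()], axis=1)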
Performance:
#50k values
S1 = pd.concat([S1] * 10000, ignore_index=True)
S2 = pd.concat([S2] * 10000, ignore_index=True)
In [107]: %timeit arr = np.hstack((S1.values[:, None], S2.values[:, None]))
133 µs ± 15.9 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [108]: %timeit arr = np.vstack((S1, S2)).T
176 µs ± 12 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [109]: %timeit arr = pd.concat([S1.reset_index(drop=True), S2.reset_index(drop=True)], axis=1).values
1.49 ms ± 74.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [110]: %timeit arr = np.c_[S1, S2]
320 µs ± 10.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [111]: %timeit np.array(list(zip(S1, S2)))
33 ms ± 545 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
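One caveat the reset_index variant above hints at (this note is an addition, not part of the original answer): the two Series in the question have offset indexes (0-4 vs. 1-5). The numpy-based options pair values purely by position, while a pandas concat without reset_index aligns on the index and fills non-overlapping labels with NaN. A minimal sketch with made-up values:

import numpy as np
import pandas as pd

s1 = pd.Series([-0.48, -0.51], index=[0, 1])
s2 = pd.Series([-0.96, -0.97], index=[1, 2])

# positional pairing: the index is ignored, rows line up by order
print(np.c_[s1, s2])
# [[-0.48 -0.96]
#  [-0.51 -0.97]]

# index alignment: only label 1 appears in both, the rest become NaN
print(pd.concat([s1, s2], axis=1))
#       0     1
# 0 -0.48   NaN
# 1 -0.51 -0.96
# 2   NaN -0.97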
Answer 1 (score: 1)
Just convert it to an ndarray:
>>> a = [1,2,3,4]
>>> b = [5,6,7,8]
>>> c = list(zip(a, b))
>>> c
[(1, 5), (2, 6), (3, 7), (4, 8)]
>>> d = np.array(c)
>>> d
array([[1, 5],
[2, 6],
[3, 7],
[4, 8]])
>>> d.shape
(4, 2)
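Applied to the Series from the question, the same idea is a one-liner. Note that the timings in the first answer suggest this zip-based route is far slower than the vectorized stacking options on large inputs:

import numpy as np

# zip pairs the values element-wise; np.array then builds the (N, 2) array
arr = np.array(list(zip(S1, S2)))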
Answer 2 (score: 1)
Try:
numpy.column_stack((S1, S2))
I think it should work.
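A quick sanity check with small made-up inputs (a sketch, not the question's data) shows that column_stack produces the desired (N, 2) shape:

import numpy as np

a = np.column_stack(([1.0, 2.0], [3.0, 4.0]))
print(a)        # [[1. 3.]
                #  [2. 4.]]
print(a.shape)  # (2, 2)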