I want to soft-code this multidimensional array:
adjust = np.array([[0.,1.,0.],
[0.,1.,0.],
[0.,1.,0.],
[0.,1.,0.],
[0.,1.,0.],
[0.,1.,0.],
[0.,1.,0.],
[0.,1.,0.],
[0.,1.,0.],
[0.,1.,0.]])
I tried x = [0., 1., 0.] with np.repeat(x, 10, axis=0), but that repeats the elements within the same set of brackets []. Can np.repeat be used here, or some other numpy function?
Also, is it possible to soft-code
adjust = np.array([[0.,0.,1.,0.,0.],
[0.,0.,1.,0.,0.],
[0.,0.,1.,0.,0.]])
since I may later need to extend the zeros on the left and right by different amounts?
Answer 0 (score: 4)
You can add another axis to the array before repeating (note that we repeat n = 1000 times):
n = 1000
%timeit adjust = np.repeat(np.array([0., 1., 0.])[None, :], n, axis=0)
# 7.67 µs ± 940 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Or reshape and transpose the result of repeat():
%timeit adjust = np.repeat([0., 1., 0.], n, axis=0).reshape(3, -1).T
# 22.5 µs ± 1.13 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Or use broadcasting:
%timeit adjust = np.array([0., 1., 0.]) * np.ones(n)[:, None]
# 26.8 µs ± 880 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
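A related option not benchmarked in the answer (my addition, not part of the original) is np.broadcast_to, which produces a read-only broadcast view without copying any data:

```python
import numpy as np

n = 1000
row = np.array([0., 1., 0.])

# broadcast_to returns a zero-copy, read-only view of `row`
# stretched to shape (n, 3)
adjust = np.broadcast_to(row, (n, 3))

print(adjust.shape)            # (1000, 3)
print(adjust.flags.writeable)  # False -- it is a view, not a copy
```

Since no elements are copied, this is effectively free regardless of n; call np.array(adjust) if a writable copy is needed.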
For performance comparison, Allen's suggestion:
%timeit adjust = np.asarray([ 0., 1., 0.] * n).reshape(n,-1)
# 93.5 µs ± 7.87 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
And Divakar's suggestion:
%timeit adjust = np.tile(np.array([0., 1., 0.]), (n, 1))
# 11.1 µs ± 686 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Conclusion: np.repeat() after adding another axis is the fastest.
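To confirm the variants are interchangeable, here is a quick sketch (my addition) checking that all five timed expressions build the same (n, 3) array:

```python
import numpy as np

n = 10
row = np.array([0., 1., 0.])

a = np.repeat(row[None, :], n, axis=0)                # extra axis, then repeat
b = np.repeat(row, n, axis=0).reshape(3, -1).T        # repeat, reshape, transpose
c = row * np.ones(n)[:, None]                         # broadcasting
d = np.asarray([0., 1., 0.] * n).reshape(n, -1)       # Allen's list repetition
e = np.tile(row, (n, 1))                              # Divakar's np.tile

# every method yields an identical (n, 3) array
for m in (b, c, d, e):
    assert np.array_equal(a, m)
print(a.shape)  # (10, 3)
```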
Answer 1 (score: 1)
Repeat your list n times, put it in a numpy array, then reshape it into n rows.
np.asarray([ 0., 1., 0.]*10).reshape(10,-1)
Out[139]:
array([[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
...,
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.]])
The same works for your second array:
np.asarray([0.,0.,1.,0.,0.]*3).reshape(3,-1)
Out[140]:
array([[ 0., 0., 1., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 1., 0., 0.]])
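To address the second part of the question (extending the zeros on the left and right by different amounts), one possible sketch is a small helper built on np.pad and np.tile. The function name centered_adjust is hypothetical, not from either answer:

```python
import numpy as np

def centered_adjust(n_rows, left, right):
    """Build an (n_rows, left + 1 + right) array whose rows are
    `left` zeros, a single 1.0, then `right` zeros.
    (Hypothetical helper illustrating the parameterization.)"""
    # np.pad with the default 'constant' mode pads with zeros,
    # e.g. left=2, right=2 -> [0., 0., 1., 0., 0.]
    row = np.pad([1.0], (left, right))
    return np.tile(row, (n_rows, 1))

adjust = centered_adjust(3, 2, 2)
print(adjust)
```

Changing the left/right arguments later then only requires touching the call site, not the array literal.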
Timings:
%timeit np.asarray([ 0., 1., 0.]*10).reshape(10,-1)
The slowest run took 14.97 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 4.51 µs per loop
%timeit np.repeat([0., 1., 0.], 10, axis=0).reshape(3, -1).T
The slowest run took 4.44 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 11.3 µs per loop
%timeit np.array([0., 1., 0.]) * np.ones(10)[:, None]
The slowest run took 10.28 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 11.3 µs per loop