I was working with the following networkx code when I noticed something odd. In the first case I used the elementwise multiply ufunc (*) on a sparse matrix, and it unexpectedly gave me the correct degree sequence. When I do the same thing with an ordinary dense array, however, I get a 10 x 10 matrix, as expected, and np.dot(...) gives the correct result.
import numpy as np
import networkx as nx
ba = nx.barabasi_albert_graph(n=10, m=2)
A = nx.adjacency_matrix(ba)
# <10x10 sparse matrix of type '<class 'numpy.int64'>'
# with 32 stored elements in Compressed Sparse Row format>
A * np.ones(10)
# output: array([ 5., 3., 4., 5., 4., 3., 2., 2., 2., 2.])
nx.degree(ba)
# output {0: 5, 1: 3, 2: 4, 3: 5, 4: 4, 5: 3, 6: 2, 7: 2, 8: 2, 9: 2}
B = np.ones(100).reshape(10, 10)
B * np.ones(10)
# output:
# array([[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
np.dot(B, np.ones(10))
# array([ 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.])
I had expected that I would need np.dot(A, np.ones(10)), but that returns an array of ten 10 x 10 sparse matrices:
array([ <10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>,
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>], dtype=object)
What is the subtle difference here?
Answer 0 (score: 0)
For regular numpy arrays, * multiplies elementwise (with broadcasting). np.dot is the matrix product, i.e. a sum of products. For the np.matrix subclass, * is the matrix product (dot). A scipy sparse matrix is not a subclass of np.matrix, but it is modeled on it, so for sparse matrices * is the matrix product as well.
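A minimal sketch of the three behaviors (assuming scipy is available; the 3x3 values are only illustrative):
import numpy as np
from scipy import sparse

v = np.ones(3)
D = np.arange(9.).reshape(3, 3)     # plain ndarray
M = np.matrix(D)                    # np.matrix subclass
S = sparse.csr_matrix(D)            # sparse matrix, modeled on np.matrix

D * v                 # elementwise with broadcasting -> 3x3 array
np.dot(D, v)          # matrix product -> array([  3.,  12.,  21.])
M * np.ones((3, 1))   # * is the matrix product for np.matrix -> 3x1 matrix of row sums
S * v                 # * is the matrix product for sparse too -> array([  3.,  12.,  21.])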
In [694]: A = sparse.random(10,10,.2, format='csr')
In [695]: A
Out[695]:
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 20 stored elements in Compressed Sparse Row format>
In [696]: A * np.ones(10)
Out[696]:
array([ 0.6349177 , 0. , 1.25781168, 1.12021258, 2.43477065,
1.10407149, 1.95096264, 0.6253589 , 0.44242708, 0.50353061])
The sparse matrix also has a dot method, which behaves the same way:
In [698]: A.dot(np.ones(10))
Out[698]:
array([ 0.6349177 , 0. , 1.25781168, 1.12021258, 2.43477065,
1.10407149, 1.95096264, 0.6253589 , 0.44242708, 0.50353061])
The dense version:
In [699]: np.dot(A.A,np.ones(10))
Out[699]:
array([ 0.6349177 , 0. , 1.25781168, 1.12021258, 2.43477065,
1.10407149, 1.95096264, 0.6253589 , 0.44242708, 0.50353061])
I thought np.dot was supposed to handle sparse matrices correctly by deferring to their own methods. But np.dot(A, np.ones(10)) doesn't do that; it produces an object array of sparse matrices instead. I could dig into why, but for now, just avoid it.
In general, use the sparse functions and methods when working with sparse matrices. Don't assume that numpy functions will handle them correctly.
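Applied to the question's graph, a short sketch of the sparse-native ways to get the degree sequence (this assumes, as in the question, that nx.adjacency_matrix returns a scipy sparse matrix; newer networkx/scipy versions may return a sparse array, where * is elementwise instead):
import numpy as np
import networkx as nx

ba = nx.barabasi_albert_graph(n=10, m=2)
A = nx.adjacency_matrix(ba)               # scipy sparse matrix here

deg1 = A * np.ones(10)                    # * is the sparse matrix product
deg2 = A.dot(np.ones(10))                 # the sparse dot method
deg3 = np.asarray(A.sum(axis=1)).ravel()  # row sums via the sparse sum method
# all three match the degree sequence reported by nx.degree(ba)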
That said, np.dot works fine when both arguments are sparse:
In [702]: np.dot(A,A)
Out[702]:
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 32 stored elements in Compressed Sparse Row format>
In [703]: np.dot(A,A.T)
Out[703]:
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 31 stored elements in Compressed Sparse Row format>
In [705]: np.dot(A, sparse.csr_matrix(np.ones(10)).T)
Out[705]:
<10x1 sparse matrix of type '<class 'numpy.float64'>'
with 9 stored elements in Compressed Sparse Row format>
In [706]: _.A
Out[706]:
array([[ 0.6349177 ],
[ 0. ],
[ 1.25781168],
[ 1.12021258],
[ 2.43477065],
[ 1.10407149],
[ 1.95096264],
[ 0.6253589 ],
[ 0.44242708],
[ 0.50353061]])
For what it's worth, the sparse sum is performed with exactly this kind of matrix product:
In [708]: A.sum(axis=1)
Out[708]:
matrix([[ 0.6349177 ],
[ 0. ],
[ 1.25781168],
[ 1.12021258],
[ 2.43477065],
[ 1.10407149],
[ 1.95096264],
[ 0.6253589 ],
[ 0.44242708],
[ 0.50353061]])
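As a quick sanity check (a sketch, assuming a scipy sparse matrix like the A used above), the row sum and the matrix product with a ones vector agree:
import numpy as np
from scipy import sparse

A = sparse.random(10, 10, 0.2, format='csr')
rowsum = np.asarray(A.sum(axis=1)).ravel()  # A.sum returns an np.matrix; flatten it
matvec = A * np.ones(A.shape[1])            # the same thing as a matrix-vector product
np.allclose(rowsum, matvec)                 # True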