I want to compute the dot product of a matrix X and Adjugate(X) with NumPy in Python, and I tried two methods I found here. Both methods give the same adjugate, but when I perform the dot product they give different answers. Here is the code:
Then I tried to verify them by comparing against the inverse of X, and this time every version of the inverse of X gave the same value.
Code:
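That inverse check can be sketched in a self-contained way (this is not from the original post; it rebuilds the cofactor matrix with `np.delete` and compares adj(X)/det(X) against `np.linalg.inv` using a floating-point tolerance instead of exact equality):

```python
import numpy as np

X = np.array([[-3, 2, -5], [-1, 0, -2], [3, -4, 1]])

# cofactor matrix via np.delete: remove row r and column c to get each minor
n = X.shape[0]
C = np.array([[(-1) ** (r + c)
               * np.linalg.det(np.delete(np.delete(X, r, axis=0), c, axis=1))
               for c in range(n)] for r in range(n)])
adj = C.T  # adjugate is the transpose of the cofactor matrix

# adj(X) / det(X) should match inv(X) up to floating-point rounding
print(np.allclose(adj / np.linalg.det(X), np.linalg.inv(X)))  # True
```

Because every determinant here is computed in floating point, `np.allclose` is the right comparison; exact `==` on the entries would generally fail.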
import numpy as np

# first method: build each cofactor from the minor of A
def CM(A):
    row, col = A.shape
    minor = np.zeros((row - 1, col - 1))
    cofactor = list()
    for r in range(row):
        for c in range(col):
            minor[:r, :c] = A[:r, :c]
            minor[r:, :c] = A[r+1:, :c]
            minor[:r, c:] = A[:r, c+1:]
            minor[r:, c:] = A[r+1:, c+1:]
            cofactor.append(np.linalg.det(minor) * (-1) ** (r + c))
    return np.array(cofactor).reshape(3, 3)

# second method: cofactor matrix from the inverse
def CM1(A):
    return np.linalg.inv(A).T * np.linalg.det(A)

# define X
X = np.array([[-3, 2, -5], [-1, 0, -2], [3, -4, 1]])
print("Output:\n")
print("X =\n", X)

# the adjugate from both methods gives the same answer
print("\nAdjugate(X) =\n", CM(X).T)
print("\nAdj1(X) =\n", CM1(X).T)

# but when I perform the dot product, different answers are given
print("\nX dot Adjugate(X) =\n", X.dot(CM(X).T))
print("\nX dot Adj1(X) =\n", X.dot(CM1(X).T))
The output:
Output:
X =
[[-3 2 -5]
[-1 0 -2]
[ 3 -4 1]]
Adjugate(X) =
[[-8. 18. -4.]
[-5. 12. -1.]
[ 4. -6. 2.]]
Adj1(X) =
[[-8. 18. -4.]
[-5. 12. -1.]
[ 4. -6. 2.]]
X dot Adjugate(X) =
[[-6.00000000e+00 1.42108547e-14 0.00000000e+00]
[-1.77635684e-15 -6.00000000e+00 0.00000000e+00]
[ 1.06581410e-14 -1.42108547e-14 -6.00000000e+00]]
X dot Adj1(X) =
[[-6.00000000e+00 6.21724894e-15 2.22044605e-15]
[ 1.77635684e-15 -6.00000000e+00 8.88178420e-16]
[-4.44089210e-15 -6.21724894e-15 -6.00000000e+00]]
Can someone explain to me why this happens?
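For reference, in exact arithmetic both products equal det(X)·I = −6·I, so the differing off-diagonal entries on the order of 1e-14 and 1e-15 are floating-point rounding: the two methods reach the adjugate through different sequences of operations (many small determinants vs. one inverse), so their rounding errors differ. A sketch of checking this with a tolerance-based comparison (assuming only NumPy; the variable names are mine, not from the post):

```python
import numpy as np

X = np.array([[-3, 2, -5], [-1, 0, -2], [3, -4, 1]])

# second method from the post: cofactor matrix, then transpose to get adj(X)
cof = np.linalg.inv(X).T * np.linalg.det(X)
product = X.dot(cof.T)

# in exact arithmetic X @ adj(X) = det(X) * I, so compare with a tolerance
expected = np.linalg.det(X) * np.eye(3)
print(np.allclose(product, expected))  # True
```

Under `np.allclose`'s default tolerances the ~1e-14 residuals vanish, so both of the original results are "the same" answer up to rounding, even though their printed digits differ.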