I am trying to implement an ILU preconditioner in a GMRES code I wrote (to solve the linear system Ax = b). I am testing it with a simple 25x25 tridiagonal SPD matrix. As you can see, I compute the preconditioner with the spilu method. The code runs without errors, but the solution is clearly wrong: at the end of the code I print the norm of b and the norm of the product A*x, and they are nowhere near equal. The code works fine without the preconditioner, and for the same matrix it converges in 13 iterations.
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt

'Size controller'

matrixSize = 25

'Building a tri-diagonal matrix'

def Atridiag(val_0, val_sup, val_inf, mSize):
    cen = np.ones((1, mSize))*val_0
    sup = np.ones((1, mSize-1))*val_sup
    inf = np.ones((1, mSize-1))*val_inf
    diag_cen = np.diagflat(cen, 0)
    diag_sup = np.diagflat(sup, 1)
    diag_inf = np.diagflat(inf, -1)
    return diag_cen + diag_sup + diag_inf

A = Atridiag(2, -1, -1, matrixSize)
A = sp.sparse.csc_matrix(A)

'Plot matrix sparsity'
plt.clf()
plt.spy(A, marker='.', markersize=2)
plt.show()

'random b and x0 vectors'
b = np.matrix(np.ones((matrixSize, 1)))
x = np.matrix(np.ones((matrixSize, 1)))

'Incomplete LU'
M = sp.sparse.linalg.dsolve.spilu(A)
M1 = lambda x: M.solve(x)
M2 = sp.sparse.linalg.LinearOperator((matrixSize, matrixSize), M1)

'Initial Data'
nmax_iter = 30
rstart = 2
tol = 1e-7
e = np.zeros((nmax_iter + 1, 1))
rr = 1

'Starting GMRES'
for rs in range(0, rstart+1):

    'first check on residual'
    if rr < tol:
        break
    else:
        r0 = (b - A.dot(x))
        betha = np.linalg.norm(r0)
        e[0] = betha
        H = np.zeros((nmax_iter + 1, nmax_iter))
        V = np.zeros((matrixSize, nmax_iter+1))
        V[:, 0:1] = r0/betha

    for k in range(1, nmax_iter+1):

        'Applying the Preconditioner'
        t = A.dot(V[:, k-1])
        V[:, k] = M2.matvec(t)

        'Gram-Schmidt orthogonalization'
        for j in range(k):
            H[j, k-1] = np.dot(V[:, k].T, V[:, j])
            V[:, k] = V[:, k] - (np.dot(H[j, k-1], V[:, j]))

        H[k, k-1] = np.linalg.norm(V[:, k])
        V[:, k] = V[:, k] / H[k, k-1]

        'QR Decomposition'
        n = k
        Q = np.zeros((n+1, n))
        R = np.zeros((n, n))
        R[0, 0] = np.linalg.norm(H[0:n+2, 0])
        Q[:, 0] = H[0:n+1, 0] / R[0, 0]
        for j in range(0, n+1):
            t = H[0:n+1, j-1]
            for i in range(0, j-1):
                R[i, j-1] = np.dot(Q[:, i], t)
                t = t - np.dot(R[i, j-1], Q[:, i])
            R[j-1, j-1] = np.linalg.norm(t)
            Q[:, j-1] = t / R[j-1, j-1]

        g = np.dot(Q.T, e[0:k+1])
        Z = np.dot(np.linalg.inv(R), g)

        Res = e[0:n] - np.dot(H[0:n, 0:n], Z[0:n])
        rr = np.linalg.norm(Res)

        'second check on residual'
        if rr < tol:
            break

    'Updating the solution'
    x = x + np.dot(V[:, 0:k], Z)

print(sp.linalg.norm(b))
print(sp.linalg.norm(np.dot(A.todense(), x)))
Really hoping somebody can figure it out!!
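For reference, the same system can be solved with SciPy's built-in GMRES, using the spilu factorization wrapped in a LinearOperator as the preconditioner. This is a minimal independent sanity check (default tolerances, not the hand-written solver above):

```python
import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spla

n = 25
# Same tridiagonal SPD matrix as in the question: 2 on the diagonal, -1 off it
A = sps.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# ILU factorization wrapped as a preconditioner operator
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), ilu.solve)

x, info = spla.gmres(A, b, M=M)

print(info)                           # 0 means converged
print(np.linalg.norm(b - A.dot(x)))  # residual norm, should be tiny
```

Comparing the residual of the hand-written solver against this output makes it easy to tell whether the preconditioned iteration is actually converging.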
Answer 0 (score: 1)
Maybe it's too late, but for future reference:

When updating x, you forgot to multiply by the preconditioner:

    x = x + M2.dot(np.dot(V[:, 0:k], Z))  # M2.matvec() works the same

See here.

With this fix, the algorithm converges in 1 iteration.
Other comments:

You can pass M.solve directly instead of going through a lambda:

    M2 = sp.sparse.linalg.LinearOperator((matrixSize, matrixSize), M.solve)

Instead of comparing the norms of Ax and b separately, it is better to print the norm of their difference (the residual), since that gives a much more precise picture of the result:

    print(sp.linalg.norm(b - np.dot(A.todense(), x)))
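The last point is worth illustrating: two vectors can have identical norms while being completely different, so matching values of ||b|| and ||Ax|| prove nothing. A tiny sketch with toy vectors (not taken from the code above):

```python
import numpy as np

b  = np.array([1.0, 1.0])
Ax = np.array([1.0, -1.0])  # same norm as b, but wrong sign in one entry

# The two norms agree even though Ax != b:
print(np.linalg.norm(b))       # 1.4142...
print(np.linalg.norm(Ax))      # 1.4142...

# The residual norm exposes the error:
print(np.linalg.norm(b - Ax))  # 2.0
```

Only the residual norm ||b - Ax|| directly measures how well x solves the system.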