Using scipy.optimize.root

Date: 2017-02-05 09:06:00

Tags: numpy scipy sparse-matrix mathematical-optimization equation-solving

I would like to solve the following system of nonlinear equations:

K·x + Σ_{k=1}^{N} alpha_k (A_k·x + a_k) = 0

(1/2) x·(A_k·x) + a_k·x = 0,   for each k = 1, …, N

Notes

  • The dot between a_k and x denotes the dot product.
  • The 0 in the first equation is the zero vector, while the 0 in the second equation is the scalar 0.
  • In case it matters, all of the matrices are sparse.

Known

  • K is an n x n (positive definite) matrix.
  • Each A_k is a known (symmetric) matrix.
  • Each a_k is a known n x 1 vector.
  • N is known (say, N = 50), but I need an approach in which N can be changed easily.

Unknowns (what we are trying to solve for)

  • x, an n x 1 vector.
  • Each alpha_k, 1 <= k <= N, a scalar.

My idea

I am thinking of using scipy's root to find x and each alpha_k. Each row of the first equation gives us n equations, and the constraint equations give us another N, so we have the n + N equations needed to solve for our n + N unknowns.

I also have reliable initial guesses for x and the alpha_k's.

Toy example

import numpy as np
import scipy

n = 4
N = 2
K = np.matrix([[0.5, 0, 0, 0], [0, 1, 0, 0],[0,0,1,0], [0,0,0,0.5]])
A_1 = np.matrix([[0.98,0,0.46,0.80],[0,0,0.56,0],[0.93,0.82,0,0.27],[0,0,0,0.23]])
A_2 = np.matrix([[0.23, 0,0,0],[0.03,0.01,0,0],[0,0.32,0,0],[0.62,0,0,0.45]])
a_1 = np.matrix(scipy.rand(4,1))
a_2 = np.matrix(scipy.rand(4,1))

We are trying to solve for

 x = [x1, x2, x3, x4] and alpha_1, alpha_2
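For sizes beyond this toy example, one way to build test data of the right shapes for arbitrary n and N is sketched below, assuming random sparse matrices are acceptable for experimentation (the helper make_test_data and its density and seed parameters are illustrative, not part of the original problem):

import numpy as np
import scipy.sparse as sp

def make_test_data(n, N, density=0.1, seed=0):
    '''Generate a K, a list of A_k's and a list of a_k's of the right shapes
    (random test data only).'''
    rng = np.random.RandomState(seed)

    # K: sparse and positive definite; here simply a positive diagonal.
    K = sp.diags(rng.rand(n) + 0.5)

    # Each A_k: sparse, symmetrized to match the problem statement.
    A = []
    for _ in range(N):
        M = sp.random(n, n, density=density, random_state=rng)
        A.append(0.5 * (M + M.T))

    # Each a_k: an n x 1 vector.
    a = [rng.rand(n, 1) for _ in range(N)]

    return K, A, a

Because A and a are plain lists, scaling to n = 50 and N = 50 only means changing the two size arguments; nothing else in the setup depends on N.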

Questions:

  1. I could hard-code this toy problem and feed it to the solver, but how should I set it up so that it scales easily to, say, the case of n = 50 and N = 50?
  2. Will I have to compute the Jacobian explicitly for the larger matrices? (A sketch of what it could look like follows this list.)
  3. Can anyone give me any pointers?
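On the Jacobian question, here is a minimal sketch of what the analytic Jacobian of this system could look like, assuming the unknowns are stacked into a single vector as [x; alpha] (the same layout the answer below uses); the function name jacobian is illustrative:

import numpy as np

def jacobian(x_alpha, K, A, a):
    '''Jacobian of the stacked system: rows are [vector equation; N constraints],
    columns are [x; alpha].'''
    n = K.shape[0]
    N = len(A)
    x = np.ravel(x_alpha[:n])
    alpha = np.ravel(x_alpha[n:])

    J = np.zeros((n + N, n + N))

    # d(K x + sum_k alpha_k (A_k x + a_k)) / dx = K + sum_k alpha_k A_k
    J[:n, :n] = np.asarray(K) + sum(alpha[k] * np.asarray(A[k]) for k in range(N))

    for k in range(N):
        Ak = np.asarray(A[k])
        ak = np.ravel(a[k])
        # d(vector equation) / d alpha_k = A_k x + a_k
        J[:n, n + k] = Ak.dot(x) + ak
        # d(0.5 x.(A_k x) + a_k.x) / dx = 0.5 (A_k + A_k^T) x + a_k
        # (this reduces to A_k x + a_k when A_k is symmetric)
        J[n + k, :n] = 0.5 * (Ak + Ak.T).dot(x) + ak
        # d(constraints) / d alpha is identically zero, so that block stays 0.

    return J

Fixing K, A and a with a closure, it could be supplied as, for example, root(lhs, x_alpha_0, jac=lambda z: jacobian(z, K, A, a)). Since the constraint rows only involve x and the top-left block is K + sum_k alpha_k A_k, the Jacobian inherits the sparsity of K and the A_k, which should help at n = N = 50.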

1 Answer:

Answer 0 (score: 1)

I think the scipy.optimize.root approach will get the job done, but for this system of equations, steering clear of the trivial solution may be the real challenge.

In any case, this function uses root to solve the system of equations.

import numpy as np
import scipy
from scipy.optimize import root

def solver(x0, alpha0, K, A, a):
    '''
    x0     - nx1 numpy array. Initial guess on x.
    alpha0 - Nx1 numpy array. Initial guess on alpha.
    K      - nxn numpy.array.
    A      - Length N list of nxn numpy.arrays.
    a      - Length N list of nx1 numpy.arrays.
    '''

    # Establish the function that produces the lhs of the system of equations.
    n = K.shape[0]
    N = len(A)

    def lhs(x_alpha):
        '''
        x_alpha is a concatenation of x and alpha.
        '''

        x = np.ravel(x_alpha[:n])
        alpha = np.ravel(x_alpha[n:])

        # First (vector) equation: K x + sum_k alpha_k (A_k x + a_k).
        lhs_top = np.ravel(K.dot(x))
        for k in xrange(N):
            lhs_top += alpha[k]*(np.ravel(np.dot(A[k], x)) + np.ravel(a[k]))

        # Constraint equations: 0.5 x.(A_k x) + a_k.x for each k.
        lhs_bottom = [0.5*x.dot(np.ravel(A[k].dot(x))) + np.ravel(a[k]).dot(x)
                      for k in xrange(N)]

        lhs = np.array(lhs_top.tolist() + lhs_bottom)

        return lhs

    # Solve the system of equations.
    x0.shape = (n, 1)
    alpha0.shape = (N, 1)
    x_alpha_0 = np.vstack((x0, alpha0))
    sol = root(lhs, x_alpha_0)
    x_alpha_root = sol['x']

    # Compute norm of residual.
    res = sol['fun']
    res_norm = np.linalg.norm(res)

    # Break out the x and alpha components.
    x_root = x_alpha_root[:n]
    alpha_root = x_alpha_root[n:]

    return x_root, alpha_root, res_norm

However, running it on the toy example only produces the trivial solution.

# Toy example.
n = 4
N = 2
K = np.matrix([[0.5, 0, 0, 0], [0, 1, 0, 0],[0,0,1,0], [0,0,0,0.5]])
A_1 = np.matrix([[0.98,0,0.46,0.80],[0,0,0.56,0],[0.93,0.82,0,0.27],[0,0,0,0.23]])
A_2 = np.matrix([[0.23, 0,0,0],[0.03,0.01,0,0],[0,0.32,0,0],[0.62,0,0,0.45]])
a_1 = np.matrix(scipy.rand(4,1))
a_2 = np.matrix(scipy.rand(4,1))
A = [A_1, A_2]
a = [a_1, a_2]
x0 = scipy.rand(n, 1)
alpha0 = scipy.rand(N, 1)

print 'x0 =', x0
print 'alpha0 =', alpha0

x_root, alpha_root, res_norm = solver(x0, alpha0, K, A, a)

print 'x_root =', x_root
print 'alpha_root =', alpha_root
print 'res_norm =', res_norm

Output

x0 = [[ 0.00764503]
 [ 0.08058471]
 [ 0.88300129]
 [ 0.85299622]]
alpha0 = [[ 0.67872815]
 [ 0.69693346]]
x_root = [  9.88131292e-324  -4.94065646e-324   0.00000000e+000        
          0.00000000e+000]
alpha_root = [ -4.94065646e-324   0.00000000e+000]
res_norm = 0.0
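Since root collapses to the trivial solution x = 0, alpha = 0 here, one possible workaround (only a suggestion, not part of the original answer) is to restart the solver above from several random starting points and keep the first converged root that is not numerically zero; the helper below and its n_restarts, tol and seed parameters are illustrative:

import numpy as np

def solve_avoiding_trivial(K, A, a, n_restarts=20, tol=1e-8, seed=0):
    '''Call solver() from several random starting points and return the first
    converged solution whose x component is not (numerically) the zero vector.'''
    n = K.shape[0]
    N = len(A)
    rng = np.random.RandomState(seed)

    for _ in range(n_restarts):
        x0 = rng.rand(n, 1)
        alpha0 = rng.rand(N, 1)
        x_root, alpha_root, res_norm = solver(x0, alpha0, K, A, a)
        if res_norm < tol and np.linalg.norm(x_root) > tol:
            return x_root, alpha_root, res_norm

    return None  # only the trivial root was found from these starting points

Whether a nonzero root exists at all depends on K, the A_k and the a_k, so this is only a heuristic.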