MATLAB matrix power algorithm

Time: 2017-05-06 21:18:54

Tags: python matlab scipy matrix-multiplication

I am looking to port an algorithm from MATLAB to Python. One step in the algorithm involves taking A^(-1/2), where A is a 9×9 square complex matrix. As far as I understand, square roots of matrices (and their inverses) are not unique.

I have been experimenting with scipy.linalg.fractional_matrix_power, and with an approximation based on A^(-1/2) = exp((-1/2)*log(A)) using the built-in expm and logm functions. The former is quite poor, giving only about 3 decimal places of precision, while the latter is correct for the upper-left elements but gets progressively worse moving down and to the right. This may or may not be a perfectly valid mathematical solution to the expression, but it is not good enough for this application.
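
For reference, a minimal sketch of how those two approaches can be compared, using a random Hermitian test matrix as a hypothetical stand-in for the 9×9 matrix in the MCVE below; the product B·A·B ≈ I is just one sanity check, not the full application:

import numpy as np
from scipy.linalg import fractional_matrix_power, expm, logm

# Hypothetical stand-in for the 9x9 complex matrix given in the MCVE below
rng = np.random.default_rng(0)
X = rng.standard_normal((9, 9)) + 1j * rng.standard_normal((9, 9))
A = X @ X.conj().T  # Hermitian positive definite, so A^(-1/2) is well defined

# Approach 1: scipy's fractional matrix power
B1 = fractional_matrix_power(A, -0.5)

# Approach 2: the exp/log identity A^(-1/2) = expm(-0.5 * logm(A))
B2 = expm(-0.5 * logm(A))

# Sanity check: B @ A @ B should be close to the identity for both
I = np.eye(9)
print(np.max(np.abs(B1 @ A @ B1 - I)))
print(np.max(np.abs(B2 @ A @ B2 - I)))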

Therefore, I would like to implement MATLAB's matrix power algorithm directly in Python, so that I can be 100% sure of getting identical results every time. Does anyone have any insight into, or documentation of, how this works? The more parallelizable the algorithm, the better, as the eventual goal is to rewrite it in OpenCL for GPU acceleration.
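
For what it's worth, one common way to compute a non-integer power of a diagonalizable matrix is via its eigendecomposition. This is not necessarily what MATLAB's ^ operator does internally; it is only a sketch, and the function name is illustrative:

import numpy as np

def inv_sqrt_via_eig(A):
    # Sketch: A^(-1/2) through an eigendecomposition, assuming A is diagonalizable.
    # The branch chosen for each eigenvalue's square root is exactly where the
    # non-uniqueness mentioned above comes from; this takes the principal branch.
    w, V = np.linalg.eig(A)                   # A = V @ diag(w) @ inv(V)
    D = np.diag(w.astype(complex) ** -0.5)    # principal branch of lambda^(-1/2)
    return V @ D @ np.linalg.inv(V)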

Edit: MCVE as requested:

[[(0.591557294607941+4.33680868994202e-19j), (-0.219707725574605-0.35810724986609j), (-0.121305654177909+0.244558388829046j), (0.155552026648172-0.0180264818714123j), (-0.0537690384136066-0.0630740244116577j), (-0.0107526931263697+0.0397896274845627j), (0.0182892503609312-0.00653264433724856j), (-0.00710188853532244-0.0050445035279044j), (-2.20414002823034e-05+0.00373184532662288j)], [(-0.219707725574605+0.35810724986609j), (0.312038814492119+2.16840434497101e-19j), (-0.109433401402399-0.174379997015402j), (-0.0503362231078033+0.108510948023091j), (0.0631826956936223-0.00992931123813742j), (-0.0219902325360141-0.0233215237172002j), (-0.00314837555001163+0.0148621558916679j), (0.00630295247506065-0.00266790359447072j), (-0.00249343102520442-0.00156160619280611j)], [(-0.121305654177909-0.244558388829046j), (-0.109433401402399+0.174379997015402j), (0.136649392858215-1.76182853028894e-19j), (-0.0434623984527311-0.0669251299161109j), (-0.0168737559719828+0.0393768358149159j), (0.0211288536117387-0.00417146769324491j), (-0.00734306979471257-0.00712443264825166j), (-0.000742681625102133+0.00455752452374196j), (0.00179068247786595-0.000862706240042082j)], [(0.155552026648172+0.0180264818714123j), (-0.0503362231078033-0.108510948023091j), (-0.0434623984527311+0.0669251299161109j), (0.0467980890488569+5.14996031930615e-19j), (-0.0140208255975664-0.0209483313237692j), (-0.00472995448413803+0.0117916398375124j), (0.00589653974090387-0.00134198920550751j), (-0.00202109265416585-0.00184021636458858j), (-0.000150793859056431+0.00116822322464066j)], [(-0.0537690384136066+0.0630740244116577j), (0.0631826956936223+0.00992931123813742j), (-0.0168737559719828-0.0393768358149159j), (-0.0140208255975664+0.0209483313237692j), (0.0136137125669776-2.03287907341032e-20j), (-0.00387854073283377-0.0056769786724813j), (-0.0011741038702424+0.00306007798625676j), (0.00144000687517355-0.000355251914809693j), (-0.000481433965262789-0.00042129815655098j)], [(-0.0107526931263697-0.0397896274845627j), (-0.0219902325360141+0.0233215237172002j), (0.0211288536117387+0.00417146769324491j), (-0.00472995448413803-0.0117916398375124j), (-0.00387854073283377+0.0056769786724813j), (0.00347771689075251+8.21621958836671e-20j), (-0.000944046302699304-0.00136521328407881j), (-0.00026318475762475+0.000704212317211994j), (0.00031422288569727-8.10033316327328e-05j)], [(0.0182892503609312+0.00653264433724856j), (-0.00314837555001163-0.0148621558916679j), (-0.00734306979471257+0.00712443264825166j), (0.00589653974090387+0.00134198920550751j), (-0.0011741038702424-0.00306007798625676j), (-0.000944046302699304+0.00136521328407881j), (0.000792908166233942-7.41153828847513e-21j), (-0.00020531962049495-0.000294952695922854j), (-5.36226164765808e-05+0.000145645628243286j)], [(-0.00710188853532244+0.00504450352790439j), (0.00630295247506065+0.00266790359447072j), (-0.000742681625102133-0.00455752452374196j), (-0.00202109265416585+0.00184021636458858j), (0.00144000687517355+0.000355251914809693j), (-0.00026318475762475-0.000704212317211994j), (-0.00020531962049495+0.000294952695922854j), (0.000162971629601464-5.39321759384574e-22j), (-4.03304806590714e-05-5.77159110863666e-05j)], [(-2.20414002823034e-05-0.00373184532662288j), (-0.00249343102520442+0.00156160619280611j), (0.00179068247786595+0.000862706240042082j), (-0.000150793859056431-0.00116822322464066j), (-0.000481433965262789+0.00042129815655098j), (0.00031422288569727+8.10033316327328e-05j), (-5.36226164765808e-05-0.000145645628243286j), (-4.03304806590714e-05+5.77159110863666e-05j), 
(3.04302590501313e-05-4.10281583826302e-22j)]]

1 Answer:

Answer 0 (score: 0):

I can think of two explanations, and in both cases I blame user error. In chronological order:

Theory #1 (the subtle one)

I suspect that you are copying the printed values of the input matrix from one code into the other. That is, you are throwing away double precision when you switch codes, and this error gets amplified during the inverse-square-root computation.

As proof, I compared MATLAB's inverse square root against the function you are using in Python. For reasons of size I will show a 3x3 example, but, spoiler warning, I did the same thing with a 9x9 random matrix and got two results with condition numbers 11.245754109790719 (MATLAB) and 11.245754109790818 (numpy). That should tell you something about how similar the results are, without having to save and load the actual matrices between the two codes. I do suggest you do that, though: the keywords are scipy.io.loadmat and savemat.

What I did was generate random data in Python (because that is what I prefer):

>>> import numpy as np
>>> print((np.random.rand(3,3) + 1j*np.random.rand(3,3)).tolist())
[[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]

By copying the same truncated output into both codes, I guarantee that the inputs correspond.

Example in MATLAB:

>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)]; [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)]; [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]];
>> A = M^(-0.5);
>> format long
>> disp(A)
  0.922112307438377 + 0.919346397931976i  0.108620882045523 - 0.649850434897895i -0.778737740194425 - 0.320654127149988i
 -0.423384022626231 - 0.842737730824859i  0.592015668030645 + 0.661682656423866i  0.529361991464903 - 0.388343838121371i
 -0.550789874427422 + 0.021129515921025i  0.472026152514446 - 0.502143106675176i  0.942976466768961 + 0.141839849623673i

>> cond(A)

ans =

   3.429368520364765

Example in Python:

>>> from scipy.linalg import fractional_matrix_power
>>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]

>>> A = fractional_matrix_power(M, -0.5)

>>> print(A)
[[ 0.92211231+0.9193464j   0.10862088-0.64985043j -0.77873774-0.32065413j]
 [-0.42338402-0.84273773j  0.59201567+0.66168266j  0.52936199-0.38834384j]
 [-0.55078987+0.02112952j  0.47202615-0.50214311j  0.94297647+0.14183985j]]

>>> np.linalg.cond(A)
3.4293685203647408

I suspect that if you scipy.io.loadmat the matrix into Python, do the computation, scipy.io.savemat the result and load that back into MATLAB, you will see an absolute error between the two results of less than 1e-12 (hopefully even less).
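
A minimal sketch of that round trip on the Python side (the .mat file names and the variable names A/B inside them are just placeholders):

import scipy.io
from scipy.linalg import fractional_matrix_power

# Load the matrix previously saved from MATLAB with e.g.  save('A_in.mat', 'A')
A = scipy.io.loadmat('A_in.mat')['A']       # placeholder file/variable names

B = fractional_matrix_power(A, -0.5)

# Save the result so MATLAB can read it back with  load('B_out.mat')
scipy.io.savemat('B_out.mat', {'B': B})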

Theory #2 (the facepalm one)

I suspect that you are using Python 2, and that your -1/2 power, thanks to integer division, is just a plain inverse:

>>> # python 3 below
>>> # python 3's // is python 2's /, i.e. integer division
>>> 1/2
0.5
>>> 1//2
0
>>> -1/2
-0.5
>>> -1//2
-1

So, if you are using Python 2, the call

fractional_matrix_power(M,-1/2)

actually computes the inverse of M. The obvious solution is to switch to Python 3. The less obvious solution is to keep using Python 2 (which you shouldn't, as the above demonstrates), but to put

from __future__ import division

at the top of every source file. This overrides the behaviour of the plain / division operator so that it matches the Python 3 version, and you will have one less thing to worry about.
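
A third option, if neither of the above is practical right away, is simply to write the exponent as a float literal, which behaves the same under Python 2 and Python 3. A tiny illustration with a trivial stand-in matrix:

import numpy as np
from scipy.linalg import fractional_matrix_power

M = 4.0 * np.eye(3)                      # trivial stand-in matrix, for illustration only
A = fractional_matrix_power(M, -0.5)     # -0.5 is a float literal, so no integer division
print(A)                                 # 0.5 * identity, the true inverse square root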