Matlab's bsxfun() - what explains the performance differences when expanding along different dimensions?

Time: 2015-05-23 04:06:53

Tags: performance matlab matrix vectorization bsxfun

In my work (econometrics/statistics), I frequently have to multiply matrices of different sizes and then perform additional operations on the resulting matrix. I have always relied on bsxfun() to vectorize the code, which in general I find more efficient than repmat(). But what I don't understand is why the performance of bsxfun() can sometimes differ so much when it expands matrices along different dimensions.
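For readers less familiar with the two functions, here is a minimal sketch (my own illustration, not from the original post) of the same broadcasted multiplication written with bsxfun() and with repmat(); the sizes and variable names are made up for the example:

a  = rand(100000, 1);                              % column vector
b  = rand(1, 5);                                   % row vector
c1 = bsxfun(@times, a, b);                         % implicit expansion of the singleton dimensions
c2 = repmat(a, 1, 5) .* repmat(b, 100000, 1);      % explicit expansion, more memory traffic
isequal(c1, c2)                                    % true - both give the 100000 x 5 outer product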

Consider this concrete example:

Context

We have data from m markets, and in each market we want to compute the expectation of exp(X * beta), where X is a j x k matrix and beta is a k x 1 random vector. We compute this expectation by Monte Carlo integration - take s draws of beta, compute exp(X * beta) for each draw, and then take the average. Typically we get data with m > k > j, and we use a very large s. In this example I simply let X be a matrix of ones.

Here is the for-loop version of the calculation:

x         = ones(j, k, m);
beta      = rand(k, m, s);
exp_xBeta = zeros(j, m, s);
for im = 1 : m
    for is = 1 : s
        xBeta                = x(:, :, im) * beta(:, im, is);
        exp_xBeta(:, im, is) = exp(xBeta);
    end
end
y = mean(exp_xBeta, 3);

I wrote 3 vectorized versions using bsxfun(); they differ only in how X and beta are shaped:

Vectorization 1

x1      = x;                                    % size [ j k m 1 ]
beta1   = permute(beta, [4 1 2 3]);             % size [ 1 k m s ]

tic
xBeta       = bsxfun(@times, x1, beta1);
exp_xBeta   = exp(sum(xBeta, 2));
y1          = permute(mean(exp_xBeta, 4), [1 3 2 4]);   % size [ j m ]
time1       = toc;

Vectorization 2

x2      = permute(x, [4 1 2 3]);                % size [ 1 j k m ]
beta2   = permute(beta, [3 4 1 2]);             % size [ s 1 k m ]

tic
xBeta       = bsxfun(@times, x2, beta2);
exp_xBeta   = exp(sum(xBeta, 3));
y2          = permute(mean(exp_xBeta, 1), [2 4 1 3]);   % size [ j m ]
time2       = toc;

Vectorization 3

x3      = permute(x, [2 1 3 4]);                % size [ k j m 1 ]
beta3   = permute(beta, [1 4 2 3]);             % size [ k 1 m s ]

tic
xBeta       = bsxfun(@times, x3, beta3);
exp_xBeta   = exp(sum(xBeta, 1));
y3          = permute(mean(exp_xBeta, 4), [2 3 1 4]);    % size [ j m ]
time3       = toc;

And this is how they perform (typically we get data with m > k > j, and we use a very large s):

j = 5, k = 15, m = 100, s = 2000:

For-loop version took 0.7286 seconds.
Vectorized version 1 took 0.0735 seconds.
Vectorized version 2 took 0.0369 seconds.
Vectorized version 3 took 0.0503 seconds.

j = 10, k = 15, m = 150, s = 5000:

For-loop version took 2.7815 seconds.
Vectorized version 1 took 0.3565 seconds.
Vectorized version 2 took 0.2657 seconds.
Vectorized version 3 took 0.3433 seconds.

j = 15, k = 35, m = 150, s = 5000:

For-loop version took 3.4881 seconds.
Vectorized version 1 took 1.0687 seconds.
Vectorized version 2 took 0.8465 seconds.
Vectorized version 3 took 0.9414 seconds.

Why is version 2 consistently the fastest? Initially I thought the advantage came from s being assigned to dimension 1, which Matlab might be able to compute faster since it stores data in column-major order. But Matlab's profiler told me that the time spent computing the mean was rather insignificant and was more or less the same across all 3 versions. Matlab spent most of the time evaluating the line with bsxfun(), and that is also where the run-time differed most among the 3 versions.
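For reference, a minimal sketch (my own, not part of the original post) of how this kind of per-line timing can be inspected; run_all_versions is a hypothetical wrapper around the code above:

profile on
run_all_versions();   % hypothetical wrapper that runs the for-loop and the 3 vectorized versions
profile viewer        % opens the profiler report with the time spent on each line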

Why is version 1 always the slowest and version 2 always the fastest?

I have updated my test code here: Code

EDIT: An earlier version of this post was incorrect. beta should be of size (k, m, s), as in the code above.

3 Answers:

Answer 0 (score: 3):

bsxfun is certainly one of the good tools for vectorizing things, but if you can somehow introduce matrix-multiplication, that would be the best way to go about it, because matrix multiplications are really fast on MATLAB.

Here, you can use matrix-multiplication to get exp_xBeta -

[m1,n1,r1] = size(x);
n2 = size(beta,2);
exp_xBeta_matmult = reshape(exp(reshape(permute(x,[1 3 2]),[],n1)*beta),m1,r1,n2)

Or get y directly, like so -

y_matmult = reshape(mean(exp(reshape(permute(x,[1 3 2]),[],n1)*beta),2),m1,r1)

Explanation:

To explain it in more detail, let the sizes be -

x      : (j, k, m)
beta   : (k, s)

Our final goal is to "eliminate" k through matrix-multiplication between x and beta. So, we can "push" the k in x to the end with permute and then reshape to 2D keeping k as the columns, i.e. (j * m, k), and perform matrix multiplication with beta (k, s) to get (j * m, s). The product can then be reshaped to a 3D array (j, m, s), and taking the elementwise exponential gives exp_xBeta.

Now, if the final goal is y, i.e. the mean along the third dimension of exp_xBeta, that is equivalent to averaging the (j * m, s) matrix-multiplication product over its second (s) dimension and then reshaping it to (j, m) to get y directly.
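As a quick sanity check, here is a small sketch (my own, with made-up sizes) comparing this matrix-multiplication route against a plain double loop; note that this answer assumes beta has size (k, s), i.e. the same draws are shared across markets:

j = 3; k = 4; m = 5; s = 6;
x    = rand(j, k, m);
beta = rand(k, s);
% reference: straightforward double loop
ref = zeros(j, m, s);
for im = 1 : m
    for is = 1 : s
        ref(:, im, is) = exp(x(:, :, im) * beta(:, is));
    end
end
% matrix-multiplication version from above
[m1, n1, r1] = size(x);
n2 = size(beta, 2);
exp_xBeta_matmult = reshape(exp(reshape(permute(x, [1 3 2]), [], n1) * beta), m1, r1, n2);
max(abs(ref(:) - exp_xBeta_matmult(:)))   % should be ~0, up to floating-point rounding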

Answer 1 (score: 1):

I did some experiments this morning. It seems to be related to the fact that Matlab stores data in column-major order.

While running these experiments, I also added a vectorized version 4, which does the same thing but orders the dimensions slightly differently from versions 1-3.

To summarize, here is how x and beta are ordered in all 4 versions:

Vectorization 1:

x       :   (j, k, m, 1)
beta    :   (1, k, m, s)

Vectorization 2:

x       :   (1, j, k, m)
beta    :   (s, 1, k, m)

Vectorization 3:

x       :   (k, j, m, 1)
beta    :   (k, 1, m, s)

Vectorization 4:

x       :   (1, k, j, m)
beta    :   (s, k, 1, m)

Code: bsxfun_test.m
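bsxfun_test.m is not reproduced in this post; based on the shapes listed above, Vectorization 4 presumably looks something like the following sketch (my reconstruction, not the author's actual code):

x4    = permute(x, [4 2 1 3]);                         % size [ 1 k j m ]
beta4 = permute(beta, [3 1 4 2]);                      % size [ s k 1 m ]

tic
xBeta       = bsxfun(@times, x4, beta4);               % size [ s k j m ]
exp_xBeta   = exp(sum(xBeta, 2));                      % sum over k (dimension 2)
y4          = permute(mean(exp_xBeta, 1), [3 4 1 2]);  % size [ j m ]
time4       = toc;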

The two most expensive operations in this code are:

(a) xBeta = bsxfun(@times, x, beta);

(b) exp_xBeta = exp(sum(xBeta, dimK));

where dimK is the dimension corresponding to k.

In (a), bsxfun() has to expand x along the s dimension and beta along the j dimension. When s is much larger than the other dimensions, we should see some performance benefit in vectorizations 2 and 4, since they assign s to dimension 1.

j = 100; k = 100; m = 100; s = 1000;

Vectorized version 1 took 2.4719 seconds.
Vectorized version 2 took 2.1419 seconds.
Vectorized version 3 took 2.5071 seconds.
Vectorized version 4 took 2.0825 seconds.

If s becomes trivial and k is large instead, then vectorization 3 should be the fastest, since it puts k in dimension 1:

j = 10; k = 10000; m = 100; s = 1;

Vectorized version 1 took 0.0329 seconds.
Vectorized version 2 took 0.1442 seconds.
Vectorized version 3 took 0.0253 seconds.
Vectorized version 4 took 0.1415 seconds.

If we swap the values of k and j in the last example, vectorization 1 becomes the fastest, since j is assigned to dimension 1:

j = 10000; k = 10; m = 100; s = 1;

Vectorized version 1 took 0.0316 seconds.
Vectorized version 2 took 0.1402 seconds.
Vectorized version 3 took 0.0385 seconds.
Vectorized version 4 took 0.1608 seconds.

But generally, when k and j are close, j > k does not necessarily mean that vectorization 1 is faster than vectorization 3, since the operations performed in (a) and (b) are different.

In practice, I often have to run the computation with s >> m > k > j. In such cases, it seems that ordering the dimensions as in vectorization 2 or 4 gives the best results:

    j = 10; k = 30; m = 100; s = 5000;

Vectorized version 1 took 0.4621 seconds.
Vectorized version 2 took 0.3373 seconds.
Vectorized version 3 took 0.3713 seconds.
Vectorized version 4 took 0.3533 seconds.

    j = 15; k = 50; m = 150; s = 5000;

Vectorized version 1 took 1.5416 seconds.
Vectorized version 2 took 1.2143 seconds.
Vectorized version 3 took 1.2842 seconds.
Vectorized version 4 took 1.2684 seconds.

Takeaway: if bsxfun() has to expand along a dimension that is much larger than the others, assign that dimension to dimension 1!

Answer 2 (score: 1):

See this other question and answer.

If you are working with matrices of different dimensions using bsxfun, make sure the largest dimension of the matrices is kept as the first dimension.

Here is my small example test:

%// Inputs
%// Taking one very big and one small vector, so that the difference could be seen clearly
a = rand(1000000,1);
b = rand(1,5);

%//---------------- testing with inbuilt function
%// preferred orientation [1]
t1 = timeit(@() bsxfun(@times, a, b))

%// not preferred [2]
t2 = timeit(@() bsxfun(@times, b.', a.'))

%//---------------- testing with anonymous function
%// preferred orientation [1]
t3 = timeit(@() bsxfun(@(x,y) x*y, a, b))

%// not preferred [2]
t4 = timeit(@() bsxfun(@(x,y) x*y, b.', a.'))

[1] Preferred orientation - the larger dimension as the first dimension
[2] Not preferred - the smaller dimension as the first dimension


A small note: even though their dimensions differ, all four methods give the same output values.
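A small sketch (my addition) of that check, reusing a and b from above; the two orientations produce the same values, just in a transposed layout:

c1 = bsxfun(@times, a, b);      % size 1000000 x 5
c2 = bsxfun(@times, b.', a.');  % size 5 x 1000000
isequal(c1, c2.')               % true - identical values, transposed layout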

Results:

t1 =
0.0461

t2 =
0.0491

t3 =
0.0740

t4 =
7.5249

>> t4/t3
ans =
101.6878

Method 3 is about 100x faster than Method 4.

To summarize: while the difference between the preferred and non-preferred orientations is minimal for the built-in function, the difference becomes huge for the anonymous function. So it may be best practice to use the larger dimension as dimension 1.