I am trying to implement the following equation in MATLAB. To explain some of the notation: df/dt^(1)_{i,j} should be a vector, z^(2)_{k2} is a real number, a^(2)_{i,j} is a real number, t^(2)_{k2} is a vector, x_i is a vector, and t^(1)_{i,j} is a vector. For more clarification of the notation, have a look at the math.stackexchange question related to this. Also, I have tried to comment the code heavily, with notes on what the inputs and outputs should be, to minimize confusion about the dimensions of the variables involved.
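The image of the equation did not survive in this copy. Reconstructing it from the notation above and the loop implementations further below (my best reading, not the original rendering), the formula is:

df/dt^(1)_{i,j} = -4 * exp(-z^(1)_{i,j}) * (x_i - t^(1)_{i,j}) * sum_{k2=1}^{K2} c_{k2} * exp(-z^(2)_{k2}) * (a^(2)_{i,j} - t^(2)_{k2,(i,j)})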
I do have a potential implementation (which I believe is correct), but sometimes MATLAB has some nice hidden tricks, and I was wondering whether this is a good implementation of the vectorized equation above, or whether there is a better one.
Currently this is my code:
function [ dJ_dt1 ] = compute_t1_gradient(t1,x,y,f,z_l1,z_l2,a_l2,c,t2,lambda)
%compute_t1_gradient - computes the t1 gradient of a 2 layer HBF
% Computes dJ_dt1 according to:
% dJ_dt1
% Input:
% t1 = centers (Dp x Dd x Np)
% x = data (D x 1)
% y = label (1 x 1)
% f = f(x) (1 x 1)
% z_l1 = inputs l1 (Np x Dd)
% z_l2 = inputs l2 (K2 x 1)
% a_l2 = activations l2 (Np x Dd)
% a_l3 = activations l3 (K2 x 1)
% c = weights (K2 x 1)
% t2 = centers (K1 x K2)
% lambda = reg param (1 x 1)
% mu_c = step size (1 x 1)
% Output:
% dJ_dt1 = gradient (Dp x Dd x Np)
[Dp, ~, ~] = size(t1);
[Np, Dd] = size(a_l2);
x_parts = reshape(x, [Dp, Np])'; % Np x Dp
K1 = Np * Dd;
a_l2_col_vec = reshape(a_l2', [K1, 1]); %K1 x 1
alpha = bsxfun(@minus, a_l2_col_vec, t2); %K1 x K2
c_z_l2 = (c .* exp(-z_l2))'; % 1 x K2
alpha = bsxfun(@times, c_z_l2, alpha); %K1 x K2
alpha = bsxfun(@times, reshape(exp(-z_l1'),[K1, 1]) , alpha);
alpha = sum(alpha, 2); %K1 x 1
xi_t1 = bsxfun(@minus, x_parts', permute(t1, [1,3,2]));
% alpha K1 x 1
% xi_t1 Dp x Np x Dd
dJ_dt1 = bsxfun(@minus, reshape(alpha,[Dd, Np]), permute(xi_t1, [3, 2, 1]));
dJ_dt1 = permute(dJ_dt1,[3,1,2]);
dJ_dt1 = -4*(y-f)*dJ_dt1;
dJ_dt1 = dJ_dt1 + lambda * 0; %TODO
end
At this point I actually decided to implement the above function again with for loops. Unfortunately, the two do not produce the same answer, which makes me doubt that the version above is correct. I will paste the for-loop code that I wanted/tried to vectorize:
function [ dJ_dt1 ] = compute_t1_gradient_loops(t1,x,y,f,z_l1,z_l2,a_l2,c,t2)
%compute_t1_gradient_loops - computes the t1 parameter of a 2 layer HBF
% Computes t1 according to:
% t1 := t1 - mu_c * dJ/dt1
% Input:
% t1 = centers (Dp x Dd x Np)
% x = data (D x 1)
% y = label (1 x 1)
% f = f(x) (1 x 1)
% z_l1 = inputs l1 (Np x Dd)
% z_l2 = inputs l2 (K2 x 1)
% a_l2 = activations l2 (Np x Dd)
% a_l3 = activations l3 (K2 x 1)
% c = weights (K2 x 1)
% t2 = centers (K1 x K2)
% lambda = reg param (1 x 1)
% mu_c = step size (1 x 1)
% Output:
% dJ_dt1 = gradient (Dp x Dd x Np)
[Dp, ~, ~] = size(t1); %(Dp x Dd x Np)
[Np, Dd] = size(a_l2);
K2 = length(c);
t2_tensor = reshape(t2, Dd, Np, K2);
x_parts = reshape(x, [Dp, Np]);
dJ_dt1 = zeros(Dp, Dd, Np);
for i=1:Dd
xi = x_parts(:,i);
for j=1:Np
t_l1_ij = t1(:,i,j);
a_l2_ij = a_l2(j, i);
z_l1_ij = z_l1(j,i);
alpha_ij = 0;
for k2=1:K2
t2_k2ij = t2_tensor(i,j,k2);
c_k2 = c(k2);
z_l2_k2 = z_l2(k2);
new_delta = c_k2*-1*exp(-z_l2_k2)*2*(a_l2_ij - t2_k2ij);
alpha_ij = alpha_ij + new_delta;
end
alpha_ij = 2*(y-f)*-1*exp(-z_l1_ij)*2*(xi - t_l1_ij);
dJ_dt1(:,i,j) = alpha_ij;
end
end
end
I actually even checked the gradient by approximating the derivative the way Andrew Ng suggests, as in the following equation:
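(The image of the equation is missing here; it is the standard symmetric difference quotient, which the unit-test code below implements:)

dJ/dt ≈ ( J(t + eps*e_i) - J(t - eps*e_i) ) / (2*eps)

where e_i is the indicator of the single entry being perturbed.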
For this I even wrote code for it:
%% update t1 unit test
%% dimensions
Dp = 3;
Np = 4;
Dd = 2;
K2 = 5;
K1 = Dd * Np;
%% fake data & params
x = (1:Dp*Np)';
y = 3;
c = (1:K2)';
t2 = rand(K1, K2);
t1 = rand(Dp, Dd, Np);
lambda = 0;
mu_t1 = 1;
%% call f(x)
[f, z_l1, z_l2, a_l2, ~ ] = f_star(x,c,t1,t2,Np,Dp);
%% update gradient
dJ_dt1_ij_loops = compute_t1_gradient_loops(t1,x,y,f,z_l1,z_l2,a_l2,c,t2);
dJ_dt1 = compute_t1_gradient(t1,x,y,f,z_l1,z_l2,a_l2,c,t2,lambda);
eps = 1e-4;
e_111 = zeros( size(t1) );
e_111(1,1,1) = eps;
derivative = (J(y, x, c, t2, t1 + e_111, Np, Dp) - J(y, x, c, t2, t1 - e_111, Np, Dp) ) / (2*eps);
derivative
dJ_dt1_ij_loops(1,1,1)
dJ_dt1(1,1,1)
But it seems that neither derivative agrees with the "approximated" one. The output of one run looks as follows:
>> update_t1_gradient_unit_test
derivative =
0.0027
dJ_dt1_ij_loops
ans =
0.0177
dJ_dt1
ans =
-0.5182
>>
Where the mistake is, if there is one, is not clear to me... It seems the approximation almost matches the loop version, but is "almost" close enough?
Andrew Ng's advice (the quote image is missing here) is essentially that the analytic and numerical derivatives should agree to several (about 4) significant digits.
However, I do not see 4 significant digits of agreement! Not even the same order of magnitude :( I guess both are wrong, but I can't seem to figure out why, or where/how.
On a related note, I also asked for a check that the derivative I have at the top is actually mathematically correct, since at this point I am not sure which part is wrong and which is right. The link to that question is here:
Update:
I have implemented a new version of the derivative with loops, and it almost agrees with a small example I created.
Here is the new implementation (with a bug somewhere...):
function [ dJ_dt1 ] = compute_df_dt1_loops3(t1,x,z_l1,z_l2,a_l2,c,t2)
% Computes t1 according to:
% df/dt1
% Input:
% t1 = centers (Dp x Dd x Np)
% x = data (D x 1)
% z_l1 = inputs l1 (Np x Dd)
% z_l2 = inputs l2 (K2 x 1)
% a_l2 = activations l2 (Np x Dd)
% a_l3 = activations l3 (K2 x 1)
% c = weights (K2 x 1)
% t2 = centers (K1 x K2)
% Output:
% dJ_dt1 = gradient (Dp x Dd x Np)
[Dp, Dd, Np] = size(t1); %(Dp x Dd x Np)
K2 = length(c);
x_parts = reshape(x, [Dp, Np]);
dJ_dt1 = zeros(Dp, Dd, Np);
for i=1:Np
xi_part = x_parts(:,i);
for j=1:Dd
z_l1_ij = z_l1(i,j);
a_l2_ij = a_l2(i,j);
t_l1_ij = t1(:,i,j);
alpha_ij = 0;
for k2=1:K2
ck2 = c(k2);
t2_k2 = t2(:, k2);
index = (i-1)*Dd + j;
t2_k2_ij = t2_k2(index);
z_l2_k2 = z_l2(k2);
new_delta = ck2*(exp(-z_l2_k2))*2*(a_l2_ij - t2_k2_ij);
alpha_ij = alpha_ij + new_delta;
end
alpha_ij = -1 * alpha_ij * exp(-z_l1_ij)*2*(xi_part - t_l1_ij);
dJ_dt1(:,i,j) = alpha_ij;
end
end
end
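One thing worth double-checking in the implementation above is the linear index (i-1)*Dd + j, which relies on MATLAB's column-major reshape of a_l2'. A small standalone demo of the mapping (the values here are arbitrary and purely illustrative):

% reshape(a_l2', [K1,1]) stacks the rows of a_l2, so entry (np,dd) of a_l2
% lands at linear index (np-1)*Dd + dd of the stacked vector
Np = 2; Dd = 3;
a_l2 = reshape(1:Np*Dd, Dd, Np)'; % Np x Dd
a_vec = reshape(a_l2', [Np*Dd, 1]); % K1 x 1
np = 2; dd = 3;
isequal(a_vec((np-1)*Dd + dd), a_l2(np, dd)) % true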
Here is the code that computes the numerical derivative (this one is correct and works as expected):
function [ dJ_dt1_numerical ] = compute_numerical_derivatives( x, c, t1, t2, eps)
% Computes t1 according to:
% df/dt1 numerically
% Input:
% x = data (D x 1)
% c = weights (K2 x 1)
% t1 = centers (Dp x Dd x Np)
% t2 = centers (K1 x K2)
% Output:
% dJ_dt1 = gradient (Dp x Dd x Np)
[Dp, Dd, Np] = size(t1);
dJ_dt1_numerical = zeros(Dp, Dd, Np);
for np=1:Np
for dd=1:Dd
for dp=1:Dp
e_dd_dp_np = zeros(Dp, Dd, Np);
e_dd_dp_np(dp,dd,np) = eps;
f_e1 = f_star_loops(x,c,t1+e_dd_dp_np,t2);
f_e2 = f_star_loops(x,c,t1-e_dd_dp_np,t2);
numerical_derivative = (f_e1 - f_e2)/(2*eps);
dJ_dt1_numerical(dp,dd,np) = numerical_derivative;
end
end
end
end
I will provide the code for f, and the actual numbers I used, in case people want to reproduce my results.
Here is the code for f (also correct and working as expected):
function [ f, z_l1, z_l2, a_l2, a_l3 ] = f_star_loops( x, c, t1, t2)
%f_star - computes 2 layer HBF predictor
% Computes f^*(x) = sum_i c_i a^(3)_i
% Inputs:
% x = data point (D x 1)
% x = [x1, ..., x_np, ..., x_Np]
% c = weights (K2 x 1)
% t2 = centers (K1 x K2)
% t1 = centers (Dp x Dd x Np)
% Outputs:
% f = f^*(x) = sum_i c_i a^(3)_i
% a_l3 = activations l3 (K2 x 1)
% z_l2 = inputs l2 (K2 x 1)
% a_l2 = activations l2 (Np x Dd)
% z_l1 = inputs l1 (Np x Dd)
[Dp, Dd, Np] = size(t1);
z_l1 = zeros(Np, Dd);
a_l2 = zeros(Np, Dd);
x_parts = reshape(x, [Dp, Np]);
%% Compute components of 1st layer z_l1 and a_l2
for np=1:Np
x_np = x_parts(:,np);
t1_np = t1(:,:, np);
for dd=1:Dd
t1_np_dd = t1_np(:, dd);
z_l1_np_dd = norm(t1_np_dd - x_np, 2)^2;
a_l1_np_dd = exp(-z_l1_np_dd);
% a_l1_np_dd = -z_l1_np_dd;
% a_l1_np_dd = sin(-z_l1_np_dd);
% insert
a_l2(np, dd) = a_l1_np_dd;
z_l1(np, dd) = z_l1_np_dd;
end
end
%% Compute components of 2nd layer z_l2
K1 = Dd*Np;
K2 = length(c);
a_l2_vec = reshape(a_l2', [K1,1]);
z_l2 = zeros(K2, 1);
for k2=1:K2
t2_k2 = t2(:, k2); % K1 x 1
z_l2_k2 = norm(t2_k2 - a_l2_vec, 2)^2;
% insert
z_l2(k2) = z_l2_k2;
end
%% Output layer (3rd layer)
a_l3 = exp(-z_l2);
% a_l3 = -z_l2;
% a_l3 = sin(-z_l2);
f = c' * a_l3;
end
Here is the data I used for testing:
%% Test 1:
% dimensions
disp('>>>>>>++++======--------> update t1 unit test');
% fake data & params
x = (1:6)'/norm(1:6,2)
c = [29, 30, 31, 32]'
t2 = [(13:16)/norm((13:16),2); (17:20)/norm((17:20),2); (21:24)/norm((21:24),2); (25:28)/norm((25:28),2)]'
Dp = 3;
Dd = 2;
Np = 2;
t1 = zeros(Dp,Dd, Np); % (Dp, Dd, Np)
t1(:,:,1) = [(1:3)/norm((1:3),2); (4:6)/norm((4:6),2)]';
t1(:,:,2) = [(7:9)/norm((7:9),2); (10:12)/norm((10:12),2)]';
t1
% call f(x)
[f, z_l1, z_l2, a_l2, a_l3 ] = f_star_loops(x,c,t1,t2)
% gradient
df_dt1_loops = compute_df_dt1_loops3(t1,x,z_l1,z_l2,a_l2,c,t2);
df_dt1_loops2 = compute_df_dt1_loops3(t1,x,z_l1,z_l2,a_l2,c,t2);
eps = 1e-10;
dJ_dt1_numerical = compute_numerical_derivatives( x, c, t1, t2, eps);
disp('---- Derivatives ----');
for np=1:Np
np
dJ_dt1_numerical_np = dJ_dt1_numerical(:,:,np);
dJ_dt1_numerical_np
df_dt1_loops2_np = df_dt1_loops(:,:,np);
df_dt1_loops2_np
end
Note that the numerical derivative is now correct (I am certain, because I checked it against values returned by Mathematica and they matched; f has also been debugged, so it works as I intend).
Here is a sample of the output (where the matrix of numerical derivatives should match the matrix of derivatives computed with my equation):
---- Derivatives ----
np =
1
dJ_dt1_numerical_np =
7.4924 13.1801
14.9851 13.5230
22.4777 13.8660
df_dt1_loops2_np =
7.4925 5.0190
14.9851 6.2737
22.4776 7.5285
np =
2
dJ_dt1_numerical_np =
11.4395 13.3836
6.9008 6.6363
2.3621 -0.1108
df_dt1_loops2_np =
14.9346 13.3835
13.6943 6.6363
12.4540 -0.1108
Answer (score: 1):
Update: I had a misunderstanding about the indices of some quantities in the formula; see also the updated question. I left the original answer below (since the vectorization should be done the same way), and at the end I added the final vectorized version corresponding to the OP's actual question, for completeness.
There are some inconsistencies between your code and your formula. In your formula you refer to x_i, yet the corresponding size of the x array is that of the index j. This, then, is consistent with your math.stackexchange question, where i and j seem to be interchanged with respect to the notation you use here...
Anyway, here is a fixed loop version of your function:
function [ dJ_dt1 ] = compute_t1_gradient_loops(t1,x,y,f,z_l1,z_l2,a_l2,c,t2)
%compute_t1_gradient_loops - computes the t1 parameter of a 2 layer HBF
% Input:
% t1 = (Dp x Dd x Np)
% x = (D x 1)
% z_l1 = (Np x Dd)
% z_l2 = (K2 x 1)
% a_l2 = (Np x Dd)
% c = (K2 x 1)
% t2 = (K1 x K2)
%
% K1=Dd*Np
% D=Dp*Dd
% Dp,Np,Dd,K2 unique
%
% Output:
% dJ_dt1 = gradient (Dp x Dd x Np)
[Dp, ~, ~] = size(t1); %(Dp x Dd x Np)
[Np, Dd] = size(a_l2);
K2 = length(c);
t2_tensor = reshape(t2, Dd, Np, K2); %Dd x Np x K2
x_parts = reshape(x, [Dp, Dd]); %Dp x Dd
dJ_dt1 = zeros(Dp, Dd, Np); %Dp x Dd x Np
for i=1:Dd
xi = x_parts(:,i);
for j=1:Np
t_l1_ij = t1(:,i,j);
a_l2_ij = a_l2(j, i);
z_l1_ij = z_l1(j,i);
alpha_ij = 0;
for k2=1:K2
t2_k2ij = t2_tensor(i,j,k2);
c_k2 = c(k2);
z_l2_k2 = z_l2(k2);
new_delta = c_k2*exp(-z_l2_k2)*(a_l2_ij - t2_k2ij);
alpha_ij = alpha_ij + new_delta;
end
alpha_ij = -4*alpha_ij* exp(-z_l1_ij)*(xi - t_l1_ij);
dJ_dt1(:,i,j) = alpha_ij;
end
end
end
Some things to note:

- I changed the size of x to D=Dp*Dd in order to preserve the i index of the formula. Otherwise more things would have had to be reconsidered.
- Instead of [Dp, ~, ~] = size(t1); you can just use Dp = size(t1,1).
- You were not actually updating alpha_ij in your loop: you overwrote the old value with the prefactor instead of multiplying by it.

If I misunderstood your intentions, let me know and I will change the loop version accordingly.
Assuming that the loop version does what you want, here is a vectorized version, along the lines of your original attempt:
function [ dJ_dt1 ] = compute_t1_gradient_vect(t1,x,y,f,z_l1,z_l2,a_l2,c,t2)
%compute_t1_gradient_vect - computes the t1 parameter of a 2 layer HBF
% Input:
% t1 = (Dp x Dd x Np)
% x = (D x 1)
% y = (1 x 1)
% f = (1 x 1)
% z_l1 = (Np x Dd)
% z_l2 = (K2 x 1)
% a_l2 = (Np x Dd)
% c = (K2 x 1)
% t2 = (K1 x K2)
%
% K1=Dd*Np
% D=Dp*Dd
% Dp,Np,Dd,K2 unique
%
% Output:
% dJ_dt1 = gradient (Dp x Dd x Np)
Dp = size(t1,1);
[Np, Dd] = size(a_l2);
K2 = length(c);
t2_tensor = reshape(t2, Dd, Np, K2); %Dd x Np x K2
x_parts = reshape(x, [Dp, Dd]); %Dp x Dd
%reorder things to align for bsxfun later
a_l2=a_l2'; %Dd x Np <-> i,j
z_l1=z_l1'; %Dd x Np <-> i,j
t2_tensor = permute(t2_tensor,[3 1 2]); %K2 x Dd x Np
%the 1D part of the sum to be used in partialsum
%prefactors also put here to minimize computational effort
tempvar_k2 = -4*c.*exp(-z_l2); % K2 x 1
%compute sum(b(k)*(c-d(k)) as c*sum(b(k))-sum(b(k)*d(k)) (NB)
partialsum = a_l2*sum(tempvar_k2) ...
-squeeze(sum(bsxfun(@times,tempvar_k2,t2_tensor),1)); %Dd x Np
%alternative computation by definition:
%partialsum = bsxfun(@minus,a_l2,t2_tensor); %Dd x Np x K2
%partialsum = permute(partialsum,[3 1 2]); %K2 x Dd x Np
%partialsum = squeeze(sum(bsxfun(@times,tempvar_k2,partialsum),1)); %Dd x Np
%last part of the formula, (x-t1)
tempvar_lastterm = bsxfun(@minus,x_parts,t1); %Dp x Dd x Np
tempvar_lastterm = permute(tempvar_lastterm,[2 3 1]); %Dd x Np x Dp
%put together what we have
dJ_dt1 = bsxfun(@times,partialsum.*exp(-z_l1),tempvar_lastterm); %Dd x Np x Dp
dJ_dt1 = permute(dJ_dt1,[3 1 2]); %Dp x Dd x Np
end
Again, some things to note:

- I defined a temporary variable for the k2-dependent part of the sum, since it is used twice in the next step.
- I also attached the prefactor -4 to this variable, since then you only have to multiply K2 times instead of Dp*Dd*Np times, which can make a difference for large matrices.
- I computed the sum over k2 of (a-t2) by splitting it into two sums, sum(b(k)*(c-d(k))) as c*sum(b(k))-sum(b(k)*d(k)); see the comment ending in (NB), and the sanity check right after this list. It turns out that for large matrices (multiply the dimensions of your nice 2-3-4-5 test case by 100) this separation brings a considerable speedup. Of course, if K2 is much larger than the inner dimension of t2, then you lose out with this trick.
- If your x actually has size x_j rather than x_i, you have to adjust the dimensions accordingly.
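A minimal sanity check of the (NB) rearrangement (the names b, d and c0 are purely illustrative and do not come from the code above):

% check: sum_k b(k)*(c0 - d(k)) == c0*sum(b) - sum(b.*d)
b = rand(5,1); d = rand(5,1); c0 = rand();
lhs = sum(b .* (c0 - d));
rhs = c0*sum(b) - sum(b.*d);
abs(lhs - rhs) % expected to be on the order of machine precision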
I checked both the loop version and the two vectorized versions on two test cases. First, your original example:
%% update t1 unit test
%% dimensions
Dp = 3;
Np = 4;
Dd = 2;
K2 = 5;
K1 = Dd * Np;
%% fake data & params
x = (1:Dp*Dd)';
y = 3;
c = (1:K2)';
t2 = rand(K1, K2);
t1 = rand(Dp, Dd, Np);
%% update gradient
dJ_dt1_ij_loops = compute_t1_gradient_loops(t1,x,y,f,z_l1,z_l2,a_l2,c,t2);
dJ_dt1_vect = compute_t1_gradient_vect(t1,x,y,f,z_l1,z_l2,a_l2,c,t2);
dJ_dt1_vect2 = compute_t1_gradient_vect2(t1,x,y,f,z_l1,z_l2,a_l2,c,t2);
Note that I again changed the definition of x, and that ..._vect2 stands for the "naive" version of the vectorized code (the by-definition computation left in comments in the function above). It turns out that the resulting derivatives agree exactly between the loop version and the naive vectorization, while there is a maximal difference of 2e-14 between those and the optimized vectorized version. This means that we're good: differences near machine precision merely come from the computations being performed in a different order.
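A quick way to verify this kind of agreement (illustrative; it assumes the variables from the test script above are in the workspace):

% maximal elementwise differences between the three implementations
max(abs(dJ_dt1_ij_loops(:) - dJ_dt1_vect2(:))) % exactly 0
max(abs(dJ_dt1_vect2(:) - dJ_dt1_vect(:)))     % ~2e-14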
To measure performance, I multiplied the dimensions of the original test case by 100:
%% dimensions
Dp = 300;
Np = 400;
Dd = 200;
K2 = 500;
K1 = Dd * Np;
I also set up variables to check cputime before and after each function call (since tic/toc only measures wall-clock time). The measured times were 23 s, 2 s and 4 s for the loop, the optimized and the "naive" vectorized versions, respectively. On the other hand, the maximal difference between the latter two derivatives is now 1.8e-5. Of course, our test data are random, which is not the best-conditioned scenario. In an actual application this difference will probably not be an issue, but you should always be careful with loss of precision (in the optimized version we are specifically subtracting two possibly large numbers).
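For reference, a minimal sketch of the timing pattern described above (cputime is the relevant MATLAB built-in; the call shown is one of the functions from this answer):

% measure CPU time of one call (tic/toc would give wall-clock time instead)
t0 = cputime;
dJ_dt1_vect = compute_t1_gradient_vect(t1,x,y,f,z_l1,z_l2,a_l2,c,t2);
t_vect = cputime - t0;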
You can of course try dividing up your formula into the terms from which it is computed in other ways; there might be more efficient approaches, and it probably also all depends on the size of your arrays.
You mentioned that you tried to estimate the derivative from its definition, basically using the symmetric derivative. You did not get what you expected, probably due to the shortcomings of your original functions. However, I would like to note a couple of things here as well. The fact that your epsilon-version does not agree with your original attempt could be due to

- errors in your derivative of J (I know you are trying to debug this case over on math.SE)
- an error in your J function

If everything checks out, you could still have a purely mathematical source of discrepancy: the factor epsilon=1e-4 you use is completely arbitrary. When you check your derivative this way, you basically linearize your function around the given point. If your function varies too much (i.e. is too nonlinear) in a neighborhood of radius epsilon, your symmetric derivative will be inaccurate compared to the exact value. When doing such checks you should be careful to use parameters in the derivative that are small enough: small enough to expect linear behavior of the function, but large enough to avoid numerical noise arising from the 1/epsilon factor.
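One way to see this trade-off in practice is to sweep epsilon over several orders of magnitude and watch the estimate first stabilize and then degrade. An illustrative sketch, assuming f_star_loops and the Test 1 data from the question are in scope:

% symmetric-difference estimate of df/dt1(1,1,1) for a range of epsilons
for e = 10.^(-2:-2:-12)
    e_111 = zeros(size(t1)); e_111(1,1,1) = e;
    d_est = (f_star_loops(x,c,t1+e_111,t2) - f_star_loops(x,c,t1-e_111,t2))/(2*e);
    fprintf('epsilon = %g -> estimate = %.10g\n', e, d_est);
end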
Final note: you should avoid naming a variable eps in MATLAB, since that is a built-in function that tells you the "machine epsilon" (have a look at help eps), by default corresponding to the precision at the number 1 (i.e. when called with no input argument). And while you can still refer to the complex unit as 1i even if you have a variable named i, it is probably safer to avoid built-in names where possible.
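For illustration:

eps       % built-in: spacing of doubles at 1, roughly 2.2204e-16
eps(1e10) % spacing of doubles at 1e10, much larger
% after an assignment such as eps = 1e-4; the built-in is shadowed
% until the variable is cleared (clear eps)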
Update: here is the final vectorized version, corresponding to the updated question of the OP:
function [ dJ_dt1, tempout ] = compute_t1_gradient_vect(t1,x,z_l1,z_l2,a_l2,c,t2)
%compute_t1_gradient_vect - computes the t1 parameter of a 2 layer HBF
% Input:
% t1 = (Dp x Dd x Np)
% x = (D x 1)
% z_l1 = (Np x Dd)
% z_l2 = (K2 x 1)
% a_l2 = (Np x Dd)
% c = (K2 x 1)
% t2 = (K1 x K2)
%
% K1=Dd*Np
% D=Dp*Np
% Dp,Np,Dd,K2 unique
%
% Output:
% dJ_dt1 = gradient (Dp x Dd x Np)
Dp = size(t1,1);
[Np, Dd] = size(a_l2);
K2 = length(c);
t2_tensor = reshape(t2, Dd, Np, K2); %Dd x Np x K2
x_parts = reshape(x, [Dp, Np]); %Dp x Np
t1 = permute(t1,[1 3 2]); %Dp x Np x Dd
a_l2=a_l2'; %Dd x Np <-> j,i
z_l1=z_l1'; %Dd x Np <-> j,i
tempvar_k2 = -4*c.*exp(-z_l2); % K2 x 1
partialsum = bsxfun(@minus,a_l2,t2_tensor); %Dd x Np x K2
partialsum = permute(partialsum,[3 1 2]); %K2 x Dd x Np
partialsum = squeeze(sum(bsxfun(@times,tempvar_k2,partialsum),1)); %Dd x Np
tempvar_lastterm = bsxfun(@minus,x_parts,t1); %Dp x Np x Dd
tempvar_lastterm = permute(tempvar_lastterm,[3 2 1]); %Dd x Np x Dp
dJ_dt1 = bsxfun(@times,partialsum.*exp(-z_l1),tempvar_lastterm); %Dd x Np x Dp
tempout=tempvar_lastterm;
dJ_dt1 = permute(dJ_dt1,[3 1 2]); %Dp x Dd x Np
end
Note that this is almost exactly the same as the original vectorized version; only the dimensions of x changed, and some indices have been permuted.
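To tie things together, a usage sketch of this final version against the numerical check (assuming the Test 1 data and the functions from the question are available):

% compare the final vectorized gradient with the numerical derivative
[f, z_l1, z_l2, a_l2, a_l3] = f_star_loops(x, c, t1, t2);
[df_dt1_vect, ~] = compute_t1_gradient_vect(t1, x, z_l1, z_l2, a_l2, c, t2);
dJ_dt1_numerical = compute_numerical_derivatives(x, c, t1, t2, 1e-10);
max(abs(df_dt1_vect(:) - dJ_dt1_numerical(:))) % should be small if all is correct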