I have two arrays. One is n x p, and the other is d x p x r. I would like my output to be d x n x r, which I can easily achieve by building the tensor B below:

import numpy
X = numpy.array([[1,2,3],[3,4,5],[5,6,7],[7,8,9]]) # n x p
betas = numpy.array([[[1,2],[1,2],[1,2]], [[5,6],[5,6],[5,6]]]) # d x p x r
print(X.shape)
print(betas.shape)
B = numpy.zeros((betas.shape[0], X.shape[0], betas.shape[2]))
print(B.shape)
for i in range(B.shape[0]):
    B[i, :, :] = numpy.dot(X, betas[i])
print("B", B)
C = numpy.tensordot(X, betas, axes=([1], [1]))  # contract over the shared p axis
print(C.shape)  # (n, d, r)
However, I would like to do this without the loop. I have tried various ways of getting C to match B with reshape, but so far I have not been successful. Is there a way that does not involve a call to reshape?
Answer 0 (score: 1)
Since the dot rule is "last axis of A with the second-to-last axis of B", X.dot(betas) gives an (n, d, r) array (summing over the shared p dimension). Then you need a transpose to get (d, n, r):
In [200]: X.dot(betas).transpose(1,0,2)
Out[200]:
array([[[ 6, 12],
[ 12, 24],
[ 18, 36],
[ 24, 48]],
[[ 30, 36],
[ 60, 72],
[ 90, 108],
[120, 144]]])
We can also write the einsum version straight from the dimension specification:
np.einsum('np,dpr->dnr', X,betas)
matmul does it as well (a dot over the last two axes, with the leading d dimension broadcast along): X @ betas. From the matmul documentation:
- If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
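As a quick illustration (a minimal sketch reusing the X and betas from the question), X @ betas already comes out in the desired (d, n, r) order and matches the transposed dot result:

import numpy as np

X = np.array([[1, 2, 3], [3, 4, 5], [5, 6, 7], [7, 8, 9]])        # (n, p) = (4, 3)
betas = np.array([[[1, 2], [1, 2], [1, 2]],
                  [[5, 6], [5, 6], [5, 6]]])                       # (d, p, r) = (2, 3, 2)

out = X @ betas                    # betas is treated as a stack of d matrices of shape (p, r)
print(out.shape)                   # (2, 4, 2), i.e. (d, n, r)
print(np.array_equal(out, X.dot(betas).transpose(1, 0, 2)))        # True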
Answer 1 (score: 0)
We can use np.tensordot and then permute the axes -
B = np.tensordot(betas, X, axes=(1,1)).swapaxes(1,2)
# Or np.tensordot(X, betas, axes=(1,1)).swapaxes(0,1)
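For reference, a small sanity check (a sketch assuming the X and betas arrays from the question) showing that both tensordot variants reproduce the loop-built B:

import numpy as np

# X, betas: the (n, p) and (d, p, r) arrays from the question
B_loop = np.zeros((betas.shape[0], X.shape[0], betas.shape[2]))
for i in range(B_loop.shape[0]):
    B_loop[i, :, :] = np.dot(X, betas[i])

B1 = np.tensordot(betas, X, axes=(1, 1)).swapaxes(1, 2)
B2 = np.tensordot(X, betas, axes=(1, 1)).swapaxes(0, 1)
print(np.allclose(B_loop, B1), np.allclose(B_loop, B2))            # True True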
Answer 2 (score: 0)
Here is another approach using numpy.dot(), which returns a view as you requested and, most importantly, is more than 4x faster than the tensordot approach, particularly for small arrays. However, for reasonably large arrays np.tensordot is much faster than plain np.dot. See the timings below.
In [108]: X.shape
Out[108]: (4, 3)
In [109]: betas.shape
Out[109]: (2, 3, 2)
# use `np.dot` and roll the second axis to first position
In [110]: dot_prod = np.rollaxis(np.dot(X, betas), 1)
In [111]: dot_prod.shape
Out[111]: (2, 4, 2)
# @Divakar's approach
In [113]: B = np.tensordot(betas, X, axes=(1,1)).swapaxes(1,2)
# sanity check :)
In [115]: np.all(np.equal(dot_prod, B))
Out[115]: True
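As a side note, np.rollaxis is kept mainly for backward compatibility and NumPy's docs recommend np.moveaxis, which yields the same (d, n, r) view here:

dot_prod = np.moveaxis(np.dot(X, betas), 1, 0)   # equivalent to np.rollaxis(np.dot(X, betas), 1)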
Now, the timings for these approaches:
# @Divakar's approach
In [117]: %timeit B = np.tensordot(betas, X, axes=(1,1)).swapaxes(1,2)
10.6 µs ± 2.1 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
# @hpaulj's approach
In [151]: %timeit esum_dot = np.einsum('np, dpr -> dnr', X, betas)
4.16 µs ± 235 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# proposed approach: more than 4x faster!!
In [118]: %timeit dot_prod = np.rollaxis(np.dot(X, betas), 1)
2.47 µs ± 11.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
However, for much larger arrays, np.tensordot() is significantly faster than np.dot():
In [129]: X = np.random.randint(1, 10, (600, 500))
In [130]: betas = np.random.randint(1, 7, (300, 500, 300))
In [131]: %timeit B = np.tensordot(betas, X, axes=(1,1)).swapaxes(1,2)
18.2 s ± 2.41 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [132]: %timeit dot_prod = np.rollaxis(np.dot(X, betas), 1)
52.8 s ± 14.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)