I'm trying to run a neural network on the GPU using the Thrust and CUBLAS libraries, but I'm having a lot of trouble getting it to run faster than our current multithreaded, vectorized CPU implementation. The network has a single hidden layer with logistic units and an output layer with linear units. Here is the code:
// Functor to add bias before computing logistic
template <typename T>
struct bias_logistic_f {
    __host__ __device__
    T operator()(const T& x, const T& y) const {
        return 1/(1+exp(-(x+y)));
    }
};
bias_logistic_f<FLT> bias_logistic;   // instance passed to thrust::transform below
// Thrust vectors for input/hidden/output units
thrust::device_vector<FLT> batch(batch_rows*ndim);
thrust::device_vector<FLT> hid(batch_rows*nhid);
thrust::device_vector<FLT> gpu_code(ndata*ncode);
// ...Load data and network weights...
// Multiply input (batch) by weights (vis2hid)
// Our matrices are stored row-major, but BLAS wants column-major,
// so pretend they're transposed and compute hid' = vis2hid' * batch'
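// (Reasoning: a row-major buffer reinterpreted as column-major is the transpose of the
//  matrix it stores, and (batch*vis2hid)' = vis2hid'*batch', so passing the row-major
//  buffers straight to cublasDgemm yields hid' in column-major order, i.e. hid stored
//  row-major with leading dimension nhid.)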
cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, nhid, batch_rows, ndim,
            &alpha, thrust::raw_pointer_cast(&vis2hid[0]), nhid,
            thrust::raw_pointer_cast(&batch[0]), ndim,
            &beta, thrust::raw_pointer_cast(&hid[0]), nhid);
// Add hidbiases to hid and compute logistic
// (see the note after this listing about broadcasting an nhid-long bias vector)
thrust::transform(hid.begin(), hid.end(), hidbiases.begin(), hid.begin(),
                  bias_logistic);
// Multiply hid by weights (hid2code)
cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, ncode, batch_rows, nhid,
            &alpha, thrust::raw_pointer_cast(&hid2code[0]), ncode,
            thrust::raw_pointer_cast(&hid[0]), nhid,
            &beta, thrust::raw_pointer_cast(&gpu_code[b*batch_rows*ncode]), ncode);
// Add codebiases
thrust::transform(gpu_code.begin() + b*batch_rows*ncode,
                  gpu_code.begin() + (b+1)*batch_rows*ncode,
                  codebiases.begin(), gpu_code.begin() + b*batch_rows*ncode,
                  thrust::plus<FLT>());
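A side note on the two bias steps above: if hidbiases and codebiases hold just one value per unit (nhid and ncode entries respectively), then I believe the transforms need that vector repeated for every row of the batch rather than read straight through. Below is a minimal sketch of how I'd imagine broadcasting the hidden-layer bias with a permutation_iterator instead of keeping a tiled copy; tile_index is a made-up helper, and it assumes hid is laid out with nhid as the fast dimension, exactly as the first GEMM produces it.

#include <thrust/functional.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/iterator/permutation_iterator.h>

// Maps the flat index i of hid to the hidden unit it belongs to (i % nhid),
// so the nhid-long bias vector is reused for every row of the batch.
struct tile_index : public thrust::unary_function<int,int> {
    int n;
    tile_index(int n) : n(n) {}
    __host__ __device__ int operator()(int i) const { return i % n; }
};

thrust::transform(hid.begin(), hid.end(),
                  thrust::make_permutation_iterator(
                      hidbiases.begin(),
                      thrust::make_transform_iterator(thrust::make_counting_iterator(0),
                                                      tile_index(nhid))),
                  hid.begin(), bias_logistic);

The codebiases step could use the same trick with ncode in place of nhid.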
Our input data is a sparse matrix with roughly 150,000 rows and 6,500 columns, averaging about 100 non-zero elements per row. That's too large to keep on the GPU as a dense matrix, so what I do is walk through the sparse matrix, expanding each batch of 1,000 rows into a dense buffer that gets fed to the network:
for(int b=0; b<nbatch; ++b) {
    // Zero out batch b
    thrust::fill(batch.begin(), batch.end(), 0.0f);
    // batch_val holds the non-zero values for the current batch, batch_idx the
    // flattened indices within the batch (row_within_batch*ndim + column), and
    // batch_ptr[b]..batch_ptr[b+1] delimits batch b's slice of batch_val/batch_idx.
    // It's like CSR format, except that instead of compressing individual rows it
    // compresses whole 1,000-row submatrices.
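    // For example, a non-zero at (row 3, column 17) of the current batch is stored as
    //   batch_val[i] = value,  batch_idx[i] = 3*ndim + 17
    // so the scatter below drops it straight into its dense row-major slot.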
    thrust::scatter(batch_val.begin() + batch_ptr[b],
                    batch_val.begin() + batch_ptr[b+1],
                    batch_idx.begin() + batch_ptr[b],
                    batch.begin());
    // ...Input batch to network (shown above)...
}
Our CPU implementation does the same thing using STL vectors. When I ran both and compared their run times, I was surprised to find that the GPU code takes about 38 seconds on average to process our data, while the CPU code takes only about 27 seconds. Some of the difference may be that the GPU is a few years old (a Tesla C1060) while the server is a newer 24-core machine. But I still wouldn't have expected that, with thousands of threads available, it would end up roughly 50% slower.
Any ideas how to make this code run faster? I'm new to GPU programming, so I'm not sure what I might be doing wrong. Is there a more efficient way to deal with the sparse matrices than what I'm doing here, such as using the CUSPARSE library? Or is it a better idea to forget the high-level libraries altogether and write my own CUDA kernels to fuse the matrix multiplication / logistic / bias-addition steps?
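To make the CUSPARSE part of the question concrete, this is roughly what I imagined it would take to multiply one sparse batch by vis2hid without expanding it to dense first. Everything below is an untested sketch: batch_row_ptr, batch_col_idx, row_of, sp_handle, descr, nnz_b and vis2hid_cm are names I'm making up, it assumes the legacy cusparseDcsrmm interface, it assumes batch_idx is sorted within each batch, and since csrmm wants the dense operands in column-major order the weights (and the downstream bias/GEMM steps) would have to change layout too.

#include <thrust/binary_search.h>
#include <thrust/functional.h>
#include <thrust/transform.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/iterator/constant_iterator.h>
#include <cusparse_v2.h>

// Which row of the batch a flattened batch_idx value belongs to.
struct row_of : public thrust::unary_function<int,int> {
    int ndim;
    row_of(int ndim) : ndim(ndim) {}
    __host__ __device__ int operator()(int flat) const { return flat / ndim; }
};

int nnz_b = batch_ptr[b+1] - batch_ptr[b];

// Standard CSR row offsets for batch b: for each row r, the position of the first
// non-zero whose row is >= r (a vectorized binary search over the sorted row
// numbers derived from batch_idx).
thrust::device_vector<int> batch_row_ptr(batch_rows + 1);
thrust::lower_bound(
    thrust::make_transform_iterator(batch_idx.begin() + batch_ptr[b],   row_of(ndim)),
    thrust::make_transform_iterator(batch_idx.begin() + batch_ptr[b+1], row_of(ndim)),
    thrust::make_counting_iterator(0),
    thrust::make_counting_iterator(batch_rows + 1),
    batch_row_ptr.begin());

// Column indices are just batch_idx modulo ndim.
thrust::device_vector<int> batch_col_idx(nnz_b);
thrust::transform(batch_idx.begin() + batch_ptr[b], batch_idx.begin() + batch_ptr[b+1],
                  thrust::make_constant_iterator(ndim), batch_col_idx.begin(),
                  thrust::modulus<int>());

// hid = batch * vis2hid, with the sparse batch in CSR and vis2hid_cm holding the
// weights in COLUMN-major order (ndim x nhid). Note the result comes back as a
// batch_rows x nhid column-major matrix, unlike the nhid-major layout above.
cusparseDcsrmm(sp_handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
               batch_rows, nhid, ndim, nnz_b,
               &alpha, descr,
               thrust::raw_pointer_cast(&batch_val[batch_ptr[b]]),
               thrust::raw_pointer_cast(&batch_row_ptr[0]),
               thrust::raw_pointer_cast(&batch_col_idx[0]),
               thrust::raw_pointer_cast(&vis2hid_cm[0]), ndim,
               &beta,
               thrust::raw_pointer_cast(&hid[0]), batch_rows);

Is something along those lines worth pursuing, or is a hand-written fused kernel the better route?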