I want to write a simple piece of code that performs some computation on an input vector of data. It should return just a single value. I don't know how to achieve this. I wrote a simple test to check how it works, and I get a compile error. Here is the code:
float Subset::parallel_tests()
{
    float sum = 0.0f;
    concurrency::parallel_for_each(concurrency::extent<1>(121),
        [=, &sum] (concurrency::index<1> idx) restrict(amp)
    {
        sum += 0.2f;
    });
    return sum;
}
When I try to compile this code, I get the following errors:
error C3590: 'sum': by-reference capture or 'this' capture is unsupported if the lambda is amp restricted
error C3581: 'cci::Subset::parallel_tests::': unsupported type in amp restricted code
Answer 0 (score: 1)
The reason your code does not compile is that sum is declared in your class without being wrapped in an array_view. Essentially, you are trying to access this->sum from AMP-restricted code. Before passing sum into the parallel_for_each you need to wrap it, using something like the following:

int sum = 0;
array_view<int, 1> avSum(1, &sum);

You would also need to use atomic operations to increment the value of sum across multiple threads, which largely negates the parallelism the GPU offers. This is not the right approach.
Reduction
I think what you are trying to implement is a reduction: summing all the values in the input array to return a single result. This is a well-documented problem in GPU programming; NVidia has published several white papers on it, and The C++ AMP Book also covers it in detail.
Here is the simplest possible implementation. It does not use tiling and is relatively inefficient, but it is easy to understand. Each iteration of the loop adds consecutive elements of the array until the final result ends up in element 0. For an array of 8 elements:

stride = 4: a[0] += a[4]; a[1] += a[5]; a[2] += a[6]; a[3] += a[7]
stride = 2: a[0] += a[2]; a[1] += a[3]
stride = 1: a[0] += a[1]
class SimpleReduction
{
public:
    int Reduce(accelerator_view& view, const std::vector<int>& source,
               double& computeTime) const
    {
        assert(source.size() <= UINT_MAX);
        int elementCount = static_cast<int>(source.size());

        // Copy data
        array<int, 1> a(elementCount, source.cbegin(), source.cend(), view);
        std::vector<int> result(1);
        int tailResult = (elementCount % 2) ? source[elementCount - 1] : 0;
        array_view<int, 1> tailResultView(1, &tailResult);

        for (int stride = (elementCount / 2); stride > 0; stride /= 2)
        {
            parallel_for_each(view, extent<1>(stride),
                [=, &a] (index<1> idx) restrict(amp)
            {
                a[idx] += a[idx + stride];

                // If there are an odd number of elements then the
                // first thread adds the last element.
                if ((idx[0] == 0) && (stride & 0x1) && (stride != 1))
                    tailResultView[idx] += a[stride - 1];
            });
        }

        // Only copy out the first element in the array as this
        // contains the final answer.
        copy(a.section(0, 1), result.begin());
        tailResultView.synchronize();
        return result[0] + tailResult;
    }
};
Element zero now contains the total.
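As a sanity check, the same stride-halving control flow can be followed on the CPU in plain standard C++ (no AMP); the intermediate passes correspond to the stride = 4/2/1 lines above. This is an illustrative sketch, not part of the original answer:

```cpp
#include <cassert>
#include <vector>

// CPU-only sketch of the stride-halving reduction used by SimpleReduction.
// Each pass folds the upper half of the active region into the lower half;
// when the input length is odd, the last element is parked in tail, and
// whenever a pass leaves an odd active length, its last element is parked too.
int reduce_stride_halving(std::vector<int> a)  // taken by value: a is the scratch buffer
{
    int n = static_cast<int>(a.size());
    if (n == 0) return 0;
    if (n == 1) return a[0];

    int tail = (n % 2) ? a[n - 1] : 0;  // odd input length
    for (int stride = n / 2; stride > 0; stride /= 2)
    {
        for (int idx = 0; idx < stride; ++idx)
            a[idx] += a[idx + stride];
        if ((stride & 0x1) && (stride != 1))  // odd active length mid-way
            tail += a[stride - 1];
    }
    return a[0] + tail;  // element zero plus the parked tail
}
```

For {1, ..., 8} the passes reproduce exactly the stride = 4, 2, 1 lines shown earlier, and the function returns 36.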
template <int TileSize>
class TiledReduction
{
public:
    int Reduce(accelerator_view& view, const std::vector<int>& source,
               double& computeTime) const
    {
        int elementCount = static_cast<int>(source.size());

        // Copy data
        array<int, 1> arr(elementCount, source.cbegin(), source.cend(), view);
        int result;

        // TimeFunc is a timing helper from the book's sample code.
        computeTime = TimeFunc(view, [&]()
        {
            while (elementCount >= TileSize)
            {
                extent<1> e(elementCount);
                array<int, 1> tmpArr(elementCount / TileSize);

                parallel_for_each(view, e.tile<TileSize>(),
                    [=, &arr, &tmpArr] (tiled_index<TileSize> tidx) restrict(amp)
                {
                    // For each tile do the reduction on the first thread of the tile.
                    // This isn't expected to be very efficient as all the other
                    // threads in the tile are idle.
                    if (tidx.local[0] == 0)
                    {
                        int tid = tidx.global[0];
                        int tempResult = arr[tid];
                        for (int i = 1; i < TileSize; ++i)
                            tempResult += arr[tid + i];

                        // Take the result from each tile and create a new array.
                        // This will be used in the next iteration. Use a temporary
                        // array to avoid a race condition.
                        tmpArr[tidx.tile[0]] = tempResult;
                    }
                });

                elementCount /= TileSize;
                std::swap(tmpArr, arr);
            }

            // Copy the final results from each tile to the CPU and accumulate them.
            std::vector<int> partialResult(elementCount);
            copy(arr.section(0, elementCount), partialResult.begin());
            result = std::accumulate(partialResult.cbegin(), partialResult.cend(), 0);
        });

        return result;
    }
};
With tiling, each thread in a tile is responsible for producing a result for its elements, and the per-tile results are then summed.
This is still not the most efficient solution, as it does not have a good memory access pattern. You can find further refinements of this on the book's CodePlex site.
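Stripped of the GPU specifics, the tiled scheme reduces to: collapse each TileSize-sized chunk into one partial sum, repeat until fewer than TileSize values remain, then accumulate the leftovers on the CPU. A plain standard C++ sketch of that flow (illustrative, not from the book):

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// CPU-only sketch of the tiled reduction: each pass collapses every
// TileSize consecutive elements into one partial sum (one "tile"),
// mirroring what the first thread of each AMP tile does above.
template <int TileSize>
int reduce_tiled(std::vector<int> data)
{
    static_assert(TileSize > 1, "a tile must hold more than one element");
    while (static_cast<int>(data.size()) >= TileSize)
    {
        // Note: like the AMP version, each pass assumes data.size() is a
        // multiple of TileSize; any remainder elements would be dropped.
        std::vector<int> partial(data.size() / TileSize);
        for (std::size_t tile = 0; tile < partial.size(); ++tile)
        {
            int sum = 0;
            for (int i = 0; i < TileSize; ++i)
                sum += data[tile * TileSize + i];
            partial[tile] = sum;
        }
        data.swap(partial);  // the partial sums feed the next pass
    }
    // Fewer than TileSize values left: finish on the CPU.
    return std::accumulate(data.cbegin(), data.cend(), 0);
}
```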
Answer 1 (score: 0)
OK, I started implementing the reduction. I began with the simple reduction, and I ran into a problem. I don't want to pass a std::vector to the function, but one or two concurrency::array objects instead. I need to take the information from the source and sum everything in parallel, so that a single value can be returned. How should I implement this? The naive version of the code should look something like this:
float Subset::reduction_simple_1(const concurrency::array<float, 1>& source)
{
    assert(source.extent.size() <= UINT_MAX);
    //unsigned element_count = static_cast<unsigned>(source.extent.size());
    unsigned element_count = 121;
    assert(element_count != 0); // Cannot reduce an empty sequence.
    if (element_count == 1)
    {
        return source[0];
    }

    // Using array, as we mostly need just temporary memory to store
    // the algorithm state between iterations, and in the end we have to copy
    // back only the first element.
    //concurrency::array<float, 1> a(element_count, source.begin());

    // Takes care of odd input elements – we could completely avoid the tail sum
    // if we required source to have an even number of elements.
    float tail_sum = (element_count % 2) ? source[element_count - 1] : 0;
    concurrency::array_view<float, 1> av_tail_sum(1, &tail_sum);

    // Each thread reduces two elements.
    for (unsigned s = element_count / 2; s > 0; s /= 2)
    {
        concurrency::parallel_for_each(concurrency::extent<1>(s),
            [=, &accumulator] (concurrency::index<1> idx) restrict(amp)
        {
            // Get information from source, do some computations and store them
            // in the accumulator.
            accumulator[idx] = accumulator[idx] + accumulator[idx + s];

            // Reduce the tail in cases where the number of elements is odd.
            if ((idx[0] == s - 1) && (s & 0x1) && (s != 1))
            {
                av_tail_sum[0] += accumulator[s - 1];
            }
        });
    }

    // Copy the result back to the CPU.
    std::vector<float> result(1);
    copy(accumulator.section(0, 1), result.begin());
    av_tail_sum.synchronize();
    return result[0] + tail_sum;
}
I need to somehow implement the "accumulator", but I don't know how.
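For what it's worth, the commented-out line already hints at the missing piece: the accumulator is simply a mutable working copy of source (in AMP terms, a concurrency::array<float, 1> constructed from source, as the code in the first answer does). The same idea as a CPU-only standard C++ sketch, with the accumulator made explicit (the function name and the use of std::vector are illustrative, not from the original code):

```cpp
#include <cassert>
#include <vector>

// CPU-only sketch of reduction_simple_1: the "accumulator" is a scratch
// copy of the (conceptually read-only) source that each pass overwrites.
float reduction_simple_cpu(const std::vector<float>& source)
{
    assert(!source.empty()); // Cannot reduce an empty sequence.
    unsigned element_count = static_cast<unsigned>(source.size());
    if (element_count == 1)
        return source[0];

    // The accumulator: temporary memory holding the algorithm state
    // between iterations; only element 0 is read back at the end.
    std::vector<float> accumulator(source);

    // Takes care of an odd number of input elements.
    float tail_sum = (element_count % 2) ? source[element_count - 1] : 0.0f;

    for (unsigned s = element_count / 2; s > 0; s /= 2)
    {
        for (unsigned idx = 0; idx < s; ++idx)
            accumulator[idx] += accumulator[idx + s];
        if ((s & 0x1) && (s != 1))
            tail_sum += accumulator[s - 1];
    }
    return accumulator[0] + tail_sum;
}
```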
Answer 2 (score: 0)
// The method should compute a correlation value of two images
// (which have already been copied to GPU memory).
float Subset::compute_correlation(const concurrency::array<float, 1>& source1, const concurrency::array<float, 1>& source2)
{
    float result;
    float parameter_1;
    float parameter_2;
    ...
    float parameter_n;

    parallel_for_each(...)
    {
        // Here do some computations using source1 and source2.
        parameter_1 = source1[idx] o source2[idx];
        ...
        // I am computing every parameter in a different way.
        parameter_n = source1[idx] o source2[idx];
    }

    // Compute the result based on the parameters.
    result = parameter_1 o parameter_2 o ... o parameter_n;
    return result;
}
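The usual way to fit this pattern onto a reduction is to note that each parameter_i is itself a sum over idx (for example Σx, Σy, Σxy, Σx², Σy²), each computable with a reduction like the ones above, and the final result then combines those reduced scalars on the CPU. As a concrete CPU-only illustration, here is Pearson correlation written that way; the choice of Pearson is an assumption, since the original only says "a correlation value", and the function name is illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// CPU-only sketch: each "parameter" is an independent reduction over the
// two inputs; the final correlation combines the reduced scalars.
// Pearson correlation is assumed here purely for illustration.
float compute_correlation_cpu(const std::vector<float>& source1,
                              const std::vector<float>& source2)
{
    assert(source1.size() == source2.size() && !source1.empty());
    const float n = static_cast<float>(source1.size());

    // parameter_1 .. parameter_5: five reductions (Σx, Σy, Σxy, Σx², Σy²).
    float sum_x = 0, sum_y = 0, sum_xy = 0, sum_xx = 0, sum_yy = 0;
    for (std::size_t idx = 0; idx < source1.size(); ++idx)
    {
        const float x = source1[idx], y = source2[idx];
        sum_x  += x;
        sum_y  += y;
        sum_xy += x * y;
        sum_xx += x * x;
        sum_yy += y * y;
    }

    // Combine the reduced scalars on the CPU
    // (the "result = parameter_1 o parameter_2 o ..." step).
    const float cov  = sum_xy - sum_x * sum_y / n;
    const float varx = sum_xx - sum_x * sum_x / n;
    const float vary = sum_yy - sum_y * sum_y / n;
    return cov / std::sqrt(varx * vary);
}
```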