I am currently using the following function to apply my alpha channels (stored as separate GRAY cv::Mats) to images:
void percepUnit::applyAlpha() {
    /* Original split/merge approach:
    std::vector<cv::Mat> channels;
    if (image.rows == mask.rows and image.cols == mask.cols) {
        cv::split(image, channels);      // break image into channels
        channels.push_back(mask);        // append alpha channel
        cv::merge(channels, alphaImage); // combine channels
    } */

    // Avoid split/merge by copying all four channels in a single mixChannels call.
    cv::Mat src[] = {this->image, this->mask};
    int from_to[] = {0,0, 1,1, 2,2, 3,3}; // BGR from image, alpha from mask
    this->alphaImage = cv::Mat(image.rows, image.cols, CV_8UC4);
    cv::mixChannels(src, 2, &(this->alphaImage), 1, from_to, 4);
}
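(For reference, mixChannels numbers source channels continuously across the src array, so channels 0-2 come from image and channel 3 from mask. A minimal standalone sketch of the same mapping, with made-up sizes rather than my actual data:)

#include <opencv2/opencv.hpp>

// Combine a BGR image and a GRAY mask into a BGRA image in one call.
cv::Mat bgr(720, 1280, CV_8UC3, cv::Scalar(10, 20, 30));
cv::Mat gray(720, 1280, CV_8UC1, cv::Scalar(128));
cv::Mat bgra(720, 1280, CV_8UC4);

cv::Mat src[] = {bgr, gray};
// Source channels are numbered across both Mats: 0..2 = bgr, 3 = gray.
int from_to[] = {0,0, 1,1, 2,2, 3,3};
cv::mixChannels(src, 2, &bgra, 1, from_to, 4);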
I had to increase the resolution of the cv::Mats to 1280x720 (due to: How to replace an instance with another instance via pointer?), and now this function runs very slowly, taking up nearly 50% of the processing in my already heavy segmentation application.
Any suggestions on how to apply these alpha channels faster? (I am running OpenCV with GPU support, in case you have any GPU-based solutions.)
Answer 0 (score: 2)
I ended up doing the split/merge on the GPU:
void percepUnit::applyAlpha() {
    cv::gpu::GpuMat tmpImage, tmpMask, tmpAlphaImage;
    std::vector<cv::gpu::GpuMat> channels;

    tmpImage.upload(this->image);              // copy inputs to the device
    tmpMask.upload(this->mask);
    cv::gpu::split(tmpImage, channels);        // break image into channels
    channels.push_back(tmpMask);               // append alpha channel
    cv::gpu::merge(channels, tmpAlphaImage);   // combine channels
    tmpAlphaImage.download(this->alphaImage);  // copy result back to the host

    // Free the temporary device buffers.
    tmpAlphaImage.release();
    tmpImage.release();
    tmpMask.release();
    channels[0].release();
    channels[1].release();
    channels[2].release();
}
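A possible further refinement (just a sketch, not something I have benchmarked): keep the device buffers as class members, here hypothetical members gpuImage, gpuMask, gpuAlphaImage and gpuChannels on percepUnit, so that upload/split/merge reuse the existing 1280x720 allocations instead of allocating and releasing them on every call:

// Hypothetical variant with persistent GpuMat members, so device memory is
// only reallocated when the input size or type changes.
void percepUnit::applyAlpha() {
    gpuImage.upload(this->image);              // cv::gpu::GpuMat member
    gpuMask.upload(this->mask);                // cv::gpu::GpuMat member
    cv::gpu::split(gpuImage, gpuChannels);     // std::vector<cv::gpu::GpuMat> member
    gpuChannels.push_back(gpuMask);            // append alpha channel
    cv::gpu::merge(gpuChannels, gpuAlphaImage);
    gpuAlphaImage.download(this->alphaImage);  // copy result back to the host
    gpuChannels.resize(3);                     // drop the appended mask for the next call
}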