I am trying to implement a remap filter with GPUImage. It works like OpenCV's remap function: it takes an input image, an x-map, and a y-map. So I subclassed GPUImageThreeInputFilter and wrote my own shader code. When the filter's input is a still image, I get the correct output image. The code is as follows:
GPUImageRemap *remapFilter = [[GPUImageRemap alloc] init];
[remapFilter forceProcessingAtSize:CGSizeMake(sphericalImageW, sphericalImageH)];
UIImage *inputImage = [UIImage imageNamed:@"test.jpg"];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
[stillImageSource addTarget:remapFilter atTextureLocation:0];
GPUImagePicture *stillImageSource1 = [[GPUImagePicture alloc] initWithImage:xmapImage];
[stillImageSource1 processImage];
[stillImageSource1 addTarget:remapFilter atTextureLocation:1];
GPUImagePicture *stillImageSource2 = [[GPUImagePicture alloc] initWithImage:ymapImage];
[stillImageSource2 processImage];
[stillImageSource2 addTarget:remapFilter atTextureLocation:2];
[stillImageSource processImage];
UIImage *filteredImage = [remapFilter imageFromCurrentlyProcessedOutput];
However, when I switch the input to the camera, the output image is wrong. After some debugging, I found that the x-map and y-map are never loaded into the second and third textures: every pixel value in both textures is 0.
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetHigh cameraPosition:AVCaptureDevicePositionFront];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageRemap *remapFilter = [[GPUImageRemap alloc] init];
[remapFilter forceProcessingAtSize:CGSizeMake(sphericalImageW, sphericalImageH)];
[videoCamera addTarget:remapFilter atTextureLocation:0];
GPUImagePicture *stillImageSource1 = [[GPUImagePicture alloc] initWithImage:xmapImage];
[stillImageSource1 processImage];
[stillImageSource1 addTarget:remapFilter atTextureLocation:1];
GPUImagePicture *stillImageSource2 = [[GPUImagePicture alloc] initWithImage:ymapImage];
[stillImageSource2 processImage];
[stillImageSource2 addTarget:remapFilter atTextureLocation:2];
GPUImageView *camView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[remapFilter addTarget:camView];
[videoCamera startCameraCapture];
Header file:
#import <GPUImage.h>
#import <GPUImageThreeInputFilter.h>
@interface GPUImageRemap : GPUImageThreeInputFilter
@end
Implementation file:
#import "GPUImageRemap.h"
NSString *const kGPUImageRemapFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
varying highp vec2 textureCoordinate3;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform sampler2D inputImageTexture3;
/*
The x and y maps originally store floating-point values in [0, imageWidth] and [0, imageHeight].
They are divided by (imageWidth - 1) and (imageHeight - 1) to normalize them to [0, 1],
then scaled by 1,000,000 and rounded to integers.
Each integer is packed into the 4 bytes of an RGBA pixel;
on upload, each unsigned byte is normalized to [0, 1] before reaching the fragment shader.
The fragment shader therefore inverts these steps to recover the original x, y coordinates.
*/
void main()
{
highp vec4 xAry0_1 = texture2D(inputImageTexture2, textureCoordinate2);
highp vec4 xAry0_255=floor(xAry0_1*vec4(255.0)+vec4(0.5));
//the largest integer we will see is under 2,000,000, so 3 bytes (R, G, B) are enough to carry our integer values
highp float xint=xAry0_255.b*exp2(16.0)+xAry0_255.g*exp2(8.0)+xAry0_255.r;
highp float x=xint/1000000.0;
highp vec4 yAry0_1 = texture2D(inputImageTexture3, textureCoordinate3);
highp vec4 yAry0_255=floor(yAry0_1*vec4(255.0)+vec4(0.5));
highp float yint=yAry0_255.b*exp2(16.0)+yAry0_255.g*exp2(8.0)+yAry0_255.r;
highp float y=yint/1000000.0;
if (x<0.0 || x>1.0 || y<0.0 || y>1.0)
{
gl_FragColor = vec4(0,0,0,1);
}
else
{
highp vec2 imgTexCoord=vec2(y, x);
gl_FragColor = texture2D(inputImageTexture, imgTexCoord);
}
}
);
@implementation GPUImageRemap
- (id)init
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageRemapFragmentShaderString]))
{
return nil;
}
return self;
}
@end
Answer 0 (score: 2)
I found the answer myself. A GPUImagePicture must not be declared as a local variable; otherwise it is released as soon as the method returns, which is why the textures uploaded to the GPU are all 0. Every GPUImagePicture variable needs to outlive the method, for example as an instance variable or property.
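A minimal sketch of the fix under ARC; the class and property names here are illustrative, not from the original code:

```objc
// Hypothetical view controller owning the filter chain. Under ARC, strong
// properties keep the GPUImagePicture objects alive after setup returns,
// so their textures are still valid when camera frames arrive.
@interface CameraViewController : UIViewController
@property (nonatomic, strong) GPUImageVideoCamera *videoCamera;
@property (nonatomic, strong) GPUImageRemap *remapFilter;
@property (nonatomic, strong) GPUImagePicture *xmapSource;
@property (nonatomic, strong) GPUImagePicture *ymapSource;
@end

// In the setup code, assign to the properties instead of local variables:
self.xmapSource = [[GPUImagePicture alloc] initWithImage:xmapImage];
[self.xmapSource processImage];
[self.xmapSource addTarget:self.remapFilter atTextureLocation:1];
```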