iOS: Overlaying two images with alpha, off-screen

Time: 2016-02-24 13:30:50

Tags: ios drawing alpha

Sorry about this question; I know there is a similar one, but I could not get its answer to work. It is probably some silly mistake on my side ;-)

I want to overlay two images with alpha on iOS. The images come from two videos, read by an AssetReader and stored in two CVPixelBuffers. I know that the alpha channel is not stored in video, so I get it from a third file. All the data looks fine. The problem is the overlay: if I do everything on-screen with [CIContext drawImage], it all works! But if I do it off-screen (because the videos' format differs from the screen format), I cannot get it to work:

1. drawImage does work, but only on-screen.
2. render:toCVPixelBuffer works, but ignores alpha.
3. CGContextDrawImage seems to do nothing at all (not even an error message).
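(For comparison, approach 2 can be made alpha-aware by compositing in CIImage space before rendering. Below is a minimal sketch, not the code from this question, under the assumption that the front buffer already carries premultiplied alpha; backBuffer, frontBuffer, and destinationBuffer are placeholder names, and coreImageContext is the same CIContext used further down.)

    // Composite front over back entirely in Core Image, then render the
    // result off-screen. imageByCompositingOverImage: (source-over
    // compositing) respects the front image's alpha channel.
    CIImage *back  = [CIImage imageWithCVPixelBuffer:backBuffer];
    CIImage *front = [CIImage imageWithCVPixelBuffer:frontBuffer];
    CIImage *composite = [front imageByCompositingOverImage:back];

    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    [coreImageContext render:composite
             toCVPixelBuffer:destinationBuffer
                      bounds:composite.extent
                  colorSpace:cs];
    CGColorSpaceRelease(cs);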

Can someone tell me what is going wrong?

Initialization: ... (lots of code before this) setting up the color space and the bitmap context:

    if (outputContext)
    {
        CGContextRelease(outputContext);
        CGColorSpaceRelease(outputColorSpace);
    }
    outputColorSpace = CGColorSpaceCreateDeviceRGB();
    outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                          videoFormatSize.width,
                                          videoFormatSize.height,
                                          8,
                                          CVPixelBufferGetBytesPerRow(pixelBuffer),
                                          outputColorSpace,
                                          (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
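(One thing worth checking here, an assumption on my part rather than something from the original post: CGBitmapContextCreate can only wrap pixel data in a layout Core Graphics understands, and for kCVPixelFormatType_32BGRA the matching bitmap info would be kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst rather than the default byte order, which reads the bytes as big-endian ARGB. A small sanity check might look like this:)

    // Hedged sketch: verify the buffer really is 32-bit BGRA before
    // wrapping it in a bitmap context.
    OSType fmt = CVPixelBufferGetPixelFormatType(pixelBuffer);
    if (fmt != kCVPixelFormatType_32BGRA)
        NSLog(@"unexpected pixel format: %u", (unsigned int)fmt);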

... (lots of code after this)

Drawing:

    CIImage *backImageFromSample;
    CGImageRef frontImageFromSample;
    CVImageBufferRef nextImageBuffer = myPixelBufferArray[0];
    CMSampleBufferRef sampleBuffer = NULL;
    CMSampleTimingInfo timingInfo;

    // draw the frame
    CGRect toRect;
    toRect.origin.x = 0;
    toRect.origin.y = 0;
    toRect.size = videoFormatSize;

    // background image, always full size; this part seems to work
    if (drawBack)
    {
        CVPixelBufferLockBaseAddress(backImageBuffer, kCVPixelBufferLock_ReadOnly);
        backImageFromSample = [CIImage imageWithCVPixelBuffer:backImageBuffer];
        [coreImageContext render:backImageFromSample
                 toCVPixelBuffer:nextImageBuffer
                          bounds:toRect
                      colorSpace:rgbSpace];
        CVPixelBufferUnlockBaseAddress(backImageBuffer, kCVPixelBufferLock_ReadOnly);
    }
    else
        [self clearBuffer:nextImageBuffer]; // helper not shown; see the sketch below

    // front image doesn't seem to do anything
    if (drawFront)
    {
        unsigned long int numBytes = CVPixelBufferGetBytesPerRow(frontImageBuffer) * CVPixelBufferGetHeight(frontImageBuffer);
        CVPixelBufferLockBaseAddress(frontImageBuffer, kCVPixelBufferLock_ReadOnly);

        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, CVPixelBufferGetBaseAddress(frontImageBuffer), numBytes, NULL);
        frontImageFromSample = CGImageCreate(CVPixelBufferGetWidth(frontImageBuffer),
                                             CVPixelBufferGetHeight(frontImageBuffer),
                                             8, 32,
                                             CVPixelBufferGetBytesPerRow(frontImageBuffer),
                                             outputColorSpace,
                                             (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst,
                                             provider, NULL, NO, kCGRenderingIntentDefault);
        CGContextDrawImage(outputContext, inrect, frontImageFromSample); // inrect is defined elsewhere
        CVPixelBufferUnlockBaseAddress(frontImageBuffer, kCVPixelBufferLock_ReadOnly);
        CGImageRelease(frontImageFromSample);
        CGDataProviderRelease(provider); // release our reference; the image retains the provider
    }
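(The clearBuffer: helper referenced above is not shown in the question. A hypothetical implementation, assuming a contiguous BGRA buffer, could be as simple as:)

    // Hypothetical sketch of clearBuffer:, zeroing the pixel data while
    // the buffer is locked for writing.
    - (void)clearBuffer:(CVPixelBufferRef)buffer
    {
        CVPixelBufferLockBaseAddress(buffer, 0);
        memset(CVPixelBufferGetBaseAddress(buffer), 0,
               CVPixelBufferGetBytesPerRow(buffer) * CVPixelBufferGetHeight(buffer));
        CVPixelBufferUnlockBaseAddress(buffer, 0);
    }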

Any ideas?

1 Answer:

Answer 0 (score: 1):

So apparently I should stop and ask questions on Stack Overflow more often. Every time I do, after hours of debugging, I find the answer shortly afterwards. Sorry. The problem was in the initialization: you cannot call CVPixelBufferGetBaseAddress without locking the base address first O_o. The address comes back as NULL, which apparently is allowed, and the subsequent operations then simply do nothing. So the correct code is:

    if (outputContext)
    {
        CGContextRelease(outputContext);
        CGColorSpaceRelease(outputColorSpace);
    }
    // lock the buffer so CVPixelBufferGetBaseAddress returns a valid pointer
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    outputColorSpace = CGColorSpaceCreateDeviceRGB();
    outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                          videoFormatSize.width,
                                          videoFormatSize.height,
                                          8,
                                          CVPixelBufferGetBytesPerRow(pixelBuffer),
                                          outputColorSpace,
                                          (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
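(As a follow-up, here is a sketch, not part of the original answer, of one way to make this mistake impossible: confine the base address to a block that only runs while the buffer is locked.)

    // Hedged helper sketch: the base address is only visible inside the
    // block, and the buffer is guaranteed to be locked while it runs.
    static void WithLockedPixelBuffer(CVPixelBufferRef buffer,
                                      void (^body)(void *base, size_t bytesPerRow))
    {
        CVPixelBufferLockBaseAddress(buffer, 0);
        body(CVPixelBufferGetBaseAddress(buffer),
             CVPixelBufferGetBytesPerRow(buffer));
        CVPixelBufferUnlockBaseAddress(buffer, 0);
    }

With a helper like that, the CGBitmapContextCreate call and every later CGContextDrawImage into the context would run inside the block, so the pointer stays valid for the whole drawing pass.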