Which type of ctypes pointer should be passed to NI IMAQ's imgBayerColorDecode?

Date: 2013-06-13 19:23:37

Tags: python ctypes labview

I am using ctypes to access the image acquisition API from National Instruments (NI-IMAQ). It contains a function called imgBayerColorDecode(), which I am using on the Bayer-encoded image returned by the imgSnap() function. I would like to compare the decoded output (i.e. an RGB image) against some numpy ndarrays that I will create from the raw data, which is what imgSnap returns.

However, there are two problems.

The first one is simple: getting the imgbuffer returned by imgSnap into a numpy array. There is a gotcha right away: if your machine is 64-bit and has more than 3 GB of RAM, you cannot create the array with numpy and pass it as a pointer to imgSnap. That is why you have to implement the workaround described on NI's forums (NI ref - first 2 posts): disable the error message (the call to imaq.niimaquDisable32bitPhysMemLimitEnforcement in the code below) and make sure it is the IMAQ library that creates the memory required for the image (imaq.imgCreateBuffer).

After that, this recipe on SO should be able to convert the buffer back into a numpy array. But I am not sure whether I have made the right changes to the data types: the camera has 1020x1368 pixels, and each pixel intensity is recorded with 10-bit precision. The image comes back over a CameraLink connection, and I assume it is transferred as 2 bytes per pixel for ease of data transfer. Does this mean I have to adapt the recipe given in the other SO question from this:

buffer = numpy.core.multiarray.int_asbuffer(ctypes.addressof(y.contents), 8*array_length)
a = numpy.frombuffer(buffer, float)

to this:

bufsize = 1020*1368*2
buffer = numpy.core.multiarray.int_asbuffer(ctypes.addressof(y.contents), bufsize)
a = numpy.frombuffer(buffer, numpy.int16)
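
Or, as an alternative to int_asbuffer/frombuffer, I could presumably let numpy wrap the ctypes pointer directly with numpy.ctypeslib.as_array. A small sketch, assuming imgbuffer is the POINTER(c_uint16) that IMAQ fills via imgSnap in the code further down (the stand-in allocation is only there to make the snippet self-contained):

import ctypes as C
import numpy as np

height, width = 1020, 1368

# Stand-in for the IMAQ-allocated buffer; in the real program imgbuffer is the
# POINTER(c_uint16) filled by imgCreateBuffer/imgSnap in the full listing below.
_backing = (C.c_uint16 * (height * width))()
imgbuffer = C.cast(_backing, C.POINTER(C.c_uint16))

# View the buffer as a 2-D uint16 array without copying, then copy so the data
# survives any reuse or freeing of the driver-owned memory.
raw = np.ctypeslib.as_array(imgbuffer, shape=(height, width))
frame = raw.copy()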

The second problem is that imgBayerColorDecode() does not give me the output I am expecting. Below are two images: the first is the output of imgSnap, saved with imgSessionSaveBufferEx(); the second is the same imgSnap output after it has been demosaiced by imgBayerColorDecode().

  • Raw data: i42.tinypic.com/znpr38.jpg
  • Bayer decoded: i39.tinypic.com/n12nmq.jpg

As you can see, the Bayer-decoded image is still a greyscale image, and it does not resemble the original image either (a small note: the images were rescaled with imagemagick for uploading). The original image was taken with a red color filter in front of some mask. From it (and the two other color filters), I know that the Bayer color filter looks like this in the top-left corner:

BGBG
GRGR
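
For the comparison I have in mind, the raw mosaic can already be split into its color planes with plain numpy slicing. A rough sketch, assuming the BGBG/GRGR layout above and a (height, width) uint16 array (the zeros array is only a stand-in for the snapped frame):

import numpy as np

height, width = 1020, 1368
raw = np.zeros((height, width), dtype=np.uint16)  # stand-in for the snapped frame

blue  = raw[0::2, 0::2]                           # B sites: even rows, even columns
green = (raw[0::2, 1::2].astype(np.uint32)
         + raw[1::2, 0::2]) // 2                  # average the two G sites per 2x2 cell
red   = raw[1::2, 1::2]                           # R sites: odd rows, odd columns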

I believe I am doing something wrong in passing the right type of pointer to imgBayerColorDecode; my code is attached below.

#!/usr/bin/env python
from __future__ import division

import ctypes as C
import ctypes.util as Cutil
import time


# useful references:
# location of the niimaq.h: C:\Program Files (x86)\National Instruments\NI-IMAQ\Include
# location of the camera files: C:\Users\Public\Documents\National Instruments\NI-IMAQ\Data
# check it C:\Users\Public\Documents\National Instruments\NI-IMAQ\Examples\MSVC\Color\BayerDecode

class IMAQError(Exception):
    """A class for errors produced during the calling of National Intrument's IMAQ functions.
    It will also produce the textual error message that corresponds to a specific code."""

    def __init__(self, code):
        self.code = code
        text = C.create_string_buffer(256)  # writable buffer that imgShowError can fill in
        imaq.imgShowError(code, text)
        self.message = "{}: {}".format(self.code, text.value)
        # Call the base class constructor with the parameters it needs
        Exception.__init__(self, self.message)


def imaq_error_handler(code):
    """Print the textual error message that is associated with the error code."""

    if code < 0:
        # clean up before raising; statements placed after a raise would never be reached
        free_associated_resources = 1
        imaq.imgSessionStopAcquisition(sid)
        imaq.imgClose(sid, free_associated_resources)
        imaq.imgClose(iid, free_associated_resources)
        raise IMAQError(code)
    else:
        return code

if __name__ == '__main__':
    imaqlib_path = Cutil.find_library('imaq')
    imaq = C.windll.LoadLibrary(imaqlib_path)


    imaq_function_list = [  # this is not an exhaustive list, merely the ones used in this program
        imaq.imgGetAttribute,
        imaq.imgInterfaceOpen,
        imaq.imgSessionOpen,
        imaq.niimaquDisable32bitPhysMemLimitEnforcement,  # because we're running on a 64-bit machine with over 3GB of RAM
        imaq.imgCreateBufList,
        imaq.imgCreateBuffer,
        imaq.imgSetBufferElement,
        imaq.imgSnap,
        imaq.imgSessionSaveBufferEx,
        imaq.imgSessionStopAcquisition,
        imaq.imgClose,
        imaq.imgCalculateBayerColorLUT,
        imaq.imgBayerColorDecode ]

    # for all imaq functions we're going to call, we should specify that if they
    # produce an error (a number), we want to see the error message (textually)
    for func in imaq_function_list:
        func.restype = imaq_error_handler




    INTERFACE_ID = C.c_uint32
    SESSION_ID = C.c_uint32
    BUFLIST_ID = C.c_uint32
    iid = INTERFACE_ID(0)
    sid = SESSION_ID(0)
    bid = BUFLIST_ID(0)
    array_16bit = 2**16 * C.c_uint32
    redLUT, greenLUT, blueLUT  = [ array_16bit() for _ in range(3) ]
    red_gain, blue_gain, green_gain = [ C.c_double(val) for val in (1., 1., 1.) ]

    # OPEN A COMMUNICATION CHANNEL WITH THE CAMERA
    # our camera has been given its proper name in Measurement & Automation Explorer (MAX)
    lcp_cam = 'JAI CV-M7+CL'
    imaq.imgInterfaceOpen(lcp_cam, C.byref(iid))
    imaq.imgSessionOpen(iid, C.byref(sid))

    # START C MACROS DEFINITIONS
    # define some C preprocessor macros (these are all defined in the niimaq.h file)
    _IMG_BASE = 0x3FF60000

    IMG_BUFF_ADDRESS = _IMG_BASE + 0x007E  # void *
    IMG_BUFF_COMMAND = _IMG_BASE + 0x007F  # uInt32
    IMG_BUFF_SIZE = _IMG_BASE + 0x0082  #uInt32
    IMG_CMD_STOP = 0x08  # single shot acquisition

    IMG_ATTR_ROI_WIDTH = _IMG_BASE + 0x01A6
    IMG_ATTR_ROI_HEIGHT = _IMG_BASE + 0x01A7
    IMG_ATTR_BYTESPERPIXEL = _IMG_BASE + 0x0067  
    IMG_ATTR_COLOR = _IMG_BASE + 0x0003  # true = supports color
    IMG_ATTR_PIXDEPTH = _IMG_BASE + 0x0002  # pix depth in bits
    IMG_ATTR_BITSPERPIXEL = _IMG_BASE + 0x0066 # aka the bit depth

    IMG_BAYER_PATTERN_GBGB_RGRG = 0
    IMG_BAYER_PATTERN_GRGR_BGBG = 1
    IMG_BAYER_PATTERN_BGBG_GRGR = 2
    IMG_BAYER_PATTERN_RGRG_GBGB = 3
    # END C MACROS DEFINITIONS

    width, height = C.c_uint32(), C.c_uint32()
    has_color, pixdepth, bitsperpixel, bytes_per_pixel = [ C.c_uint8() for _ in range(4) ]

    # poll the camera (or is it the camera file (icd)?) for these attributes and store them in the variables
    for var, macro in [ (width, IMG_ATTR_ROI_WIDTH), 
                        (height, IMG_ATTR_ROI_HEIGHT),
                        (bytes_per_pixel, IMG_ATTR_BYTESPERPIXEL),
                        (pixdepth, IMG_ATTR_PIXDEPTH),
                        (has_color, IMG_ATTR_COLOR),
                        (bitsperpixel, IMG_ATTR_BITSPERPIXEL) ]:
        imaq.imgGetAttribute(sid, macro, C.byref(var))  


    print("Image ROI size: {} x {}".format(width.value, height.value))
    print("Pixel depth: {}\nBits per pixel: {} -> {} bytes per pixel".format(
        pixdepth.value, 
        bitsperpixel.value, 
        bytes_per_pixel.value))

    bufsize = width.value*height.value*bytes_per_pixel.value
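    # part of the 64-bit / >3 GB RAM workaround from the NI forums (see the question text above)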
    imaq.niimaquDisable32bitPhysMemLimitEnforcement(sid)

    # create the buffer (in a list)
    imaq.imgCreateBufList(1, C.byref(bid))  # Creates a buffer list with one buffer

    # CONFIGURE THE PROPERTIES OF THE BUFFER
    imgbuffer = C.POINTER(C.c_uint16)()  # create a null pointer
    RGBbuffer = C.POINTER(C.c_uint32)()  # placeholder for the Bayer decoded imgbuffer (i.e. demosaiced imgbuffer)
    imaq.imgCreateBuffer(sid, 0, bufsize, C.byref(imgbuffer))  # allocate memory (the buffer) on the host machine (param2==0)
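    # assumption: the decoded output is packed 32-bit RGB, hence 4 bytes per output pixel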
    imaq.imgCreateBuffer(sid, 0, width.value*height.value * 4, C.byref(RGBbuffer))

    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_ADDRESS, C.cast(imgbuffer, C.POINTER(C.c_uint32)))  # my guess is that the cast to an uint32 is necessary to prevent 64-bit callable memory addresses
    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_SIZE, bufsize)
    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_COMMAND, IMG_CMD_STOP)

    # CALCULATE THE LOOKUP TABLES TO CONVERT THE BAYER ENCODED IMAGE TO RGB (=DEMOSAICING)
    imaq.imgCalculateBayerColorLUT(red_gain, green_gain, blue_gain, redLUT, greenLUT, blueLUT, bitsperpixel)


    # CAPTURE THE RAW DATA 

    imgbuffer_vpp = C.cast(C.byref(imgbuffer), C.POINTER(C.c_void_p))
    imaq.imgSnap(sid, imgbuffer_vpp)
    #imaq.imgSnap(sid, imgbuffer)  # <- doesn't work (img produced is entirely black). The above 2 lines are required
    imaq.imgSessionSaveBufferEx(sid, imgbuffer,"bayer_mosaic.png")
    print('1 taken')


    imaq.imgBayerColorDecode(RGBbuffer, imgbuffer, height, width, width, width, redLUT, greenLUT, blueLUT, IMG_BAYER_PATTERN_BGBG_GRGR, bitsperpixel, 0) 
    imaq.imgSessionSaveBufferEx(sid,RGBbuffer,"snapshot_decoded.png");

    free_associated_resources = 1
    imaq.imgSessionStopAcquisition(sid)
    imaq.imgClose(sid, free_associated_resources )
    imaq.imgClose(iid, free_associated_resources )
    print "Finished"

Follow-up: after a discussion with an NI representative, I have become convinced that the second issue stems from imgBayerColorDecode being limited to 8-bit input images before the 2012 release (we are working with the 2010 version). However, I would like to confirm this: if I convert my 10-bit image to an 8-bit image, keeping only the most significant bits, and pass that version to imgBayerColorDecode, I would expect to see an RGB image.

To do so, I convert the imgbuffer to a numpy array and shift the data down by 2 bits:

np_buffer = np.core.multiarray.int_asbuffer(
    ctypes.addressof(imgbuffer.contents), bufsize)
flat_data = np.frombuffer(np_buffer, np.uint16)

# from 10 bit to 8 bit, keeping only the non-empty bytes
Z = (flat_data>>2).view(dtype='uint8')[::2] 
Z2 = Z.copy()  # just in case
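
As a side note, the view()/[::2] trick above picks the low byte of each uint16 and therefore assumes a little-endian machine; an equivalent reduction that avoids this assumption (a sketch, with a stand-in for flat_data) would be:

import numpy as np

flat_data = np.zeros(1020 * 1368, dtype=np.uint16)  # stand-in for the flattened 10-bit data read from imgbuffer

# same 10-bit -> 8-bit reduction, without relying on the in-memory byte order
Z2 = (flat_data >> 2).astype(np.uint8)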

Now I pass the ndarray Z2 to imgBayerColorDecode:

bitsperpixel = 8
imaq.imgBayerColorDecode(RGBbuffer, Z2.ctypes.data_as(
    ctypes.POINTER(ctypes.c_uint8)), height, width, 
    width, width, redLUT, greenLUT, blueLUT, 
    IMG_BAYER_PATTERN_BGBG_GRGR, bitsperpixel, 0)
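
For reference, the matching LUT change amounts roughly to the following sketch; it mirrors the imgCalculateBayerColorLUT call from the full listing, but with 256-entry tables and a bit depth of 8 (loading the library again here only to keep the snippet self-contained):

import ctypes as C
import ctypes.util as Cutil

imaq = C.windll.LoadLibrary(Cutil.find_library('imaq'))  # same library as in the full listing

# 256-entry lookup tables for 8-bit input data instead of the 2**16-entry ones above
array_8bit = 256 * C.c_uint32
redLUT, greenLUT, blueLUT = [ array_8bit() for _ in range(3) ]
red_gain, green_gain, blue_gain = [ C.c_double(1.0) for _ in range(3) ]

imaq.imgCalculateBayerColorLUT(red_gain, green_gain, blue_gain,
                               redLUT, greenLUT, blueLUT, 8)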

Note that, compared with the original code shown above, redLUT, greenLUT and blueLUT are now only 256-element arrays. Finally I call imaq.imgSessionSaveBufferEx(sid, RGBbuffer, save_path). But the result is still a greyscale image, and the image's shape is not preserved, so I am still doing something seriously wrong. Any ideas?

1 Answer:

Answer 0 (score: 0)

After playing around with this for quite a while, it turned out that the RGBbuffer mentioned above does hold the correct data, but that imgSessionSaveBufferEx does something strange at that point.

When I pass the data from RGBbuffer back to numpy, reshape this 1D array to the dimensions of the image, and then split it into color channels by masking and bit-shifting (e.g. red_channel = (np_RGB & 0x00FF0000) >> 16), I can save it as a nice color image in png format with PIL or pypng.
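
Roughly, that conversion looks like the sketch below. I am assuming here that each decoded pixel is a single uint32 with red in bits 16-23, green in bits 8-15 and blue in bits 0-7 (I have not checked what the remaining byte holds); the stand-in allocation only replaces the real RGBbuffer from the question, and I go straight to a 2-D view instead of reshaping a flat array, which comes down to the same thing:

import ctypes as C
import numpy as np
from PIL import Image

height, width = 1020, 1368

# Stand-in for RGBbuffer; in the real program this is the POINTER(c_uint32)
# that imgBayerColorDecode wrote into.
_backing = (C.c_uint32 * (height * width))()
RGBbuffer = C.cast(_backing, C.POINTER(C.c_uint32))

# View the flat buffer with the dimensions of the image, without copying.
np_RGB = np.ctypeslib.as_array(RGBbuffer, shape=(height, width))

# Split the packed 32-bit pixels into 8-bit channels (layout assumed, see above).
red   = ((np_RGB & 0x00FF0000) >> 16).astype(np.uint8)
green = ((np_RGB & 0x0000FF00) >> 8).astype(np.uint8)
blue  = (np_RGB & 0x000000FF).astype(np.uint8)

Image.fromarray(np.dstack((red, green, blue)), 'RGB').save('snapshot_decoded_np.png')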

I have not yet figured out why imgSessionSaveBufferEx behaves so strangely, but the approach above works (even though it is really inefficient speed-wise).