Webcam color calibration using OpenCV

Asked: 2017-08-28 20:26:44

Tags: python opencv image-processing computer-vision webcam

After ordering six webcams online for a project, I noticed that the colors of their output were inconsistent.

To compensate for this, I tried capturing a template image and extracting its R, G, and B histograms, then matching the RGB histograms of target images against them.
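The per-channel histogram comparison described above can be sketched in plain NumPy. This is a minimal illustration, not code from the project: the synthetic arrays stand in for real frames, and the correlation metric is just one reasonable way to compare histograms.

```python
import numpy as np

def channel_histograms(image, bins=256):
    # One 256-bin histogram per channel, in B, G, R order (as OpenCV loads images).
    return [np.histogram(image[:, :, c], bins=bins, range=(0, 256))[0]
            for c in range(image.shape[2])]

def histogram_correlation(h1, h2):
    # Pearson correlation between two histograms; 1.0 means identical shape.
    return np.corrcoef(h1.astype(np.float64), h2.astype(np.float64))[0, 1]

# Synthetic stand-ins for a template frame and a slightly brighter target frame.
rng = np.random.default_rng(0)
template = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
target = np.clip(template.astype(np.int16) + 20, 0, 255).astype(np.uint8)

t_hists = channel_histograms(template)
s_hists = channel_histograms(target)
scores = [histogram_correlation(a, b) for a, b in zip(t_hists, s_hists)]
```

A low per-channel score would flag which channel of a camera drifts furthest from the template.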

This was inspired by the description of a solution to the very similar question Comparative color calibration.

An ideal solution would look like this:

[image]

In an attempt to solve this, I wrote the following script, which performs poorly:

Edit (thanks to @DanMašek and @api55):

import numpy as np

def show_image(title, image, width = 300):
    # resize the image to have a constant width, just to
    # make displaying the images take up less screen real
    # estate
    r = width / float(image.shape[1])
    dim = (width, int(image.shape[0] * r))
    resized = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)

    # show the resized image
    cv2.imshow(title, resized)


def hist_match(source, template):
    """
    Adjust the pixel values of a grayscale image such that its histogram
    matches that of a target image

    Arguments:
    -----------
        source: np.ndarray
            Image to transform; the histogram is computed over the flattened
            array
        template: np.ndarray
            Template image; can have different dimensions to source
    Returns:
    -----------
        matched: np.ndarray
            The transformed output image
    """

    oldshape = source.shape
    source = source.ravel()
    template = template.ravel()

    # get the set of unique pixel values and their corresponding indices and
    # counts
    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)

    # take the cumsum of the counts and normalize by the number of pixels to
    # get the empirical cumulative distribution functions for the source and
    # template images (maps pixel value --> quantile)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]

    # interpolate linearly to find the pixel values in the template image
    # that correspond most closely to the quantiles in the source image
    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)

    return interp_t_values[bin_idx].reshape(oldshape)

from matplotlib import pyplot as plt
import cv2

source = cv2.imread('/media/somadetect/Lexar/color_transfer_data/1/frame10.png')
s_b = source[:,:,0]
s_g = source[:,:,1]
s_r = source[:,:,2]
template =  cv2.imread('/media/somadetect/Lexar/color_transfer_data/5/frame6.png')
t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]

matched_b = hist_match(s_b, t_b)
matched_g = hist_match(s_g, t_g)
matched_r = hist_match(s_r, t_r)

y,x,c = source.shape
transfer  = np.empty((y,x,c), dtype=np.uint8)

transfer[:,:,0] = matched_r
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_b

show_image("Template", template)
show_image("Target", source)
show_image("Transfer", transfer)
cv2.waitKey(0)
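One way to sanity-check hist_match before pointing it at real frames is to run it on synthetic grayscale arrays and confirm the output tracks the template's statistics. This is a self-contained sketch (the function body repeats the one above; the test images are made up):

```python
import numpy as np

def hist_match(source, template):
    # Remap source pixel values so their empirical CDF matches the template's.
    oldshape = source.shape
    source = source.ravel()
    template = template.ravel()
    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]
    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)
    return interp_t_values[bin_idx].reshape(oldshape)

rng = np.random.default_rng(1)
# A dark, low-contrast source and a brighter, wider-spread template.
source = np.clip(rng.normal(50, 10, (100, 100)), 0, 255).astype(np.uint8)
template = np.clip(rng.normal(150, 30, (80, 120)), 0, 255).astype(np.uint8)

matched = hist_match(source, template)
```

If the matching works, the output's mean should land near the template's mean even though the inputs have different shapes and brightness.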

Template image:

[image]

Target image:

[image]

Matched image:

[image]

I then found Adrian's (pyimagesearch) attempt at solving a very similar problem at the following link:

Fast Color Transfer

The results seem fairly good, apart from some saturation defects. I would welcome any suggestions or pointers on how to address this, so that all the webcam outputs can be calibrated to produce similar colors based on a single template image.
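The idea behind that post is Reinhard-style statistics transfer: shift and scale each channel so its mean and standard deviation match the template's. The sketch below applies it per raw channel in plain NumPy for illustration only; the actual post works in the L*a*b* color space, which usually looks better, and stats_transfer is an illustrative name, not code from the post.

```python
import numpy as np

def stats_transfer(source, template):
    # Shift/scale each channel of `source` so its mean and std match `template`.
    out = np.empty_like(source, dtype=np.float64)
    for c in range(source.shape[2]):
        s = source[:, :, c].astype(np.float64)
        t = template[:, :, c].astype(np.float64)
        s_std = s.std() if s.std() > 0 else 1.0
        out[:, :, c] = (s - s.mean()) * (t.std() / s_std) + t.mean()
    # Clipping back to [0, 255] is where the saturation defects can come from.
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
template = rng.integers(80, 200, (60, 80, 3)).astype(np.uint8)
source = rng.integers(0, 100, (60, 80, 3)).astype(np.uint8)
result = stats_transfer(source, template)
```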

2 Answers:

Answer 0 (score: 1)

Your script performs poorly because you are using the wrong indices.

OpenCV images are BGR, so this is correct in your code:

source = cv2.imread('/media/somadetect/Lexar/color_transfer_data/1/frame10.png')
s_b = source[:,:,0]
s_g = source[:,:,1]
s_r = source[:,:,2]
template =  cv2.imread('/media/somadetect/Lexar/color_transfer_data/5/frame6.png')
t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]

But this is wrong:

transfer[:,:,0] = matched_r
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_b

because here you are writing the channels in RGB order instead of BGR, so the colors get swapped while OpenCV still treats the image as BGR. That is why it looks strange.

It should be:

transfer[:,:,0] = matched_b
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_r

As another possible solution, you can look at the parameters that can be set in the cameras themselves. Sometimes they have auto parameters that you can set manually so that all the cameras match. Also, be careful with those auto parameters: usually white balance and focus, among others, are set automatically, and they can vary a lot within the same camera from one capture to the next (depending on illumination, etc.).
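That advice can be sketched with OpenCV's VideoCapture properties. This is a hardware-configuration sketch, not tested code: which properties are honored depends entirely on the camera driver and capture backend, and the numeric values below are purely illustrative.

```python
import cv2

# Lock the auto parameters so every webcam starts from the same manual
# settings. cap.set() returns False when a property is unsupported, so
# check the return values on your own hardware.
cap = cv2.VideoCapture(0)

cap.set(cv2.CAP_PROP_AUTO_WB, 0)            # disable auto white balance
cap.set(cv2.CAP_PROP_WB_TEMPERATURE, 4500)  # illustrative fixed color temperature
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)      # manual exposure (value is backend-specific)
cap.set(cv2.CAP_PROP_EXPOSURE, 100)         # illustrative fixed exposure

ok, frame = cap.read()
cap.release()
```

Repeating the same settings on all six cameras removes one major source of frame-to-frame color drift before any software calibration runs.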

Update:

DanMašek points out that

t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]

is wrong, because r should be index 2 and g should be index 1 (note also that these channels should be read from template, not source):

t_b = template[:,:,0]
t_g = template[:,:,1]
t_r = template[:,:,2]

Answer 1 (score: 1)

I tried a white-patch-based calibration procedure. Here is the link: https://theiszm.wordpress.com/tag/white-balance/

The code snippet follows:

import cv2
import math
import numpy as np
import sys
from matplotlib import pyplot as plt

def hist_match(source, template):
    """
    Adjust the pixel values of a grayscale image such that its histogram
    matches that of a target image

    Arguments:
    -----------
        source: np.ndarray
            Image to transform; the histogram is computed over the flattened
            array
        template: np.ndarray
            Template image; can have different dimensions to source
    Returns:
    -----------
        matched: np.ndarray
            The transformed output image
    """

    oldshape = source.shape
    source = source.ravel()
    template = template.ravel()

    # get the set of unique pixel values and their corresponding indices and
    # counts
    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)

    # take the cumsum of the counts and normalize by the number of pixels to
    # get the empirical cumulative distribution functions for the source and
    # template images (maps pixel value --> quantile)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]

    # interpolate linearly to find the pixel values in the template image
    # that correspond most closely to the quantiles in the source image
    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)
    return interp_t_values[bin_idx].reshape(oldshape)

# Read original image
im_o = cv2.imread('/media/Lexar/color_transfer_data/5/frame10.png')
im = im_o.copy()  # work on a copy so the original is preserved for the Y-plane match below
cv2.imshow('Org',im)
cv2.waitKey()

B = im[:,:, 0]
G = im[:,:, 1]
R = im[:,:, 2]

R= np.array(R).astype('float')
G= np.array(G).astype('float')
B= np.array(B).astype('float')

# Extract the pixel chosen as pure white (R = 255, G = 255, B = 255)
B_white = B[168, 351]
G_white = G[168, 351]
R_white = R[168, 351]

print(B_white)
print(G_white)
print(R_white)

# Compensate for the bias using normalization statistics
R_balanced = R / R_white
G_balanced = G / G_white
B_balanced = B / B_white

R_balanced[np.where(R_balanced > 1)] = 1
G_balanced[np.where(G_balanced > 1)] = 1
B_balanced[np.where(B_balanced > 1)] = 1

B_balanced=B_balanced * 255
G_balanced=G_balanced * 255
R_balanced=R_balanced * 255

B_balanced= np.array(B_balanced).astype('uint8')
G_balanced= np.array(G_balanced).astype('uint8')
R_balanced= np.array(R_balanced).astype('uint8')

im[:,:, 0] = (B_balanced)
im[:,:, 1] = (G_balanced)
im[:,:, 2] = (R_balanced)

# Notice saturation artifacts 
cv2.imshow('frame',im)
cv2.waitKey()

# Extract the Y plane in original image and match it to the transformed image 
im_o = cv2.cvtColor(im_o, cv2.COLOR_BGR2YCR_CB)
im_o_Y = im_o[:,:,0]

im = cv2.cvtColor(im, cv2.COLOR_BGR2YCR_CB)
im_Y = im[:,:,0]

matched_y = hist_match(im_o_Y, im_Y)
matched_y= np.array(matched_y).astype('uint8')
im[:,:,0] = matched_y

im_final = cv2.cvtColor(im, cv2.COLOR_YCR_CB2BGR)
cv2.imshow('frame',im_final)
cv2.waitKey()

The input image is:

[image]

The result of the script is:

[image]
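As an aside, the white-patch step above depends on one hand-picked pixel; a gray-world variant avoids that by assuming the scene averages out to neutral gray. This is a NumPy-only sketch of that alternative, not part of the procedure above:

```python
import numpy as np

def gray_world_balance(image):
    # Scale each channel so all channel means equal the overall mean.
    # Gray-world assumption: the average scene color is neutral, so any
    # per-channel deviation from the global mean is treated as a color cast.
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, img.shape[2]).mean(axis=0)
    gray = channel_means.mean()
    balanced = img * (gray / channel_means)
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Synthetic frame with a strong blue cast (BGR order, as OpenCV uses).
rng = np.random.default_rng(3)
frame = rng.integers(60, 120, (50, 50, 3)).astype(np.uint8)
frame[:, :, 0] = np.clip(frame[:, :, 0].astype(int) + 80, 0, 255)  # boost blue

balanced = gray_world_balance(frame)
```

Like the white-patch method, it still clips, but it needs no reference pixel in the frame.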

Thanks everyone for the suggestions and pointers!!