I am trying to implement the method from the original paper (Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions by Xiaoyang Tan and Bill Triggs) in Python 3.6 and OpenCV 4.2, but when I preprocess an image my result differs from the one in the paper, even though I used the same parameters it specifies:
1 - for gamma correction, gamma = 0.2
2 - for DoG, (sigma0 = 1, sigma1 = 2)
3 - for contrast equalization, tau = 10 and alpha = 0.1
Here is the expected result alongside what I got:
[images: original image, my result, expected result]
Here is the code I used:
import cv2 as cv
import numpy as np

img_original = cv.imread('C:/Users/Ouss/Desktop/TP-LTP/face.jpg', cv.IMREAD_GRAYSCALE)
# gamma correction
lookUpTable = np.empty((1, 256), np.uint8)
for i in range(256):
# calculating the new values
lookUpTable[0, i] = np.clip(pow(i / 255.0, 2) * 255.0, 0, 255)
# mapping the new values with the original
gamma_corrected_img = cv.LUT(img_original, lookUpTable)
# DOG
blur1 = cv.GaussianBlur(gamma_corrected_img, (3, 3), 1, borderType=cv.BORDER_REPLICATE)
blur2 = cv.GaussianBlur(gamma_corrected_img, (7, 7), 2, borderType=cv.BORDER_REPLICATE)
dog_img = cv.subtract(blur1, blur2)
# contrast equalisation
# step 1
alpha = 0.1
tau = 10
temp1 = pow(np.abs(dog_img), alpha)
meanImg = np.mean(temp1)
Contrast_Equa_step01 = dog_img / pow(meanImg, 1/alpha)
# step 2
minMat = np.abs(Contrast_Equa_step01)
minMat[minMat > tau] = tau
temp2 = pow(minMat, alpha)
meanImg2 = np.mean(temp2)
Contrast_Equa_step02 = Contrast_Equa_step01 / pow(meanImg2, 1/alpha)
CEqualized_img = tau * np.tanh((Contrast_Equa_step02/tau))
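For reference, the contrast equalization stages from the paper that these steps implement are (as I understand them):

```latex
I(x,y) \leftarrow \frac{I(x,y)}{\left(\mathrm{mean}\,|I(x',y')|^{\alpha}\right)^{1/\alpha}},
\qquad
I(x,y) \leftarrow \frac{I(x,y)}{\left(\mathrm{mean}\,\min\!\left(\tau,\,|I(x',y')|\right)^{\alpha}\right)^{1/\alpha}},
\qquad
I(x,y) \leftarrow \tau \tanh\!\left(\frac{I(x,y)}{\tau}\right)
```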
Answer 0 (score: 0)
There are several possible problems.
1) You should normalize the image to floats in the range 0 to 1 for all the operations, then scale back to integers in the range 0 to 255 for the final result.
2) You are using gamma = 2, not gamma = 0.2 (`pow(i / 255.0, 2)` raises to the power 2).
3) When computing the Gaussian blur from sigma values, you should set the kernel size to 0 so it is derived from sigma. See https://docs.opencv.org/4.1.1/d4/d86/group__imgproc__filter.html#gaabe8c836e97159a9193fb0b11ac52cf1
4) Typically, the DoG is biased by 0.5.
Here is a simple example in Python/OpenCV of computing the gamma-enhanced image, and separately the DoG image, from your face image.
Input:
import cv2
img = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE).astype('float32') / 255.0
# gamma correction
img_gamma = img**0.2
img_gamma = (255.0 * img_gamma).clip(0,255).astype('uint8')
# DOG
blur1 = cv2.GaussianBlur(img, (0,0), 1, borderType=cv2.BORDER_REPLICATE)
blur2 = cv2.GaussianBlur(img, (0,0), 2, borderType=cv2.BORDER_REPLICATE)
# compute difference and bias to 0.5
img_dog1 = blur2 - blur1 + 0.5
img_dog1 = (255.0 * img_dog1).clip(0,255).astype('uint8')
# Or compute difference and add back to image as band pass boost filter
img_dog2 = blur2 - blur1 + img
img_dog2 = (255.0 * img_dog2).clip(0,255).astype('uint8')
# show results
cv2.imshow('Face', img)
cv2.imshow('Gamma', img_gamma)
cv2.imshow('DOG1', img_dog1)
cv2.imshow('DOG2', img_dog2)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save results
cv2.imwrite('face_gamma.jpg', img_gamma)
cv2.imwrite('face_dog1.jpg', img_dog1)
cv2.imwrite('face_dog2.jpg', img_dog2)
Gamma-enhanced result from the input:
DoG result 1 from the input:
DoG result 2 from the input:
Perhaps these suggestions will help with your complete processing.
Answer 1 (score: 0)
I think your main problem is that tau = 10.0 is too large. It seems to work for me at tau = 3.0, with my image normalized to floats in the range 0 to 1, then at the end multiplied by 255 and converted to uint8.
Here is my Python/OpenCV code. I save versions of the gamma correction, the DoG and the first stage of the contrast equalization, scaled by 255 to uint8 for viewing. I also normalized the DoG (though it is not required) by dividing by its largest absolute value, stretching its values to the range -1 to 1; normalizing gives the DoG better contrast. I also swapped the order of the two blurred images in the DoG to match the contrast polarity of your expected image.
Input:
import cv2
import numpy as np
# Reference: Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions by Xiaoyang Tan and Bill Triggs
# https://lear.inrialpes.fr/pubs/2007/TT07/Tan-amfg07a.pdf
# read image as grayscale float in range 0 to 1
img = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0
# set arguments
gamma = 0.2
alpha = 0.1
tau = 3.0
# gamma correction
img_gamma = np.power(img, gamma)
img_gamma2 = (255.0 * img_gamma).clip(0,255).astype(np.uint8)
# DOG
blur1 = cv2.GaussianBlur(img_gamma, (0,0), 1, borderType=cv2.BORDER_REPLICATE)
blur2 = cv2.GaussianBlur(img_gamma, (0,0), 2, borderType=cv2.BORDER_REPLICATE)
img_dog = (blur1 - blur2)
# normalize by the largest absolute value so range is -1 to 1
img_dog = img_dog / np.amax(np.abs(img_dog))
img_dog2 = (255.0 * (0.5*img_dog + 0.5)).clip(0,255).astype(np.uint8)
# contrast equalization equation 1
img_contrast1 = np.abs(img_dog)
img_contrast1 = np.power(img_contrast1, alpha)
img_contrast1 = np.mean(img_contrast1)
img_contrast1 = np.power(img_contrast1,1.0/alpha)
img_contrast1 = img_dog/img_contrast1
# contrast equalization equation 2
img_contrast2 = np.abs(img_contrast1)
img_contrast2 = img_contrast2.clip(0,tau)
img_contrast2 = np.power(img_contrast2, alpha)
img_contrast2 = np.mean(img_contrast2)
img_contrast2 = np.power(img_contrast2,1.0/alpha)
img_contrast2 = img_contrast1/img_contrast2
img_contrast = tau * np.tanh((img_contrast2/tau))
# Scale results two ways back to uint8 in the range 0 to 255
img_contrastA = (255.0 * (img_contrast+0.5)).clip(0,255).astype(np.uint8)
img_contrastB = (255.0 * (0.5*img_contrast+0.5)).clip(0,255).astype(np.uint8)
# show results
cv2.imshow('Face', img)
cv2.imshow('Gamma', img_gamma2)
cv2.imshow('DoG', img_dog2)
cv2.imshow('CE1', img_contrast1)
cv2.imshow('CE_A', img_contrastA)
cv2.imshow('CE_B', img_contrastB)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save results
cv2.imwrite('face_contrast_equalization_A.jpg', img_contrastA)
cv2.imwrite('face_contrast_equalization_B.jpg', img_contrastB)
Depending upon how the result is scaled from float back to uint8 in the range 0 to 255, one method gives slightly different results from the other. The first method (A) biases by 0.5 before multiplying by 255; the second (B) multiplies by 0.5 and then biases by 0.5 before multiplying by 255. Method A is probably closer to what the authors of the reference did.
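The difference between the two scalings is easy to see on a few sample values (made-up numbers, just for illustration):

```python
import numpy as np

# A few sample contrast-equalized values in the range -1 to 1
img_contrast = np.array([-0.4, 0.0, 0.4])

# Method A: bias by 0.5, then scale by 255 (clips anything beyond +-0.5)
method_a = (255.0 * (img_contrast + 0.5)).clip(0, 255).astype(np.uint8)

# Method B: halve first, so the full -1 to 1 range maps into 0 to 255
method_b = (255.0 * (0.5 * img_contrast + 0.5)).clip(0, 255).astype(np.uint8)

print(method_a)  # [ 25 127 229]
print(method_b)  # [ 76 127 178]
```

Method B preserves the full dynamic range but at half the contrast, which is why method A may look closer to the paper's figures.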
Adjust tau up or down to get the amount of contrast you want.