The task I am trying to accomplish is to replicate Photoshop's RGB to LAB conversion.
For simplicity, I will describe what I did to extract the L channel only.
Here is the RGB image, containing all RGB colors (please click and download):
In order to extract Photoshop's LAB, what I did was the following:
This is the L channel of Photoshop (this is exactly what is seen on screen when the L channel is selected in LAB mode):
My main reference is Bruce Lindbloom's great site. It is also known that Photoshop uses the D50 white point in its LAB mode (see also Wikipedia's LAB Color Space page).
Assuming the RGB image is in sRGB format, the conversion is given by:
sRGB -> XYZ (White Point D65) -> XYZ (White Point D50) -> LAB
Assuming the data is float in the [0, 1] range, the stages are given by:
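Per Bruce Lindbloom's formulas (these are the ones the code below implements):

1. sRGB inverse companding, applied per channel to c in [0, 1]:
   cLinear = c / 12.92                        for c <= 0.04045
   cLinear = ((c + 0.055) / 1.055) ^ 2.4      otherwise
2. Linear RGB to XYZ using the D50 adapted sRGB matrix; for the L channel only the Y row is needed:
   Y = 0.2225045 * R + 0.7168786 * G + 0.0606169 * B
3. Y to L:
   L = 116 * Y^(1/3) - 16    for Y > 0.008856
   L = 903.3 * Y             otherwise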
Since, to begin with, I am only after the L channel, things are a bit simpler. The images were loaded into MATLAB and converted to the float [0, 1] range.
Here is the code:
%% Setting Environment Parameters
INPUT_IMAGE_RGB = 'RgbColors.png';
INPUT_IMAGE_L_PHOTOSHOP = 'RgbColorsL.png';
%% Loading Data
mImageRgb = im2double(imread(INPUT_IMAGE_RGB));
mImageLPhotoshop = im2double(imread(INPUT_IMAGE_L_PHOTOSHOP));
mImageLPhotoshop = mImageLPhotoshop(:, :, 1); %<! All channels are identical
%% Convert to L Channel
mImageLMatlab = ConvertRgbToL(mImageRgb, 1);
%% Display Results
figure();
imshow(mImageLPhotoshop);
title('L Channel - Photoshop');
figure();
imshow(mImageLMatlab);
title('L Channel - MATLAB');
The function ConvertRgbToL() is given by:
function [ mLChannel ] = ConvertRgbToL( mRgbImage, sRgbMode )

OFF = 0;
ON  = 1;

RED_CHANNEL_IDX   = 1;
GREEN_CHANNEL_IDX = 2;
BLUE_CHANNEL_IDX  = 3;

RGB_TO_Y_MAT  = [0.2225045, 0.7168786, 0.0606169]; %<! Y row of the sRGB -> XYZ (D50 adapted) matrix
Y_CHANNEL_THR = 0.008856;

% sRGB Compensation (inverse companding: gamma encoded sRGB -> linear RGB)
if(sRgbMode == ON)
    vLinIdx = mRgbImage < 0.04045;

    mRgbImage( vLinIdx) = mRgbImage( vLinIdx) ./ 12.92;
    mRgbImage(~vLinIdx) = ((mRgbImage(~vLinIdx) + 0.055) ./ 1.055) .^ 2.4;
end

% RGB to XYZ (D50) - only the Y channel is needed for L
mY = (RGB_TO_Y_MAT(1) .* mRgbImage(:, :, RED_CHANNEL_IDX)) + (RGB_TO_Y_MAT(2) .* mRgbImage(:, :, GREEN_CHANNEL_IDX)) + (RGB_TO_Y_MAT(3) .* mRgbImage(:, :, BLUE_CHANNEL_IDX));

% Y to L, normalized from [0, 100] to [0, 1] for display
vYThrIdx = mY > Y_CHANNEL_THR;
mY3      = mY .^ (1 / 3);

mLChannel = ((vYThrIdx .* (116 * mY3 - 16.0)) + ((~vYThrIdx) .* (903.3 * mY))) ./ 100;

end
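As a cross-check (my addition, not part of the original question), the same D50-referenced conversion is available in scikit-image; a minimal sketch in Python, assuming the file name above:

# Hedged cross-check: scikit-image's rgb2lab supports a D50 illuminant,
# so it should match the formula-based ConvertRgbToL() up to rounding.
import numpy as np
from skimage import io, color

mImageRgb = io.imread('RgbColors.png')[:, :, :3] / 255.0  # drop alpha if present, float in [0, 1]
mLab = color.rgb2lab(mImageRgb, illuminant='D50')         # L in [0, 100]
mL = mLab[:, :, 0] / 100.0                                # normalize like the MATLAB code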
As can be seen, the results are different.
Photoshop's is much darker for most of the colors.
Does anyone know how to replicate Photoshop's LAB conversion?
Can anyone spot a problem in this code?
Thank you.
Answer 0 (score: 1)
LATEST ANSWER (we now know it is wrong; waiting for a proper answer)

Photoshop is a very old and messy piece of software. There is no clear documentation as to why this or that happens to the pixel values when you perform conversions from one mode to another. Your problem arises because when you convert the selected L* channel to grayscale in Adobe Photoshop, the gamma changes. Natively, the conversion uses a gamma of 1.74 for the single-channel-to-grayscale conversion. Don't ask me why; I would guess it has something to do with old laser printers (?).

Anyway, here is the best way I have found to do it:
Open the file, turn it to LAB mode, and select the L channel only.
Then go to:
Edit > Convert to Profile
You will choose "custom gamma" and enter the value 2.0 (don't ask me why 2.0 works better, I have no idea what is on the minds of Adobe's software makers...). This operation will turn your picture into a grayscale one with only one channel.
Then you can convert it to RGB mode.
If you compare that result with yours, you will see differences of up to 4-point-something percent, all of them located in the darkest areas.
I suspect this happens because the gamma curve is not applied to the dark values in LAB mode (as you know, all XYZ values below 0.008856 are mapped linearly in LAB).
Conclusion:
As far as I know, there is no properly implemented way in Adobe Photoshop to extract the L channel from LAB mode into gray mode!
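If the gamma hypothesis above holds, it can be tested numerically against the saved Photoshop channel. A minimal sketch (my own illustration; the value 1.74, the 2.0 workaround, and even the direction of the exponent are guesses, not documented by Adobe):

import numpy as np

def apply_guessed_gamma(l_norm, gamma=1.74):
    # l_norm: the formula-based L channel, normalized to [0, 1].
    # Raising to a power > 1 darkens the image, matching the observation
    # that Photoshop's on-screen L channel is darker than the formula result.
    return np.clip(l_norm, 0.0, 1.0) ** gamma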
PREVIOUS ANSWER
Here is the result I get with my own method:
It seems to be exactly the same as Adobe Photoshop's result.
I am not sure what went wrong on your side, since the steps you describe are exactly the same ones that I followed and that I would advise you to follow. I do not have MATLAB, so I used Python:
import cv2, Syn

# your file
fn = "EASA2.png"
# reading the file
im = cv2.imread(fn, -1)
# OpenCV works in BGR; switching to RGB
im = im[:, :, ::-1]
# conversion to XYZ (D65)
XYZ = Syn.sRGB2XYZ(im)
# white points D65 and D50
WP_D65 = Syn.Yxy2XYZ((100, 0.31271, 0.32902))
WP_D50 = Syn.Yxy2XYZ((100, 0.34567, 0.35850))
# Bradford chromatic adaptation D65 -> D50
XYZ2 = Syn.bradford_adaptation(XYZ, WP_D65, WP_D50)
# conversion to L*a*b*
LAB = Syn.XYZ2Lab(XYZ2, WP_D50)
# picking the L channel only, rescaled from [0, 100] to [0, 255]
L = LAB[:, :, 0] / 100. * 255.
# image output
cv2.imwrite("result.png", L)
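Note that rescaling L from [0, 100] to [0, 255] and writing an 8-bit PNG quantizes the channel, so small differences against a pure float computation are to be expected.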
The Syn library is my own stuff; here are the functions (sorry for the mess):
import numpy as np

def sRGB2XYZ(sRGB):
    """8 bit sRGB (0-255) to XYZ (D65, Y scaled to 0-100)."""
    sRGB = np.array(sRGB)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = sRGB.shape
    if sRGB.shape == aShape:
        sRGB = np.reshape(sRGB, (1,1,3))
    elif len(sRGB.shape) == len(anotherShape):
        h,d = sRGB.shape
        sRGB = np.reshape(sRGB, (1,h,d))
    w,h,d = sRGB.shape
    sRGB = np.reshape(sRGB, (w*h,d)).astype("float") / 255.

    # inverse sRGB companding, channel by channel
    m1 = sRGB[:,0] > 0.04045
    m1b = sRGB[:,0] <= 0.04045
    m2 = sRGB[:,1] > 0.04045
    m2b = sRGB[:,1] <= 0.04045
    m3 = sRGB[:,2] > 0.04045
    m3b = sRGB[:,2] <= 0.04045

    sRGB[:,0][m1] = ((sRGB[:,0][m1] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,0][m1b] = sRGB[:,0][m1b] / 12.92
    sRGB[:,1][m2] = ((sRGB[:,1][m2] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,1][m2b] = sRGB[:,1][m2b] / 12.92
    sRGB[:,2][m3] = ((sRGB[:,2][m3] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,2][m3b] = sRGB[:,2][m3b] / 12.92

    sRGB *= 100.

    # linear RGB -> XYZ, sRGB D65 matrix
    X = sRGB[:,0] * 0.4124 + sRGB[:,1] * 0.3576 + sRGB[:,2] * 0.1805
    Y = sRGB[:,0] * 0.2126 + sRGB[:,1] * 0.7152 + sRGB[:,2] * 0.0722
    Z = sRGB[:,0] * 0.0193 + sRGB[:,1] * 0.1192 + sRGB[:,2] * 0.9505

    XYZ = np.zeros_like(sRGB)
    XYZ[:,0] = X
    XYZ[:,1] = Y
    XYZ[:,2] = Z
    XYZ = np.reshape(XYZ, origShape)
    return XYZ
def Yxy2XYZ(Yxy):
    Yxy = np.array(Yxy)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = Yxy.shape
    if Yxy.shape == aShape:
        Yxy = np.reshape(Yxy, (1,1,3))
    elif len(Yxy.shape) == len(anotherShape):
        h,d = Yxy.shape
        Yxy = np.reshape(Yxy, (1,h,d))
    w,h,d = Yxy.shape
    Yxy = np.reshape(Yxy, (w*h,d)).astype("float")

    XYZ = np.zeros_like(Yxy)
    XYZ[:,0] = Yxy[:,1] * ( Yxy[:,0] / Yxy[:,2] )
    XYZ[:,1] = Yxy[:,0]
    XYZ[:,2] = ( 1 - Yxy[:,1] - Yxy[:,2] ) * ( Yxy[:,0] / Yxy[:,2] )
    return np.reshape(XYZ, origShape)
def bradford_adaptation(XYZ, Neutral_source, Neutral_destination):
    """should be checked if it works properly, but it seems OK"""
    XYZ = np.array(XYZ)
    ashape = np.array([1,1,1]).shape
    siVal = False
    if XYZ.shape == ashape:
        XYZ = np.reshape(XYZ, (1,1,3))
        siVal = True

    bradford = np.array(((0.8951000, 0.2664000, -0.1614000),
                         (-0.750200, 1.7135000,  0.0367000),
                         (0.0389000, -0.068500,  1.0296000)))

    inv_bradford = np.array(((0.9869929, -0.1470543, 0.1599627),
                             (0.4323053,  0.5183603, 0.0492912),
                             (-.0085287,  0.0400428, 0.9684867)))

    Xs,Ys,Zs = Neutral_source
    s = np.array((Xs, Ys, Zs))
    Xd,Yd,Zd = Neutral_destination
    d = np.array((Xd, Yd, Zd))

    # white points expressed in the Bradford cone response domain
    source = np.dot(bradford, s)
    Us,Vs,Ws = source[0], source[1], source[2]
    destination = np.dot(bradford, d)
    Ud,Vd,Wd = destination[0], destination[1], destination[2]

    # von Kries scaling between the two whites
    transformation = np.array(((Ud/Us, 0,     0),
                               (0,     Vd/Vs, 0),
                               (0,     0,     Wd/Ws)))

    M = np.mat(inv_bradford)*np.mat(transformation)*np.mat(bradford)

    w,h,d = XYZ.shape
    result = np.dot(M, np.rot90(np.reshape(XYZ, (w*h,d)), -1))
    result = np.rot90(result, 1)
    result = np.reshape(np.array(result), (w,h,d))

    if siVal == False:
        return result
    else:
        return result[0,0]
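A quick way to sanity-check this function (my addition, not part of the original answer): by construction, adapting the source white itself must land on the destination white, up to the rounding in the inverse matrix.

# Sanity check: the adapted D65 white should map to the D50 white.
WP_D65 = Yxy2XYZ((100, 0.31271, 0.32902))
WP_D50 = Yxy2XYZ((100, 0.34567, 0.35850))
print(bradford_adaptation(WP_D65, WP_D65, WP_D50))  # expect approx. [96.42, 100.0, 82.52]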
def XYZ2Lab(XYZ, neutral):
    """transforms XYZ to CIE Lab
    Neutral should be normalized to Y = 100"""
    XYZ = np.array(XYZ)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = XYZ.shape
    if XYZ.shape == aShape:
        XYZ = np.reshape(XYZ, (1,1,3))
    elif len(XYZ.shape) == len(anotherShape):
        h,d = XYZ.shape
        XYZ = np.reshape(XYZ, (1,h,d))

    N_x, N_y, N_z = neutral
    w,h,d = XYZ.shape
    XYZ = np.reshape(XYZ, (w*h,d)).astype("float")

    # normalize to the white point
    XYZ[:,0] = XYZ[:,0]/N_x
    XYZ[:,1] = XYZ[:,1]/N_y
    XYZ[:,2] = XYZ[:,2]/N_z

    # cube root above the CIE threshold, linear below it
    m1 = XYZ[:,0] > 0.008856
    m1b = XYZ[:,0] <= 0.008856
    m2 = XYZ[:,1] > 0.008856
    m2b = XYZ[:,1] <= 0.008856
    m3 = XYZ[:,2] > 0.008856
    m3b = XYZ[:,2] <= 0.008856

    XYZ[:,0][m1] = XYZ[:,0][m1] ** (1/3.0)
    XYZ[:,0][m1b] = ( 7.787 * XYZ[:,0][m1b] ) + ( 16 / 116.0 )
    XYZ[:,1][m2] = XYZ[:,1][m2] ** (1/3.0)
    XYZ[:,1][m2b] = ( 7.787 * XYZ[:,1][m2b] ) + ( 16 / 116.0 )
    XYZ[:,2][m3] = XYZ[:,2][m3] ** (1/3.0)
    XYZ[:,2][m3b] = ( 7.787 * XYZ[:,2][m3b] ) + ( 16 / 116.0 )

    Lab = np.zeros_like(XYZ)
    Lab[:,0] = (116. * XYZ[:,1] ) - 16.
    Lab[:,1] = 500. * ( XYZ[:,0] - XYZ[:,1] )
    Lab[:,2] = 200. * ( XYZ[:,1] - XYZ[:,2] )
    return np.reshape(Lab, origShape)
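To tie the pieces together, here is a single-pixel example (my addition; the printed values are approximate):

# Full pipeline for one sRGB pixel (pure red), giving D50-referenced Lab.
WP_D65 = Yxy2XYZ((100, 0.31271, 0.32902))
WP_D50 = Yxy2XYZ((100, 0.34567, 0.35850))
xyz = bradford_adaptation(sRGB2XYZ((255, 0, 0)), WP_D65, WP_D50)
print(XYZ2Lab(xyz, WP_D50))  # roughly [54.3, 80.8, 69.9]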
Answer 1 (score: 1)
All conversions between color spaces in Photoshop go through the CMM, which was fast enough on circa-2000 hardware but is not very accurate. If you check a round trip - RGB -> Lab -> RGB - with the Adobe CMM, you will find many 4-level errors and some 7-level errors. This can cause posterization. I always base my conversions on formulas, not on CMMs. That said, the average delta E error with the Adobe CMM and the Argyll CMM is acceptable.
The Lab conversion is very similar to RGB, except that a nonlinearity ("gamma") is applied as the very first step. Something like this (see the sketch after the list):
Normalize XYZ to the white point.
Raise the result to a "gamma" of 3, i.e. take the cube root (keeping the shadow part linear, depending on the implementation).
Multiply the result, with a constant 1 appended for the -16 offset, by [0 116 0 -16; 500 -500 0 0; 0 200 -200 0]'.
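A minimal sketch of these three steps in NumPy (my own illustration of the description above; the threshold constants are the standard CIE ones, and the matrix is applied untransposed to a column vector with the constant 1 appended):

import numpy as np

def xyz_to_lab_matrix(xyz, white):
    # 1. normalize XYZ to the white point
    t = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)
    # 2. "gamma 3" (cube root), keeping the shadow part linear
    eps, kappa = 0.008856, 903.3
    f = np.where(t > eps, np.cbrt(t), (kappa * t + 16.0) / 116.0)
    # 3. multiply by the matrix; the appended 1 carries the -16 offset
    M = np.array([[  0,  116,    0, -16],
                  [500, -500,    0,   0],
                  [  0,  200, -200,   0]], dtype=float)
    return M @ np.append(f, 1.0)  # -> [L, a, b]

# The white point itself maps to L = 100, a = b = 0:
# print(xyz_to_lab_matrix([96.42, 100.0, 82.52], [96.42, 100.0, 82.52]))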