The goal is to use custom weights in the first layer of a model so that it fully takes over the job of a high-pass filter, i.e. the first layer should behave exactly like a high-pass filter applied to the input image.
1. The straightforward solution would be to apply the high-pass filter as an image-processing step, generate a new (filtered) image, and feed that into the model. --- But this requires an extra image-processing pass, which costs time.
2. Instead, I want to set up a Conv2D layer that does the high-pass filtering itself, by supplying the custom filter as the kernel initializer. The reasoning is that the filter and Conv2D both follow the convolution rule,
but the result differs from the first solution.
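A side note on point 2: scipy.ndimage.convolve flips the kernel (a true convolution), while a Conv2D layer computes a cross-correlation (no flip). For the symmetric kernel used here the two operations coincide, which a quick check confirms (a minimal sketch using only the kernel from the snippets below):

import numpy as np

kernel55 = np.array([[-1, 2, -2, 2, -1],
                     [2, -6, 8, -6, 2],
                     [-2, 8, -12, 8, -2],
                     [2, -6, 8, -6, 2],
                     [-1, 2, -2, 2, -1]]) / 12

# the kernel equals its own 180-degree rotation, so flipping it changes nothing
print(np.array_equal(kernel55, np.flip(kernel55)))  # True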
# The image processing code:
import cv2
import numpy as np
import scipy.ndimage as ndimage
from keras.models import Sequential
from keras.layers import Conv2D

# 5x5 high-pass kernel, normalized by 12
kernel55 = np.array([[-1,  2,  -2,  2, -1],
                     [ 2, -6,   8, -6,  2],
                     [-2,  8, -12,  8, -2],
                     [ 2, -6,   8, -6,  2],
                     [-1,  2,  -2,  2, -1]]) / 12

# load the grayscale image unchanged and convolve it with the high-pass kernel
image = cv2.imread('1.pgm', -1)
image = ndimage.convolve(image, kernel55)
print(image)

# The first layer of the model:
def kernel_init(shape):
    # write the same 5x5 kernel into the (5, 5, 1, 1) Conv2D weight tensor
    kernel = np.zeros(shape)
    kernel[:, :, 0, 0] = np.array([[-1,  2,  -2,  2, -1],
                                   [ 2, -6,   8, -6,  2],
                                   [-2,  8, -12,  8, -2],
                                   [ 2, -6,   8, -6,  2],
                                   [-1,  2,  -2,  2, -1]]) / 12
    return kernel

# Build Keras model
model = Sequential()
model.add(Conv2D(1, [5, 5], kernel_initializer=kernel_init,
                 input_shape=(256, 256, 1), padding="same", activation='relu'))
model.build()

test_im = cv2.imread('1.pgm', -1)                                   # define a test image
test_im = np.expand_dims(np.expand_dims(np.array(test_im), 2), 0)   # shape (1, 256, 256, 1)
out = model.predict(test_im)
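As a sanity check (a minimal sketch, assuming the model and kernel55 defined above are in scope), the initializer can be verified by reading the layer weights back with get_weights():

w, b = model.layers[0].get_weights()          # kernel has shape (5, 5, 1, 1), bias has shape (1,)
print(np.allclose(w[:, :, 0, 0], kernel55))   # True if the custom kernel was installed
print(b)                                      # bias defaults to zeros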
The problem is: the image-processing path produces a proper high-pass filtered image, but the result from the Conv2D layer is not the same.
I assumed the two results would be identical, or at least similar, but they are not...
Why? Is something wrong with my reasoning?
Answer 0 (score: 1)
Apologies for an incomplete answer, but I have a partial solution and some explanation. Here is the code:
import cv2
import numpy as np
import scipy.ndimage as ndimage
from keras.models import Sequential
from keras.layers import Dense, Activation, Conv2D

# The first layer of the model: initializer that writes the 5x5 kernel into the Conv2D weights.
def kernel_init(shape):
    kernel = np.zeros(shape)
    kernel[:, :, 0, 0] = np.array([[-1,  2,  -2,  2, -1],
                                   [ 2, -6,   8, -6,  2],
                                   [-2,  8, -12,  8, -2],
                                   [ 2, -6,   8, -6,  2],
                                   [-1,  2,  -2,  2, -1]])
    #kernel = kernel/12
    #print("Here is the kernel")
    #print(kernel)
    #print("That was the kernel")
    return kernel

def main():
    print("starting")

    # The image processing code: the same 5x5 high-pass kernel, applied with ndimage.
    kernel55 = np.array([[-1,  2,  -2,  2, -1],
                         [ 2, -6,   8, -6,  2],
                         [-2,  8, -12,  8, -2],
                         [ 2, -6,   8, -6,  2],
                         [-1,  2,  -2,  2, -1]])

    # load the image, convert it to grayscale and resize it to the model's input size
    image = cv2.imread('tiger.bmp', -1)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    myimage = cv2.resize(gray, (256, 256))
    print("The image")
    #print(myimage)
    print("That was the image")
    segment = myimage[0:10, 0:10]
    print(segment)

    # ndimage.convolve keeps the uint8 dtype of myimage, so out-of-range values wrap modulo 256
    imgOut = ndimage.convolve(myimage, kernel55)
    #imgOut = imgOut/12
    print(imgOut.shape)
    cv2.imwrite('zzconv.png', imgOut)
    #print(imgOut)
    segment = imgOut[0:10, 0:10]
    print(segment)

    # Build Keras model with the custom-initialized Conv2D layer (note: no activation here)
    print("And the Keras stuff")
    model = Sequential()
    model.add(Conv2D(1, [5, 5], kernel_initializer=kernel_init, input_shape=(256, 256, 1), padding="same"))
    model.build()

    test_im = myimage
    test_im = test_im.reshape((1, 256, 256, 1))
    print(test_im.shape)

    imgOut2 = model.predict(test_im)
    imgOut2 = imgOut2.reshape(256, 256)
    print(imgOut2.shape)
    #imgOut2 = imgOut2 / 12
    imgOut2[imgOut2 < 0] += 256    # wrap negative responses into the 0-255 range, as uint8 storage would
    cv2.imwrite('zzconv2.png', imgOut2)
    #print(imgOut2)
    segment = imgOut2[0:10, 0:10]
    print(segment)

if __name__ == "__main__":
    main()
Here are the things to note: