I am trying to use a convolutional neural network to compute the radius of a circle in an image. I only have the image as input and the radius on the output side, so the mapping is [image] -> [radius of circle].
The input dimensions and the network architecture are as follows:
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras import Model

img_input = layers.Input(shape=(imgsize, imgsize, 1))
x = layers.Conv2D(16, (3, 3), activation='relu', strides=1, padding='same')(img_input)
x = layers.Conv2D(32, (3, 3), activation='relu', strides=2)(x)
x = layers.Conv2D(128, (3, 3), activation='relu', strides=2)(x)
x = layers.MaxPool2D(pool_size=2)(x)
# 1x1 convolution with a linear activation produces the regression output
x = layers.Conv2D(circle_per_box, 1, activation='linear', strides=2)(x)
output = layers.Flatten()(x)
model_CNN = Model(img_input, output)
model_CNN.summary()
model_CNN.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse'])
X_train, X_test, Y_train, Y_test = train_test_split(image, radii, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
(8000, 12, 12, 1) (2000, 12, 12, 1) (8000, 1) (2000, 1)
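As a sanity check that the Flatten output width matches the label width, the spatial size after each layer can be traced by hand. This is a sketch under the assumption that imgsize = 12 and that the strided convolutions use the default 'valid' padding:

```python
def conv_out(size, kernel, stride, padding='valid'):
    """Spatial output size of a square Conv2D/MaxPool2D layer."""
    if padding == 'same':
        return -(-size // stride)          # ceil(size / stride)
    return (size - kernel) // stride + 1   # 'valid' padding

imgsize, circle_per_box = 12, 1            # assumed values matching the shapes above

s = conv_out(imgsize, 3, 1, 'same')        # Conv2D(16)  -> 12
s = conv_out(s, 3, 2)                      # Conv2D(32)  -> 5
s = conv_out(s, 3, 2)                      # Conv2D(128) -> 2
s = conv_out(s, 2, 2)                      # MaxPool2D   -> 1
s = conv_out(s, 1, 2)                      # Conv2D(circle_per_box) -> 1

flatten_dim = s * s * circle_per_box
print(flatten_dim)                         # one regression output per image
```

So for one circle per image the flattened output is a single value, matching Y_train.shape[1] below.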
Y_train
array([[1.01003947],
[1.32057104],
[0.34507285],
...,
[1.53130402],
[0.69527609],
[1.85973669]])
If I compute one circle per image, I get reliable results:
With more circles per image (see image), however, the same network breaks down and I get the following results.
For two circles per image, the shape of Y_train is:
Y_train.shape
(10000, 2)
Y_train
array([[1.81214007, 0.68388911],
[1.47920612, 1.04222943],
[1.90827465, 1.43238623],
...,
[1.40865229, 1.65726638],
[0.52878558, 1.94234548],
[1.57923437, 1.19544775]])
Why does the network behave this way? As mentioned above, if I try to compute the radii of the two generated circles separately, I again get good results, but not when both circles are present in the same image at the same time.
Does anyone have any ideas/suggestions?
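One thing worth checking (an assumption on my part, not something established above): with two circles per image, plain MSE against an arbitrarily ordered target pair penalizes the network even when it predicts both radii correctly but in swapped order. A minimal sketch of that ambiguity, using hypothetical values:

```python
import numpy as np

# Hypothetical case: the prediction matches the targets, but in swapped order
y_true = np.array([[1.8, 0.7]])
y_pred = np.array([[0.7, 1.8]])

# Plain MSE is large even though both radii are correct
mse_raw = np.mean((y_true - y_pred) ** 2)

# Sorting each row imposes a canonical order, removing the ambiguity
mse_sorted = np.mean((np.sort(y_true, axis=1) - np.sort(y_pred, axis=1)) ** 2)

print(mse_raw, mse_sorted)   # large vs. 0.0
```

If this is the cause, sorting the label rows (or using an order-invariant loss) before training would be one way to test the hypothesis.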