Neural network with non-standard inputs

Time: 2017-09-25 19:26:54

Tags: keras

I want to build a neural network that takes image + image + value as input, applies convolution + pooling to the images, and then a linear transformation to the results. Can I do that in Keras?

2 answers:

Answer 0: (score: 1)

Assuming your images are RGB, each image's shape is (width, height, 3), and you can combine the two images with np.concatenate() like:

import numpy as np
from PIL import Image

img1 = Image.open('image1.jpg')
img2 = Image.open('image2.jpg')

# width and height are the common size both images are resized to
img1 = img1.resize((width, height))
img2 = img2.resize((width, height))

img1_arr = np.asarray(img1, dtype='int32')
img2_arr = np.asarray(img2, dtype='int32')

# shape of img_arr is (width, height, 6)
img_arr = np.concatenate((img1_arr, img2_arr), axis=2)

Combining the two images this way only increases the number of channels, so we can still convolve over the first two axes.
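As a minimal sketch of feeding the 6-channel array into a convolutional network (the layer sizes and variable names here are illustrative assumptions, not part of the original answer):

from keras.layers import Input, Conv2D, MaxPooling2D
from keras.models import Model

# the concatenated array has 6 channels, so the model input shape is (width, height, 6)
six_channel_input = Input(shape=(width, height, 6))
x = Conv2D(32, (3, 3), activation='relu')(six_channel_input)
x = MaxPooling2D(pool_size=(2, 2))(x)
model = Model(inputs=six_channel_input, outputs=x)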

Update: I think you mean a multi-task model, where you want to merge the two images after convolution; Keras's concatenate() can do this. For example:

from keras.applications.vgg16 import VGG16
from keras.layers import Input, Flatten, Dense, concatenate
from keras.models import Model

# channels, img_width, img_height and classes are set elsewhere;
# each image needs its own input tensor
input_tensor1 = Input(shape=(channels, img_width, img_height))
input_tensor2 = Input(shape=(channels, img_width, img_height))

# Task1 on image1
conv_model1 = VGG16(input_tensor=input_tensor1, weights=None, include_top=False,
                    classes=classes, input_shape=(channels, img_width, img_height))
flatten1 = Flatten()(conv_model1.output)

# Task2 on image2
conv_model2 = VGG16(input_tensor=input_tensor2, weights=None, include_top=False,
                    classes=classes, input_shape=(channels, img_width, img_height))
flatten2 = Flatten()(conv_model2.output)

# Merge the flattened outputs
merged = concatenate([flatten1, flatten2], axis=1)
# add some Dense layers and Dropout here if needed
merged = Dense(classes, activation='softmax')(merged)

final_model = Model(inputs=[input_tensor1, input_tensor2], outputs=merged)
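A hedged usage sketch for training this two-input model, assuming img1_data and img2_data are arrays of shape (samples, channels, img_width, img_height) and labels is one-hot encoded (these variable names are placeholders, not from the answer):

final_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit([img1_data, img2_data], labels, epochs=10, batch_size=32)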


Answer 1: (score: 1)

This is structurally similar to Craig Li's answer, but in (image, image, value) format, and it doesn't use VGG16, just a vanilla CNN. These are 3 separate networks whose outputs are concatenated after being processed individually; the resulting concatenated vector is then passed through the final layers, combining information from all inputs.

from keras.layers import (Input, Conv2D, MaxPooling2D, Flatten, Dense,
                          Dropout, LeakyReLU, concatenate)
from keras.models import Model

# data_1, data_2, value_data are the training arrays;
# filters and kernel_size are hyperparameters set elsewhere

input_1 = Input(data_1.shape[1:], name = 'input_1')
conv_branch_1 = Conv2D(filters, (kernel_size, kernel_size),
                 activation = LeakyReLU())(input_1)
conv_branch_1 = MaxPooling2D(pool_size = (2,2))(conv_branch_1)
conv_branch_1 = Flatten()(conv_branch_1)

input_2 = Input(data_2.shape[1:], name = 'input_2')
conv_branch_2 = Conv2D(filters, (kernel_size, kernel_size),
                 activation = LeakyReLU())(input_2)
conv_branch_2 = MaxPooling2D(pool_size = (2,2))(conv_branch_2)
conv_branch_2 = Flatten()(conv_branch_2)

value_input = Input(value_data.shape[1:], name = 'value_input')
fc_branch = Dense(80, activation=LeakyReLU())(value_input)

merged_branches = concatenate([conv_branch_1, conv_branch_2, fc_branch])
merged_branches = Dense(60, activation=LeakyReLU())(merged_branches)
merged_branches = Dropout(0.25)(merged_branches)
merged_branches = Dense(30, activation=LeakyReLU())(merged_branches)

merged_branches = Dense(1, activation='sigmoid')(merged_branches)

model = Model(inputs=[input_1, input_2, value_input], outputs=[merged_branches])

# if binary classification, do this; otherwise use whatever loss you need
model.compile(optimizer='adam', loss='binary_crossentropy')
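A hedged usage sketch: assuming data_1 and data_2 are image arrays of shape (samples, height, width, channels), value_data is a (samples, n_features) array, and labels is a binary 0/1 vector (all names are placeholders matching the code above):

model.fit([data_1, data_2, value_data], labels, epochs=10, batch_size=32)
predictions = model.predict([data_1, data_2, value_data])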