Maximize the output of a neural network by adjusting a subset of the input vector

Time: 2019-10-22 16:58:04

Tags: keras deep-learning backpropagation

I have a process that, given a vector X and a randomly generated vector Y, produces a real-valued output (in fact always a non-negative scalar). The problem is that this mapping X × Y → R+ is unknown. My approach is therefore to first learn this unknown function with a DNN, and then use some nonlinear optimization (steepest ascent, etc.) to maximize the output. However, I don't quite understand how to do this. Could you tell me the best way to go about it? Thanks!
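To make the two-step plan concrete, the first step amounts to ordinary supervised regression on observed samples of the process. Below is a minimal sketch using hypothetical placeholder data and a deliberately small architecture (the actual network used here is shown further below); the softplus output is just one way to respect the non-negativity of the target:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical stand-ins for real observations of the process:
# each row stacks a flattened X with its sampled Y; z is the scalar output.
XY_train = np.random.randn(1000, 124)    # e.g. 2x60 X plus 2x2 Y, flattened
z_train = np.abs(np.random.randn(1000))  # non-negative scalar targets

surrogate = Sequential([
    Dense(64, activation='relu', input_shape=(XY_train.shape[1],)),
    Dense(64, activation='relu'),
    Dense(1, activation='softplus'),     # softplus keeps predictions >= 0
])
surrogate.compile(optimizer='adam', loss='mse')
surrogate.fit(XY_train, z_train, epochs=100, batch_size=32, validation_split=0.1)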

Here is my network architecture.

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_5 (InputLayer)            (None, 2, 62)        0                                            
__________________________________________________________________________________________________
lambda_15 (Lambda)              (None, 60)           0           input_5[0][0]                    
__________________________________________________________________________________________________
lambda_16 (Lambda)              (None, 60)           0           input_5[0][0]                    
__________________________________________________________________________________________________
dense_14 (Dense)                (None, 60)           3660        lambda_15[0][0]                  
                                                                 lambda_16[0][0]                  
__________________________________________________________________________________________________
concatenate_4 (Concatenate)     (None, 120)          0           dense_14[0][0]                   
                                                                 dense_14[1][0]                   
__________________________________________________________________________________________________
dropout_5 (Dropout)             (None, 120)          0           concatenate_4[0][0]              
__________________________________________________________________________________________________
lambda_13 (Lambda)              (None, 2)            0           input_5[0][0]                    
__________________________________________________________________________________________________
lambda_14 (Lambda)              (None, 2)            0           input_5[0][0]                    
__________________________________________________________________________________________________
dense_17 (Dense)                (None, 60)           7260        dropout_5[0][0]                  
__________________________________________________________________________________________________
dense_15 (Dense)                (None, 10)           30          lambda_13[0][0]                  
__________________________________________________________________________________________________
dense_16 (Dense)                (None, 10)           30          lambda_14[0][0]                  
__________________________________________________________________________________________________
concatenate_5 (Concatenate)     (None, 80)           0           dense_17[0][0]                   
                                                                 dense_15[0][0]                   
                                                                 dense_16[0][0]                   
__________________________________________________________________________________________________
dropout_7 (Dropout)             (None, 80)           0           concatenate_5[0][0]              
__________________________________________________________________________________________________
dense_18 (Dense)                (None, 1)            81          dropout_7[0][0]                  
==================================================================================================
Total params: 11,061
Trainable params: 11,061
Non-trainable params: 0

Basically, the input for each instance is a 2 x 62 array. The first 2 x 60 block is fixed (the X above). The last two columns are the two random vectors Y1 and Y2, and the Lambda layers in the model extract these individual components. Now I would like to know how to find the Y1 and Y2 that maximize the output for a particular instance, given X.
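A minimal sketch of what the second step might look like, assuming the TF1-style Keras graph backend (where K.gradients is available): model is the trained network summarized above, X_fixed is a hypothetical name for the fixed 2x60 block of the instance of interest, and the step size and iteration count are arbitrary. The idea is to freeze the trained weights, differentiate the model output with respect to its input, and ascend only along the last two columns:

import numpy as np
from keras import backend as K

# Symbolic gradient of the scalar output w.r.t. the full 2x62 input.
grads = K.gradients(model.output, model.input)[0]
iterate = K.function([model.input, K.learning_phase()],
                     [model.output, grads])

# Build the full input: fixed X block plus a random initial guess for Y1, Y2.
inp = np.zeros((1, 2, 62))
inp[0, :, :60] = X_fixed                 # hypothetical (2, 60) array
inp[0, :, 60:] = np.random.randn(2, 2)   # initial Y1, Y2

step_size = 0.01
for _ in range(200):
    out, g = iterate([inp, 0])           # learning phase 0: dropout disabled
    # Gradient ascent on the last two columns only; X stays fixed.
    inp[0, :, 60:] += step_size * g[0, :, 60:]

y1_opt = inp[0, :, 60]
y2_opt = inp[0, :, 61]

Since the learned surrogate is non-convex, restarting this loop from several random initializations of Y1 and Y2 and keeping the best result is usually worthwhile.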

0 Answers:

No answers yet.