Let me explain better: in my model I want to apply certain layers to certain features, use other layers on the other features, and finally feed the outputs of both into a final layer.

In particular: I have a dataset with some one-hot encoded features, plus 50 other numeric columns representing 10 years (5 features each).

What I want to do is:

My question is: how do I route certain specific features into certain specific layers, and how do I direct the outputs of those layers (and of the other layers that analyze the other features) into a final layer that should gather all the information extracted by the previous ones?

I hope I was clear enough. Many thanks to anyone who answers.
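(Side note on the data layout: a minimal sketch, assuming the 50 numeric columns are ordered year by year, of reshaping them into the 3-D (samples, timesteps, features) array an LSTM branch expects; all names and sizes here are illustrative.)

import numpy as np

n_samples = 1000                                  # illustrative sample count
flat = np.random.uniform(0, 1, (n_samples, 50))   # 10 years x 5 features, flattened

# reshape to (samples, timesteps, features) for the LSTM branch
X_train_m = flat.reshape(n_samples, 10, 5)
print(X_train_m.shape)                            # (1000, 10, 5)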
Edit:

This is what I have done so far; please tell me if you can suggest any improvements.

PS: the input data is 100% in the correct shape.
from tensorflow import keras as k
from tensorflow.keras.regularizers import l1

# three inputs: one-hot categorical features, the 10-year numeric series, and the remaining features
cat_in = k.layers.Input(shape=(X_train_p3.shape[1],), name='cat_in')
num_in = k.layers.Input(shape=(X_train_m.shape[1], X_train_m.shape[2]), name='num_in')
ovr_in = k.layers.Input(shape=(X_train_f.shape[1],), name='ovr_in')

# branch a: dense stack for the one-hot categorical features
a = k.layers.Dense(128, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(cat_in)
a = k.layers.Dropout(.2)(a)
a = k.layers.Dense(64, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(a)
a = k.layers.Dropout(.2)(a)
a = k.layers.Dense(64, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(a)

# branch b: LSTM stack for the numeric time series
b = k.layers.LSTM(128, activation='tanh', return_sequences=True, kernel_regularizer=l1(.1),
                  bias_regularizer=l1(.1), kernel_initializer='glorot_uniform')(num_in)  # original --> needs changes?
b = k.layers.Dropout(.15)(b)
b = k.layers.LSTM(64, activation='tanh', return_sequences=True, kernel_regularizer=l1(.1),
                  bias_regularizer=l1(.1), kernel_initializer='he_uniform')(b)
b = k.layers.Dropout(.15)(b)
b = k.layers.LSTM(64, activation='tanh', kernel_regularizer=l1(.1),
                  bias_regularizer=l1(.1), kernel_initializer='he_uniform')(b)
b = k.layers.Dropout(.15)(b)
b = k.layers.Dense(64, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(b)
b = k.layers.Dropout(.15)(b)
b = k.layers.Dense(64, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(b)

# branch c: dense stack for the remaining features
c = k.layers.Dense(128, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(ovr_in)
c = k.layers.Dropout(.2)(c)
c = k.layers.Dense(64, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(c)
c = k.layers.Dropout(.2)(c)
c = k.layers.Dense(64, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(c)

# merge the three branches and regress to a single output
d = k.layers.concatenate([a, b, c])
d = k.layers.Dense(192, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(d)
d = k.layers.Dropout(.2)(d)
d = k.layers.Dense(64, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(d)
d = k.layers.Dropout(.2)(d)
d = k.layers.Dense(64, activation='relu', kernel_regularizer=l1(.1),
                   bias_regularizer=l1(.1), kernel_initializer='he_uniform')(d)
out = k.layers.Dense(1)(d)

model = k.models.Model(inputs=[cat_in, num_in, ovr_in], outputs=out)
model.compile(optimizer=k.optimizers.Adam(learning_rate=.001, beta_1=.999, beta_2=.9999),
              loss='mse',
              metrics=['mae'])

early_stop = k.callbacks.EarlyStopping(monitor='val_mae', patience=100, restore_best_weights=True)
lr_callback = k.callbacks.ReduceLROnPlateau(monitor='val_mae', factor=.1**.5, patience=20, min_lr=1e-9)
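For completeness, roughly how I call fit (y_train and the epochs/batch_size here are placeholders); the input arrays are passed in the same order as inputs=[cat_in, num_in, ovr_in]:

# hypothetical training call: one array per named input, in Model input order
history = model.fit([X_train_p3, X_train_m, X_train_f], y_train,
                    validation_split=.2,
                    epochs=1000, batch_size=32,
                    callbacks=[early_stop, lr_callback])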
PPS: increasing the number of neurons does not improve things.
Answer 0 (score: 1)
I'd like to give you a dummy example:
import numpy as np
from tensorflow.keras.layers import Input, Dense, LSTM, Concatenate
from tensorflow.keras.models import Model

n_sample = 100
cat_feat = 30
dense_feat = 50
lstm_timestep = 20

# dummy data: binary categorical features and a (timesteps, features) numeric series
X_cat = np.random.randint(0, 2, (n_sample, cat_feat))
X_dense = np.random.uniform(0, 1, (n_sample, lstm_timestep, dense_feat))
y = np.random.uniform(0, 1, n_sample)
print(X_cat.shape, X_dense.shape)

# one branch per feature group
inp_cat = Input((cat_feat,))
x_cat = Dense(64, activation='relu')(inp_cat)

inp_dense = Input((lstm_timestep, dense_feat))
x_dense = LSTM(32, activation='relu')(inp_dense)

# merge the branches into the final layers
concat = Concatenate()([x_cat, x_dense])
x = Dense(32, activation='relu')(concat)
out = Dense(1)(x)

model = Model([inp_cat, inp_dense], out)
model.compile('adam', 'mse')
model.fit([X_cat, X_dense], y, epochs=10)
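The same pattern extends to three inputs like yours; a hedged sketch, reusing the dummy branches above and adding a hypothetical third feature group ovr_feat:

# hypothetical third feature group, analogous to your ovr_in
ovr_feat = 10
X_ovr = np.random.uniform(0, 1, (n_sample, ovr_feat))

inp_ovr = Input((ovr_feat,))
x_ovr = Dense(16, activation='relu')(inp_ovr)

# concatenate all three branches, then the shared head
concat3 = Concatenate()([x_cat, x_dense, x_ovr])
x3 = Dense(32, activation='relu')(concat3)
out3 = Dense(1)(x3)

model3 = Model([inp_cat, inp_dense, inp_ovr], out3)
model3.compile('adam', 'mse')
model3.fit([X_cat, X_dense, X_ovr], y, epochs=10)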