How can I implement these two Keras models in PyTorch (inspired by the Datacamp course 'Advanced Deep Learning with Keras in Python')?
A classifier with 1 input and 2 output classes:
from keras.layers import Input, Dense
from keras.models import Model

input_tensor = Input(shape=(1,))
# softmax so the outputs are class probabilities, as categorical_crossentropy expects
output_tensor = Dense(2, activation='softmax')(input_tensor)
model = Model(input_tensor, output_tensor)
model.compile(optimizer='adam', loss='categorical_crossentropy')
X = ... # e.g. a pandas series
y = ... # e.g. a pandas df with 2 columns
model.fit(X, y, epochs=100)
A combined regression-and-classification model:
from keras.layers import Input, Dense
from keras.models import Model

input_tensor = Input(shape=(1,))
output_tensor_reg = Dense(1)(input_tensor)
output_tensor_class = Dense(1, activation='sigmoid')(output_tensor_reg)
model = Model(input_tensor, [output_tensor_reg, output_tensor_class])
model.compile(optimizer='adam', loss=['mean_absolute_error', 'binary_crossentropy'])
X = ...
y_reg = ...
y_class = ...
model.fit(X, [y_reg, y_class], epochs=100)
Answer (score: 1)
This resource is particularly useful.
Fundamentally, unlike in Keras, you have to state explicitly in the forward function where each output is computed, and then how the global loss is computed from them.
For example, for the first model:
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, ...):
        super().__init__()
        ... # define your model elements

    def forward(self, x):
        # Do your stuff here
        ...
        x1 = torch.sigmoid(x)  # class probabilities
        x2 = torch.sigmoid(x)  # bounding box calculation
        return x1, x2
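Filled in, the skeleton above might look like the following. The layer sizes, layer names, and the choice of a single shared linear backbone are all assumptions for illustration, not part of the original answer:

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """A shared backbone feeding two output heads (sizes are illustrative)."""

    def __init__(self, hidden=8):
        super().__init__()
        self.backbone = nn.Linear(1, hidden)
        self.head1 = nn.Linear(hidden, 2)  # e.g. class probabilities
        self.head2 = nn.Linear(hidden, 4)  # e.g. bounding box coordinates

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        x1 = torch.sigmoid(self.head1(h))  # class probabilities
        x2 = torch.sigmoid(self.head2(h))  # bounding box, squashed to [0, 1]
        return x1, x2

model = TwoHeadNet()
out1, out2 = model(torch.randn(5, 1))
print(out1.shape, out2.shape)  # torch.Size([5, 2]) torch.Size([5, 4])
```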
Then you compute the losses:
out1, out2 = model(data)
loss1 = criterion1(out1, target1)
loss2 = criterion2(out2, target2)
alpha = ... # weight of each sub-loss in the global loss
loss = alpha * loss1 + (1 - alpha) * loss2
loss.backward()
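A complete training step built around this loss computation could look like the sketch below. The model layers, the two criteria (cross-entropy and L1/MAE), alpha, and the optimizer are all assumptions chosen for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy two-output model: shared linear layer plus two heads (illustrative)
shared = nn.Linear(1, 4)
head1 = nn.Linear(4, 2)   # classification logits
head2 = nn.Linear(4, 1)   # regression output
params = list(shared.parameters()) + list(head1.parameters()) + list(head2.parameters())
optimizer = torch.optim.Adam(params, lr=0.01)

criterion1 = nn.CrossEntropyLoss()  # classification head (expects logits)
criterion2 = nn.L1Loss()            # mean absolute error for the regression head

data = torch.randn(8, 1)
target1 = torch.randint(0, 2, (8,))  # class labels
target2 = torch.randn(8, 1)          # regression targets

h = torch.relu(shared(data))
out1, out2 = head1(h), head2(h)
loss1 = criterion1(out1, target1)
loss2 = criterion2(out2, target2)

alpha = 0.7  # weight of the classification loss in the global loss
loss = alpha * loss1 + (1 - alpha) * loss2

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```

This mirrors what Keras does internally when you pass a list of losses (optionally with `loss_weights`) to `model.compile`.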
For the second model it is almost the same, except that the two outputs come from different points of the forward pass:
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, ...):
        super().__init__()
        ... # define your model elements

    def forward(self, main_input):
        aux = torch.relu(self.dense_1(main_input))  # auxiliary / regression output
        x = torch.relu(self.input_layer(aux))
        x = torch.sigmoid(x)  # class probability
        return x, aux
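A runnable version of this combined model, mirroring the second Keras snippet (one linear layer as the regression output, a second linear layer plus sigmoid on top of it as the classifier; the layer names are assumptions):

```python
import torch
import torch.nn as nn

class RegClassNet(nn.Module):
    """Regression head whose output also feeds a classification head."""

    def __init__(self):
        super().__init__()
        self.reg_layer = nn.Linear(1, 1)    # regression output (like Dense(1))
        self.class_layer = nn.Linear(1, 1)  # classifier on top of the regression

    def forward(self, x):
        reg = self.reg_layer(x)                      # regression output
        cls = torch.sigmoid(self.class_layer(reg))   # class probability
        return cls, reg

model = RegClassNet()
x = torch.randn(6, 1)
cls, reg = model(x)
# train with e.g.:
# loss = alpha * nn.L1Loss()(reg, y_reg) + (1 - alpha) * nn.BCELoss()(cls, y_class)
```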