Transfer learning with TensorFlow problem

Date: 2021-05-28 12:23:51

Tags: python tensorflow transfer-learning

I'm trying to solve a problem from a deep learning course. The code block I have to modify is below:

def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    """ Define a tf.keras model for binary classification out of the MobileNetV2 model
    Arguments:
        image_shape -- Image width and height
        data_augmentation -- data augmentation function
    Returns:
        tf.keras.model
    """
    
    
    input_shape = image_shape + (3,)
    
    # START CODE HERE


    base_model=tf.keras.applications.MobileNetV2(input_shape=input_shape, include_top=False, weights="imagenet")

    # Freeze the base model by making it non trainable
    base_model.trainable = None 

    # create the input layer (Same as the imageNetv2 input size)
    inputs = tf.keras.Input(shape=None) 
    
    # apply data augmentation to the inputs
    x = None
    
    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(None) 
    
    # set training to False to avoid keeping track of statistics in the batch norm layer
    x = base_model(None, training=None) 
    
    # Add the new Binary classification layers
    # use global avg pooling to summarize the info in each channel
    x = None()(x) 
    #include dropout with probability of 0.2 to avoid overfitting
    x = None(None)(x)
        
    # create a prediction layer with one neuron (as a classifier only needs one)
    prediction_layer = None
    
    # END CODE HERE
    
    outputs = prediction_layer(x) 
    model = tf.keras.Model(inputs, outputs)
    
    return model

IMG_SIZE = (160, 160)
def data_augmentation():
    data = tl.keras.Sequential()
    data.add(RandomFlip("horizontal")
    data.add(RandomRotation(0.2)
    return data
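
For completeness, the notebook's setup cell provides roughly these imports (paraphrased from memory, so the exact module paths may differ with your TensorFlow version):

import tensorflow as tf
import tensorflow.keras.layers as tfl  # alias used later in the notebook
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
# In TF >= 2.6 these layers are also available directly as tf.keras.layers.RandomFlip / RandomRotation
from tensorflow.keras.layers.experimental.preprocessing import RandomFlip, RandomRotation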

Following the instructions, I have tried three times starting from this template, with a lot of trial and error, and I don't know what I'm missing. I get as far as training the model and printing its summary, but the summary is not correct.

Please help, I'm going crazy trying to figure this out. I know it's very simple, but it's these simple problems that trip me up.

3 Answers:

Answer 0 (score: 0)

You probably need the following code to get your algorithm running.

input_shape = image_shape + (3,)

### START CODE HERE

base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                               include_top=False, # <== Important!!!!
                                               weights='imagenet') # From imageNet

# Freeze the base model by making it non trainable
base_model.trainable = False 

# create the input layer (Same as the imageNetv2 input size)
inputs = tf.keras.Input(shape=input_shape) 

# apply data augmentation to the inputs
x = data_augmentation(inputs)

# data preprocessing using the same weights the model was trained on
x = preprocess_input(x) 

# set training to False to avoid keeping track of statistics in the batch norm layer
x = base_model(x, training=False) 

# Add the new Binary classification layers
# use global avg pooling to summarize the info in each channel
x = tf.keras.layers.GlobalAveragePooling2D()(x)
#include dropout with probability of 0.2 to avoid overfitting
x = tf.keras.layers.Dropout(0.2)(x)
    
# create a prediction layer with one neuron (as a classifier only needs one)
prediction_layer = tf.keras.layers.Dense(1, activation='linear')(x)

### END CODE HERE

outputs = prediction_layer
model = tf.keras.Model(inputs, outputs)
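
Since the Dense layer here uses a linear activation, the model outputs logits, so the loss should be built with from_logits=True. Assuming the block above is placed back inside alpaca_model as in the question's template (with the def wrapper and return model), the training setup looks roughly like this; the hyperparameter values are only an example, not taken from the assignment:

model = alpaca_model(IMG_SIZE, data_augmentation)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.summary()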

Answer 1 (score: 0)

I ran into the same problem, and my mistake was appending (x) to the Dense layer at the end. Here is the code that worked for me:

def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    ''' Define a tf.keras model for binary classification out of the MobileNetV2 model
    Arguments:
        image_shape -- Image width and height
        data_augmentation -- data augmentation function
    Returns:
        tf.keras.model
    '''

    input_shape = image_shape + (3,)

    ### START CODE HERE

    base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                                   include_top=False, # <== Important!!!!
                                                   weights='imagenet') # From imageNet

    # Freeze the base model by making it non trainable
    base_model.trainable = False

    # create the input layer (Same as the imageNetv2 input size)
    inputs = tf.keras.Input(shape=input_shape)

    # apply data augmentation to the inputs
    x = data_augmentation(inputs)

    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(x)

    # set training to False to avoid keeping track of statistics in the batch norm layer
    x = base_model(x, training=False)

    # Add the new Binary classification layers
    # use global avg pooling to summarize the info in each channel
    x = tfl.GlobalAveragePooling2D()(x)
    # include dropout with probability of 0.2 to avoid overfitting
    x = tfl.Dropout(0.2)(x)

    # create a prediction layer with one neuron (as a classifier only needs one)
    prediction_layer = tfl.Dense(1, activation='linear')

    ### END CODE HERE

    outputs = prediction_layer(x)
    model = tf.keras.Model(inputs, outputs)

    return model
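
To make the difference concrete, both of the following patterns build the same model; the bug is mixing them so the Dense layer call is applied twice. This is a sketch, not the graded code:

# Pattern in Answer 0: the layer is called immediately, so prediction_layer is already a tensor
prediction_layer = tfl.Dense(1, activation='linear')(x)
outputs = prediction_layer

# Pattern here (and in the original template): keep the layer object and call it once afterwards
prediction_layer = tfl.Dense(1, activation='linear')
outputs = prediction_layer(x)

# Mixing them, i.e. Dense(1, ...)(x) inside the code block while the template's
# outputs = prediction_layer(x) line still runs afterwards, either errors or builds the wrong model.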

Answer 2 (score: 0)

Under def data_augmentation, your parentheses are not properly closed.
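
Closed properly (and with tf rather than the tl typo), the helper looks roughly like this:

def data_augmentation():
    data = tf.keras.Sequential()
    data.add(RandomFlip("horizontal"))  # closing parenthesis added
    data.add(RandomRotation(0.2))       # closing parenthesis added
    return data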