Transfer learning with Inception Resnet v2 (breast cancer), low accuracy

Date: 2021-07-19 21:52:49

Tags: python tensorflow keras deep-learning conv-neural-network

I want to use transfer learning with Inception Resnet v2 for binary classification of the breast cancer histopathology images in the BreakHis dataset (https://www.kaggle.com/ambarish/breakhis). The goal is to freeze all the pre-trained layers and train only a new classification head added on top of the model. Initially, I want to consider only the images at 40X magnification (benign: 625, malignant: 1370). Here is a summary of what I did:

  • I read the images and resized them to 150x150
  • I split the dataset into training, validation, and test sets
  • I loaded the pre-trained Inception Resnet v2 network
  • I froze all the layers and added a single sigmoid output neuron for binary classification (1 = "benign", 0 = "malignant")
  • I compiled the model with the Adam optimizer
  • I trained the model
  • I made predictions
  • I computed the accuracy

Here is the code:

data = dataset[dataset["Magnificant"]=="40X"]
def preprocessing(dataset, img_size):
    # images
    X = []
    # labels 
    y = []
    
    i = 0
    for image in list(dataset["Path"]):
        # Resize and read the images
        X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), 
                            (img_size, img_size), interpolation=cv2.INTER_CUBIC))
        basename = os.path.basename(image)
        
        # Get labels (positional indexing: the filtered frame keeps its original index)
        if dataset.iloc[i, 2] == "benign":
            y.append(1)
        else:
            y.append(0)
        i = i+1
    return X, y

X, y = preprocessing(data, 150)
X = np.array(X)
y = np.array(y)
# Splitting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, shuffle=True, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, stratify = y_train, shuffle=True, random_state=1) 
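With these two chained splits, the data ends up roughly 52.5% train / 17.5% validation / 30% test; the fractions can be checked with plain arithmetic:

```python
# Fractions produced by the two train_test_split calls above
test_frac = 0.3
val_frac = (1 - test_frac) * 0.25    # 25% of the remaining 70%
train_frac = (1 - test_frac) * 0.75  # 75% of the remaining 70%

print(round(train_frac, 3), round(val_frac, 3), test_frac)
```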

conv_base = InceptionResNetV2(weights='imagenet', include_top=False, input_shape=[150, 150, 3])   

# Freezing
for layer in conv_base.layers:
    layer.trainable = False

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))

opt = tf.keras.optimizers.Adam(learning_rate=0.0002)

loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)

model.compile(loss=loss, optimizer=opt, metrics = ["accuracy", tf.metrics.AUC()])

batch_size = 32

train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255) 
train_generator = train_datagen.flow(X_train, y_train, batch_size=batch_size) 
val_generator = val_datagen.flow(X_val, y_val, batch_size=batch_size)

callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)

ntrain = len(X_train)
nval = len(X_val)
epochs = 70
history = model.fit_generator(train_generator,
                              steps_per_epoch=ntrain // batch_size,
                              epochs=epochs,
                              validation_data=val_generator,
                              validation_steps=nval // batch_size, callbacks=[callback])

Here is the training output for the last epoch:

Epoch 70/70
32/32 [==============================] - 3s 84ms/step - loss: 0.0499 - accuracy: 0.9903 - auc_5: 0.9996 - val_loss: 0.5661 - val_accuracy: 0.8250 - val_auc_5: 0.8521

I make predictions:

test_datagen = ImageDataGenerator(rescale=1./255) 
x = X_test
# shuffle=False keeps predictions aligned with y_test (flow shuffles by default)
y_pred = model.predict(test_datagen.flow(x, shuffle=False))

y_p = []
for i in range(len(y_pred)):
    if y_pred[i] > 0.5:
        y_p.append(1)
    else:
        y_p.append(0)
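The thresholding loop above can be replaced by a single vectorized NumPy expression (a sketch using a hypothetical `y_pred` array in place of the real `model.predict` output):

```python
import numpy as np

# Hypothetical probabilities standing in for the (N, 1) array from model.predict
y_pred = np.array([[0.91], [0.12], [0.55], [0.49]])

# Threshold at 0.5, cast to int labels, and flatten the column to 1-D
y_p = (y_pred > 0.5).astype(int).ravel()
print(y_p)  # -> [1 0 1 0]
```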

I compute the accuracy:

from sklearn.metrics import accuracy_score
accuracy =  accuracy_score(y_test, y_p)
print(accuracy)

This is the accuracy value I get: 0.5459098497495827

Why is my accuracy so low? I have run several tests, but I always get similar results. (Please help.)

1 answer:

Answer 0: (score: 0)

When doing transfer learning, especially with frozen weights, it is very important to apply the same preprocessing that was used when the network was originally trained.

For the InceptionResNetV2 network, the preprocessing mode in the tensorflow/keras library is "tf", which for the imagenet weights means dividing by 127.5 and then subtracting 1 (scaling pixels to [-1, 1]). Your code instead divides by 255.
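The difference between the two scalings can be seen with plain NumPy (a sketch of the arithmetic only, not the library call itself):

```python
import numpy as np

# A toy pixel vector covering the extremes of the uint8 range
x = np.array([0.0, 127.5, 255.0])

# "tf"-mode preprocessing expected by InceptionResNetV2: maps [0, 255] -> [-1, 1]
x_tf = x / 127.5 - 1.0

# The rescale=1./255 used in the question: maps [0, 255] -> [0, 1]
x_scaled = x / 255.0

print(x_tf)      # [-1.  0.  1.]
print(x_scaled)  # [0.   0.5  1. ]
```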

Fortunately, you don't have to dig through the code to find out which function is used, because it is exposed in the API. Simply do

train_datagen = ImageDataGenerator(preprocessing_function=tf.keras.applications.inception_resnet_v2.preprocess_input)

and likewise for the validation and test generators (dropping the rescale=1./255).