Convolutional neural network for image segmentation with TensorFlow

Posted: 2021-05-08 12:52:54

Tags: python tensorflow deep-learning conv-neural-network image-segmentation

I'm trying to build my first neural network with Tensorflow. I have some medical images and my goal is to segment them. I can't find what I'm doing wrong. Here is the error:

2021-05-08 14:33:15.249134: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/50
Traceback (most recent call last):
  File "C:/Users/tompi/PycharmProjects/ProjetDeepLearning/test.py", line 185, in <module>
    history = model.fit(X_train, Y_train, epochs=epochs,
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1100, in fit
    tmp_logs = self.train_function(iterator)
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\eager\def_function.py", line 828, in __call__
    result = self._call(*args, **kwds)
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\eager\def_function.py", line 871, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\eager\def_function.py", line 725, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\eager\function.py", line 2969, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\eager\function.py", line 3361, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\eager\function.py", line 3196, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\framework\func_graph.py", line 990, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\eager\def_function.py", line 634, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\framework\func_graph.py", line 977, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\keras\engine\training.py:805 train_function  *
        return step_function(self, iterator)
    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\keras\engine\training.py:795 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1259 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2730 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3417 _call_for_each_replica
        return fn(*args, **kwargs)
    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\keras\engine\training.py:788 run_step  **
        outputs = model.train_step(data)
    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\keras\engine\training.py:754 train_step
        y_pred = self(x, training=True)
    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\keras\engine\base_layer.py:998 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
    C:\Users\tompi\anaconda3\envs\tf2.4\lib\site-packages\tensorflow\python\keras\engine\input_spec.py:204 assert_input_compatibility
        raise ValueError('Layer ' + layer_name + ' expects ' +

    ValueError: Layer sequential expects 1 input(s), but it received 44 input tensors. Inputs received: ...

Below is my code:

import tensorflow as tf
import pandas as pd
import numpy as np
import tensorflow.keras
import segmentation_models as sm
import os
import cv2
import Metrics as metrics # a python file
import matplotlib.pyplot as plt
from tensorflow.keras import datasets, layers, models
from sklearn.model_selection import train_test_split

width = 672
height = 448
dataframe = []

def normalize(path):
    image = cv2.imread(path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (width, height))
    # newSize = np.zeros((height, width, 3))
    # newSize[:, :, 0] = image[:, :]
    # newSize[:, :, 1] = image[:, :]
    # newSize[:, :, 2] = image[:, :]
    return image

def createDataset():
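    # imagesPath (defined elsewhere in the script) is the root folder that
    # contains the 'Original' and 'Mask' subfolders.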
    for folder in os.listdir(imagesPath):
        for imageName in os.listdir(imagesPath + folder):
            image = normalize(imagesPath + folder + "/" + imageName)
            dataframe.append([folder, imageName, image])

createDataset()
df = pd.DataFrame(dataframe, columns=['Folder', 'Name', 'Image'])

def getImagesFromFolder(folder):
    L = []
    n, p = np.shape(df)
    for i in range(n):
        if df['Folder'][i] == folder:
            L.append(df.iloc[i][2])
    return L

originalImages = getImagesFromFolder('Original')
maskImages = getImagesFromFolder('Mask')

X_train, X_test, Y_train, Y_test = train_test_split(originalImages, maskImages, train_size=0.8, random_state=42)

classes = 3
activation = "softmax"
lr = 0.0001
loss = sm.losses.jaccard_loss
metrics = training_metrics = [
    sm.metrics.IOUScore(threshold=0.5),
    sm.metrics.FScore(threshold=0.5),
    sm.metrics.Precision(),
    sm.metrics.Recall(),
    metrics.dice_coef
]
batch_size = 3
epochs = 50
callbacks = [tensorflow.keras.callbacks.ReduceLROnPlateau()]

I'm using a very simple Unet:

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(height, width, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.summary()

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

model.summary()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(X_train, Y_train, epochs=epochs,
                    validation_data=(X_test, Y_test))

The error says the input received 44 tensors, which is the number of images in X_train and Y_train (44, 448, 672, 3), but I don't know what I'm doing wrong; I've seen several posts using the same shape and it worked. Could anyone help? It would be greatly appreciated. Thanks.
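(A hypothetical quick check, not part of the original script, of what fit() is actually given; the values in the comments follow from the shapes quoted above:)

print(type(X_train), len(X_train))   # <class 'list'> with 44 elements
print(np.shape(X_train[0]))          # (448, 672, 3) for each image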

1 Answer:

Answer 0 (score: 0)

I found where the error was. My variables X_train, Y_train, X_test and Y_test were of type list instead of numpy.ndarray because of my function getImagesFromFolder. I had to return np.array(L) to make it work.
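
For reference, a minimal sketch of the corrected function (it reuses the df dataframe from the question; only the return statement changes, so each call hands Keras a single 4-D NumPy array instead of a Python list of images):

def getImagesFromFolder(folder):
    # Same logic as in the question, but stack the images into one
    # numpy.ndarray of shape (num_images, height, width, 3) so that
    # model.fit() sees a single input tensor instead of 44 separate ones.
    L = []
    n, p = np.shape(df)
    for i in range(n):
        if df['Folder'][i] == folder:
            L.append(df.iloc[i][2])
    return np.array(L)

# Alternatively, the existing lists can be converted just before training:
# X_train, Y_train = np.array(X_train), np.array(Y_train)
# X_test, Y_test = np.array(X_test), np.array(Y_test)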