Transfer learning with MobileNetV2

Time: 2019-10-08 01:25:14

Tags: tensorflow deep-learning conv-neural-network tensorflow-datasets transfer-learning

I am trying to do transfer learning with MobileNetV2, starting from https://www.tensorflow.org/tutorials/images/transfer_learning.

The tutorial above uses the MobileNetV2 model as the base model and the "cats_vs_dog" dataset, which is of type tensorflow.python.data.ops.dataset_ops._OptionsDataset. In my case, I want to use MobileNetV2 as the base model, freeze all the weights of the different CNN layers, add a fully connected layer, and fine-tune it. The dataset I am using is tiny_imagenet. Below is my code:

## After pre-processing the data:
(x_train, y_train), (x_valid, y_valid), (x_test, y_test) = data

# type(x_train) = numpy.ndarray
# len(x_train) = 1750
## Converting the data to use the pipeline that comes with tf.data.Dataset
raw_train = tf.data.Dataset.from_tensor_slices((x_train,y_train))
raw_validation = tf.data.Dataset.from_tensor_slices((x_valid, y_valid))
raw_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
# print(raw_train) gives:
# <DatasetV1Adapter shapes: ((64, 64, 3), ()), types: (tf.float64, tf.int64)>

## Now I follow everything from the link (given above in the problem statement):
IMG_SIZE = 160 # All images will be resized to 160x160

def format_example(image, label):
  image = tf.cast(image, tf.float32)
  image = (image/127.5) - 1
  image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
  return image, label

train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)

# print(train) gives:
# <DatasetV1Adapter shapes: ((160, 160, 3), ()), types: (tf.float32, tf.int64)>

BATCH_SIZE = 32             # these two constants were missing above; values taken from the tutorial
SHUFFLE_BUFFER_SIZE = 1000

train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)

# print(train_batches) gives:
# <DatasetV1Adapter shapes: ((?, 160, 160, 3), (?,)), types: (tf.float32, tf.int64)>
## The corresponding command in the tutorial (which works on the cats vs dogs dataset) gives:
# <BatchDataset shapes: ((None, 160, 160, 3), (None,)), types: (tf.float32, tf.int64)>

I also tried using padded_batch() instead of batch(), but the following still goes into an infinite loop.


## Goes into an infinite loop
for image_batch, label_batch in train_batches.take(1):
  print("hello")
  pass
image_batch.shape  ## Does not reach here

## The same command in the tutorial gives:
# hello
# TensorShape([32, 160, 160, 3])

## Further, in my case:
# print(train_batches.take(1)) gives
# <DatasetV1Adapter shapes: ((?, 160, 160, 3), (?,)), types: (tf.float32, tf.int64)>
## In the tutorial it gives:
# <TakeDataset shapes: ((None, 160, 160, 3), (None,)), types: (tf.float32, tf.int64)>

image_batch is used later in the code.

## Load the pre-trained model:
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
## This feature extractor converts each 160x160x3 image to a 5x5x1280 block
## of features. See what it does to the example batch of images:
feature_batch = base_model(image_batch)
print(feature_batch.shape)  ## (32, 5, 5, 1280)

## Freezing the convolutional base
base_model.trainable = False

## Adding a classification head:
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape) ## (32, 1280)

prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)  ## (32, 1)

model = tf.keras.Sequential([
  base_model,
  global_average_layer,
  prediction_layer
])
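
For completeness, the tutorial continues by compiling and training this model. A minimal sketch of those remaining steps, assuming the tutorial's learning rate and an illustrative epoch count:

# Sketch of the tutorial's compile/train steps; base_learning_rate comes from
# the tutorial, epochs=10 is illustrative.
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate),
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit(train_batches,
                    epochs=10,
                    validation_data=validation_batches)

Note that a Dense(1) head with binary cross-entropy only fits a two-class problem like cats vs. dogs; tiny_imagenet has many classes, so it would need Dense(num_classes) and a sparse categorical loss instead.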

I have never worked much with TensorFlow. Any ideas on how to make this work?

1 Answer:

Answer 0 (score: 0)

Padded batch: padded_batch is used when the elements of a dataset have different shapes, whereas batch requires all of its elements to have the same shape.
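
A toy sketch of the difference (the datasets and values here are illustrative, not from the question):

# batch() requires every element to have the same shape
same = tf.data.Dataset.from_tensor_slices(tf.zeros((6, 4)))
same_batches = same.batch(2)  # fine: every element has shape (4,)

# padded_batch() pads variable-length elements up to a common shape per batch
ragged = tf.data.Dataset.from_generator(
    lambda: ([1], [1, 2], [1, 2, 3], [1, 2, 3, 4]),
    output_types=tf.int32,
    output_shapes=tf.TensorShape([None]))
padded = ragged.padded_batch(2, padded_shapes=[None])
# each batch has shape (2, longest_in_batch), zero-padded on the right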

The problem with your code is that you are not actually stuck in the infinite loop you describe. The dataset you are using is tiny imagenet, which contains 100,000 images, and one pass over all of them takes time. If you don't want to wait that long, you can change the pass inside the batch loop to break, and it will exit the loop after the first iteration.
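
Concretely, the loop from the question becomes:

for image_batch, label_batch in train_batches.take(1):
  print("hello")
  break  # stop after the first batch instead of falling through with pass

print(image_batch.shape)  # e.g. TensorShape([32, 160, 160, 3])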

There is also another operation called repeat. It repeats the dataset as many times as you specify in its count argument. However, if you set count to -1, the dataset keeps cycling indefinitely, and in that case your dataset does go into an infinite loop.
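
A small sketch of how count behaves (the variable names are illustrative):

two_passes = train_batches.repeat(2)  # iterates the whole dataset twice, then stops
endless = train_batches.repeat(-1)    # cycles forever; bound it with take() or a
                                      # fixed number of steps before iterating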