I am learning how to use meta-learning with Python 3.7.7, TensorFlow 2.1.0 and Keras 2.3.1.
I am doing 2-way 5-shot learning.
As the feature extractor, I am using the VGG-16 conv1 through conv5 layers:
import keras
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D
from keras.optimizers import Adam

def vgg16_encoder(input_size = (200,200,1)):
    inputs = Input(input_size, name = 'input')

    conv1 = Conv2D(64, (3, 3), activation = 'relu', padding = 'same', name ='conv1_1')(inputs)
    conv1 = Conv2D(64, (3, 3), activation = 'relu', padding = 'same', name ='conv1_2')(conv1)
    pool1 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_1')(conv1)

    conv2 = Conv2D(128, (3, 3), activation = 'relu', padding = 'same', name ='conv2_1')(pool1)
    conv2 = Conv2D(128, (3, 3), activation = 'relu', padding = 'same', name ='conv2_2')(conv2)
    pool2 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_2')(conv2)

    conv3 = Conv2D(256, (3, 3), activation = 'relu', padding = 'same', name ='conv3_1')(pool2)
    conv3 = Conv2D(256, (3, 3), activation = 'relu', padding = 'same', name ='conv3_2')(conv3)
    conv3 = Conv2D(256, (3, 3), activation = 'relu', padding = 'same', name ='conv3_3')(conv3)
    pool3 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_3')(conv3)

    conv4 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv4_1')(pool3)
    conv4 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv4_2')(conv4)
    conv4 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv4_3')(conv4)
    pool4 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_4')(conv4)

    conv5 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv5_1')(pool4)
    conv5 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv5_2')(conv5)
    conv5 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv5_3')(conv5)
    pool5 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_5')(conv5)

    opt = Adam(lr=0.001)
    model = Model(inputs = inputs, outputs = pool5, name = 'vgg-16_encoder')
    model.compile(optimizer=opt, loss=keras.losses.categorical_crossentropy, metrics=['accuracy'])
    return model
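For reference, this is how I instantiate the encoder. With a 200x200x1 input, the pool_5 output should be 6x6x512 after the five pooling stages, if I have counted correctly:

encoder = vgg16_encoder(input_size=(200, 200, 1))
encoder.summary()   # last layer pool_5: (None, 6, 6, 512)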
My dataset is a NumPy array of shape (960, 2, 200, 200, 1): 960 pairs of 200x200x1 images, where each pair is (X, Y). I am trying to do semantic segmentation.
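In other words (D being the array described above), the images and the masks can be separated along the second axis; X_imgs and Y_masks are just my own names:

X_imgs  = D[:, 0]   # input images, shape (960, 200, 200, 1)
Y_masks = D[:, 1]   # segmentation masks, shape (960, 200, 200, 1)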
Here is my algorithm:
# 1. Let's say we have the dataset, D, comprising
# {(x1, y1), (x2, y2), ... (xn, yn)} where x is the feature and y is the
# class label.
# ------------------------------------------------------------------------------
import numpy as np

# Merge all images into one dataset.
dataset = []
# !!!! Removed for brevity !!!!
# Convert it into a Numpy Array.
D = np.array(dataset)
# 2. Since we perform episodic training, we randomly sample n number of data
# points per each class from our dataset, D, and prepare our support set, S.
# ------------------------------------------------------------------------------
#
# 3. Similarly, we select n number of data points and prepare our query set, Q.
# ------------------------------------------------------------------------------
# Number of pairs in dataset.
no_of_samples = D.shape[0]
for epoch in range(num_epochs):
    for episode in range(num_episodes):
        selected = np.random.permutation(no_of_samples)[:num_shot + num_query]
        # Create our Support Set from the first num_shot indices.
        support_set = np.array(D[selected[:num_shot]])
        # Create our Query Set from the remaining num_query indices.
        query_set = np.array(D[selected[num_shot:]])
# 4. We learn the embeddings of the data points in our support set using
# our embedding function, fφ(). The embedding function can be any
# feature extractor—say, a convolutional network for images and an LSTM
# network for text.
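My rough guess at what step 4 could look like (encoder, support_images, support_masks and support_embeddings are my own names; everything after building the encoder would sit inside the episode loop above):

# Build the embedding function f_phi once, before the training loops.
encoder = vgg16_encoder(input_size=(200, 200, 1))

# Split each (X, Y) pair of the current support set.
support_images = support_set[:, 0]   # shape (num_shot, 200, 200, 1)
support_masks  = support_set[:, 1]   # shape (num_shot, 200, 200, 1)

# Embed the support images with the encoder.
support_embeddings = encoder.predict(support_images)   # shape (num_shot, 6, 6, 512)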
This is where I get stuck: I don't know how to proceed from here.
In another example, using U-Net, I have the following code:
from sklearn.model_selection import train_test_split

img_shape = (rows_standard, cols_standard, 1)
model = utils.get_unet(img_shape)
model.summary()

# Split train and valid
X_train, X_valid, y_train, y_valid = train_test_split(x_images, y_images, test_size=0.1, random_state=42)

results = model.fit(X_train, y_train, batch_size=32, epochs=50,
                    validation_data=(X_valid, y_valid))
I think I have to do something similar, but replacing the train_test_split split with the Support Set; I'm not sure, though.
How do I use the Support Set and the Query Set to train the feature extractor?
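For what it's worth, my naive attempt would be to fit on the Support Set and validate on the Query Set inside every episode, roughly like this (using the U-Net from the second example, since the VGG-16 encoder alone does not output a 200x200x1 mask; I doubt this is how episodic training is supposed to work):

model = utils.get_unet((200, 200, 1))
for epoch in range(num_epochs):
    for episode in range(num_episodes):
        selected = np.random.permutation(no_of_samples)[:num_shot + num_query]
        support_set = D[selected[:num_shot]]
        query_set = D[selected[num_shot:]]
        # Fit on the support pairs, validate on the query pairs.
        model.fit(support_set[:, 0], support_set[:, 1],
                  batch_size=num_shot, epochs=1, verbose=0,
                  validation_data=(query_set[:, 0], query_set[:, 1]))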