I'm experimenting with TensorFlow. Can someone explain the following block of code?

Time: 2020-11-04 19:18:49

Tags: tensorflow2.0

Before plotting the output of each layer of the model, the method "visualize_convolutions" does some processing on the features, as shown below:

features -= features.mean()
features /= features.std()
features *= 64
features += 128
features = np.clip(features, 0, 255).astype('uint8')

Can anyone explain why these lines are needed?

P.S. I have only included the functions/blocks relevant to the question; hopefully I haven't missed anything.
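To make the question more concrete, here is a small self-contained toy snippet (my own illustration, not taken from the training code; the array shape and random values are made up) that runs just those five lines on a random array and prints the value range before and after:

import numpy as np

# Toy stand-in for one layer's activations: (batch, height, width, channels),
# with values on an arbitrary scale.
rng = np.random.default_rng(0)
features = rng.normal(loc=3.0, scale=0.5, size=(1, 4, 4, 2)).astype('float32')
print('before:', features.min(), features.max(), features.dtype)

# The five lines from visualize_convolutions, unchanged
features -= features.mean()
features /= features.std()
features *= 64
features += 128
features = np.clip(features, 0, 255).astype('uint8')
print('after: ', features.min(), features.max(), features.dtype)

After these lines the values end up centred around 128 and clipped into the 0-255 uint8 range, and it is the reasoning behind that transformation that I would like explained. The relevant parts of my actual code follow below.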

import random

import numpy as np
import matplotlib.pyplot as plt

from tensorflow import keras
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

imgGenerator = ImageDataGenerator(rescale=1./255)
train_images = imgGenerator.flow_from_directory(
    '/tmp/horse-or-human/',
    target_size=(300, 300),
    batch_size=128,
    class_mode='binary')

model = keras.Sequential([
    keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(300, 300, 3)),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(32, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Flatten(),
    keras.layers.Dense(512, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

model.compile(
    optimizer=RMSprop(0.001),
    loss='binary_crossentropy',
    metrics=['accuracy']
  )

model.fit(
    train_images,
    steps_per_epoch=8,
    epochs=15,
    callbacks=[callback],
    verbose=1
)

visualization_model = keras.models.Model(inputs=model.inputs, outputs=[layer.output for layer in model.layers])

def visualize_convolutions(img):
  intermediary_features = visualization_model.predict(img)
  for layer, features in zip([layer.name for layer in model.layers], intermediary_features):
    # only the conv/pooling outputs are 4-D: (batch, height, width, channels)
    if len(features.shape) == 4:
      filter_size = features.shape[1]
      filters = features.shape[-1]
      layer_viz = np.zeros((filter_size, filter_size * filters))
      # rescale the activations into a displayable 0-255 range
      features -= features.mean()
      features /= features.std()
      features *= 64
      features += 128
      features = np.clip(features, 0, 255).astype('uint8')
      # lay the channels out side by side in one wide image
      for i in range(filters):
        layer_viz[:, i * filter_size : (i+1) * filter_size] = features[0, :, :, i]
      plt.figure(figsize=(20., 20. / filters))
      plt.imshow(layer_viz, aspect='auto')
      plt.title(layer)
      plt.show()


img = image.img_to_array(image.load_img('/tmp/horse-or-human/'+random.choice(imgs)))
img = img.reshape((1,)+img.shape)
visualize_convolutions(img)

0 Answers:

No answers yet.