Keras Conv2DTranspose layers in a convolutional GAN

Date: 2020-07-01 12:15:36

Tags: tensorflow keras conv-neural-network keras-layer generative-adversarial-network

I am trying to train a convolutional GAN in Keras with the TensorFlow backend to generate faces. Having read several examples, there seem to be two ways of building the generator: you can either upsample with strided Conv2DTranspose layers, or use UpSampling2D layers followed by ordinary Conv2D layers.

My generator built with strided Conv2DTranspose layers looks like this:

from tensorflow.keras.layers import (Input, Concatenate, Dense, Reshape,
                                     Conv2DTranspose, BatchNormalization, ReLU)
from tensorflow.keras.models import Model

# NUM_FEATS and GENERATE_RES are constants defined elsewhere in my code.
def build_generator(seed_size, channels):
    # Noise vector plus a conditioning feature vector as inputs.
    inputs_rand = Input(shape=(seed_size,))
    inputs_feat = Input(shape=(NUM_FEATS,))
    inputs = Concatenate()([inputs_rand, inputs_feat])

    # Project and reshape the inputs into a 4x4x64 feature map.
    dense1 = Dense(4*4*64, activation='relu')(inputs)
    reshape1 = Reshape((4,4,64))(dense1)

    # Upsample with strided transposed convolutions.
    conv_trans1 = Conv2DTranspose(64, kernel_size=5, strides=2*GENERATE_RES, padding='same')(reshape1)
    batch_norm1 = BatchNormalization(momentum=0.8)(conv_trans1)
    leaky_relu1 = ReLU()(batch_norm1)

    conv_trans2 = Conv2DTranspose(64, kernel_size=5, strides=2, padding='same')(leaky_relu1)
    batch_norm2 = BatchNormalization(momentum=0.8)(conv_trans2)
    leaky_relu2 = ReLU()(batch_norm2)

    conv_trans3 = Conv2DTranspose(64, kernel_size=5, strides=2, padding='same')(leaky_relu2)
    batch_norm3 = BatchNormalization(momentum=0.8)(conv_trans3)
    leaky_relu3 = ReLU()(batch_norm3)

    # Final layer maps to the requested number of channels with a tanh output.
    output = Conv2DTranspose(channels, kernel_size=3, padding='same', activation='tanh')(leaky_relu3)

    generator = Model(inputs=[inputs_rand, inputs_feat], outputs=[output, inputs_feat])
    return generator
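
For comparison, the UpSampling2D + Conv2D version replaces each strided transposed convolution with a nearest-neighbour upsampling step followed by a plain stride-1 convolution. Below is only a rough sketch of that variant (it mirrors the layer sizes above and reuses the same NUM_FEATS and GENERATE_RES constants), not my exact code:

from tensorflow.keras.layers import (Input, Concatenate, Dense, Reshape, UpSampling2D,
                                     Conv2D, BatchNormalization, ReLU)
from tensorflow.keras.models import Model

def build_generator_upsampling(seed_size, channels):
    # Same inputs as the Conv2DTranspose version: a noise vector plus a feature vector.
    inputs_rand = Input(shape=(seed_size,))
    inputs_feat = Input(shape=(NUM_FEATS,))
    inputs = Concatenate()([inputs_rand, inputs_feat])

    dense1 = Dense(4*4*64, activation='relu')(inputs)
    reshape1 = Reshape((4, 4, 64))(dense1)

    # First block: upsample by the same overall factor as the strided
    # Conv2DTranspose above, then apply a stride-1 convolution.
    x = UpSampling2D(size=2*GENERATE_RES)(reshape1)
    x = Conv2D(64, kernel_size=5, padding='same')(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = ReLU()(x)

    # Two more 2x upsampling blocks, matching the remaining strided layers.
    for _ in range(2):
        x = UpSampling2D()(x)
        x = Conv2D(64, kernel_size=5, padding='same')(x)
        x = BatchNormalization(momentum=0.8)(x)
        x = ReLU()(x)

    output = Conv2D(channels, kernel_size=3, padding='same', activation='tanh')(x)

    return Model(inputs=[inputs_rand, inputs_feat], outputs=[output, inputs_feat])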

I have read in a few places that Conv2DTranspose is preferable, but I cannot seem to get it to work. It only ever produces a repeating noise pattern whose period matches the stride, and this stays the same no matter how long I train. Meanwhile, the other approach seems to work fine, but I would like to get both working, if only to satisfy my curiosity. I assume I must be doing something wrong, yet my code looks almost identical to the other examples I have found, and I cannot find anyone else running into this particular problem.

I have already tried tweaking the model a little, for example adding dropout and removing the batch normalization, in case there was a simple fix, but nothing seems to help. I have not included the rest of the code to keep things tidy, but I can add it if that would help.
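
To give a concrete idea of the kind of tweak I mean, one of the upsampling blocks I experimented with looked roughly like this (an illustrative sketch only; the 0.25 dropout rate is just an example value, not necessarily what I used):

from tensorflow.keras.layers import Conv2DTranspose, Dropout, ReLU

def upsample_block(x, filters):
    # Variant block: batch normalization removed and dropout added instead.
    x = Conv2DTranspose(filters, kernel_size=5, strides=2, padding='same')(x)
    x = Dropout(0.25)(x)  # example rate only
    x = ReLU()(x)
    return x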

This is the noise obtained when using the Conv2DTranspose layers.

Meanwhile the Upsampling with Conv2D layers produce these, for example.

Any comments or suggestions on how to improve my results would also be welcome.

0 Answers:

No answers yet.