I am new to deep learning and Stack Overflow.
I am trying to build a simple encoder/decoder for my images, which are quite large (192 * 288). Here is my attempt:
from tensorflow.keras.layers import Input, Dense

input_img = Input(shape=(55296,))
encoded = Dense(units=13824, activation='relu')(input_img)
decoded = Dense(units=55296, activation='relu')(encoded)
But I don't understand why I keep getting this error:
ResourceExhaustedError: OOM when allocating tensor with shape[55296,13824] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Add]
I know this happens because the tensor is too large for my GPU. But shouldn't such a simple architecture fit? I am using Google Colab, and when I switch from GPU to TPU it works, but later, when I try to fit the model, it runs out of memory there too. Please help me.
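For what it's worth, the size of that first Dense layer can be estimated by hand (a rough back-of-the-envelope count, assuming float32 weights; optimizer state during training typically multiplies this by 3-4x):

```python
# Parameter count of the first Dense layer: 55296 inputs -> 13824 units.
weights = 55296 * 13824       # entries in the weight matrix
biases = 13824                # one bias per unit
params = weights + biases
bytes_fp32 = params * 4       # float32 = 4 bytes per parameter

print(f"{params:,} parameters")               # 764,425,728 parameters
print(f"{bytes_fp32 / 1024**3:.2f} GiB")      # 2.85 GiB for weights alone
```

That single layer already needs nearly 3 GiB just to store its weights, which matches the `[55296,13824]` tensor in the OOM message.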
Edit:
I tried using a CNN instead. Here is the architecture:
from tensorflow.keras import models
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     Flatten, Dense, Reshape, UpSampling2D)

input_img = Input(shape=(192, 288, 1))
encode1 = Conv2D(64, (3, 3), activation='relu', padding='same')(input_img)
encode2 = MaxPooling2D((2, 2), padding='same')(encode1)
encode3 = Conv2D(32, (3, 3), activation='relu', padding='same')(encode2)
encode4 = MaxPooling2D((2, 2), padding='same')(encode3)
encode5 = Conv2D(64, (3, 3), activation='relu', padding='same')(encode4)
l = Flatten()(encode5)
l = Dense(3456, activation='relu')(l)
l = Dense(100, activation='relu')(l)
#DECODER
d = Dense(3456, activation='relu')(l)
d = Reshape((48,72,1))(d)
decode1 = Conv2D(32, (3, 3), activation='relu', padding='same')(d)
decode2 = UpSampling2D((2, 2))(decode1)
decode3 = Conv2D(32, (3, 3), activation='relu', padding='same')(decode2)
decode4 = UpSampling2D((2, 2))(decode3)
decode5 = Conv2D(64, (3, 3), activation='relu', padding='same')(decode4)
model = models.Model(input_img, decode5)
But this one also runs out of memory during training. Can someone guide me on how to build a better architecture with fewer weights, or do I need a better system?
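Counting parameters again suggests where the memory goes in this version too (a rough count based on the layer shapes above; the two 2x2 poolings shrink 192x288 down to 48x72 before `Flatten`):

```python
# Memory hotspot in the CNN: the Flatten -> Dense(3456) connection.
flat = 48 * 72 * 64                    # encode5 output flattened: 221,184 values
dense_params = flat * 3456 + 3456      # weights + biases of the first Dense layer

print(f"{flat:,} flattened inputs")    # 221,184
print(f"{dense_params:,} parameters")  # 764,415,360 -- ~764M again
```

So the convolutional layers themselves are cheap; the `Flatten` followed by a large `Dense` reintroduces a weight matrix of essentially the same size as in the first attempt.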