UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory

Time: 2018-11-17 11:43:10

Tags: python tensorflow keras nvidia cudnn

Here is my model:

# Import Keras 
import tensorflow as tf
from tensorflow.python.keras.layers import Conv2D, MaxPooling2D, Flatten, GlobalMaxPool2D
from tensorflow.python.keras.layers import Input, LSTM, Embedding, Dense
from tensorflow.python.keras.models import Model, Sequential
from tensorflow.python import keras
# Define CNN for Image Input
vision_model = Sequential()
vision_model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(120, 160, 3)))
vision_model.add(Conv2D(64, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(128, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(196, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(196, (3, 3), activation='relu'))
vision_model.add(Conv2D(196, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(256, (3, 3), activation='relu'))
vision_model.add(Conv2D(256, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(384, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(384, (3, 3), activation='relu'))
vision_model.add(GlobalMaxPool2D())
vision_model.summary()
image_input = Input(shape=(120, 160, 3))
encoded_image = vision_model(image_input)

# Define RNN for language input
question_input = Input(shape=(42,), dtype='int32')
embedded_question = Embedding(input_dim=500, output_dim=256, input_length=42)(question_input)  # input_length matches Input(shape=(42,))
encoded_question = LSTM(256, return_sequences=True)(embedded_question)
encoded_question = LSTM(256)(encoded_question)  # feed the first LSTM's output, not the embedding, into the second LSTM

# Combine CNN and RNN to create the final model
merged = keras.layers.concatenate([encoded_question, encoded_image])
output = Dense(26, activation='softmax')(merged)
vqa_model = Model(inputs=[image_input, question_input], outputs=output)
vqa_model.summary()
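As a side note, the shape arithmetic of the convolutional stack can be checked by hand. The following is a small sketch (the helper functions are hypothetical, not part of the question's code) that mirrors how Keras computes the output sizes:

```python
# Sanity-check sketch: trace the feature-map height/width through the
# CNN above. Assumptions mirroring Keras' shape arithmetic:
#  - padding='same' convs keep H/W unchanged,
#  - padding='valid' (the default) with a 3x3 kernel loses kernel-1 = 2,
#  - MaxPooling2D((2, 2)) floor-divides each dimension by 2.
def conv_valid(h, w, k=3):
    return h - (k - 1), w - (k - 1)

def pool(h, w):
    return h // 2, w // 2

shape = (120, 160)  # from input_shape=(120, 160, 3)
# Per block: number of padding='valid' convs, and whether a 2x2 pool follows.
# Each block also opens with one padding='same' conv, which keeps the shape.
for n_valid, has_pool in [(1, True), (1, True), (2, True), (2, True), (1, False)]:
    for _ in range(n_valid):
        shape = conv_valid(*shape)
    if has_pool:
        shape = pool(*shape)

print(shape)  # (2, 4); GlobalMaxPool2D then reduces this to a 384-dim vector
```

This confirms the spatial extent never collapses to zero before GlobalMaxPool2D, so the conv stack itself is geometrically valid for 120x160 inputs.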

...

vqa_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
vqa_model.fit_generator(train_gen, steps_per_epoch=len(train_data[0]) // 16, validation_data=val_gen, validation_steps=split // 16, verbose=1, epochs=100)

After running this, I get the following error:

UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node sequential/conv2d/Conv2D}} = Conv2D[T=DT_FLOAT, _class=["loc:@training/Adam/gradients/sequential/conv2d/Conv2D_grad/Conv2DBackpropFilter"], data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/gradients/sequential/conv2d/Conv2D_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, sequential/conv2d/Conv2D/ReadVariableOp)]]
[[{{node ConstantFoldingCtrl/loss/dense_loss/broadcast_weights/assert_broadcastable/AssertGuard/Switch_0/_316}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1886_...d/Switch_0", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

I am using tensorflow-gpu on a local system with an NVIDIA 1050 Ti.

I have tried searching for a solution, but nothing has worked.
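One commonly suggested first check for "Failed to get convolution algorithm" on GPUs with limited memory such as the 1050 Ti is to let TensorFlow allocate GPU memory on demand instead of grabbing it all up front. A sketch using the TF 1.x API that the question's code targets (a workaround to try, not a confirmed fix for this exact setup):

```python
import tensorflow as tf
from tensorflow.python.keras import backend as K

# Ask TensorFlow to grow its GPU allocation as needed; a full up-front
# allocation can leave cuDNN without workspace memory, which can make
# its initialization fail with this UnknownError.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
```

This snippet would need to run before the model is built, so that the model's session picks up the config.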

0 Answers:

There are no answers yet.