CancelledError: [_Derived_] RecvAsync is cancelled

Date: 2019-10-12 22:39:45

Tags: tensorflow tensorflow2.0 tf.keras keras-2

I have run into a problem. I run the same code on my local machine with a CPU and TensorFlow 1.14.0, and it works fine. However, when I run it on a GPU with TensorFlow 2.0, I get:

CancelledError:  [_Derived_]RecvAsync is cancelled.      [[{{node Adam/Adam/update/AssignSubVariableOp/_65}}]]   [[Reshape_13/_62]] [Op:__inference_distributed_function_3722]

Function call stack: distributed_function

Reproducible code is here:

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)

import matplotlib.pyplot as plt
%matplotlib inline

batch_size = 32
num_obs = 100
num_cats = 1 # number of categorical features
n_steps = 10 # number of timesteps in each sample
n_numerical_feats = 18 # number of numerical features in each sample
cat_size = 12 # number of unique categories in each categorical feature
embedding_size = 1 # embedding dimension for each categorical feature

labels =  np.random.random(size=(num_obs*n_steps,1)).reshape(-1,n_steps,1)
print(labels.shape)
#(100, 10, 1)

#18 numerical features
num_data = np.random.random(size=(num_obs*n_steps,n_numerical_feats))
print(num_data.shape)
#(1000, 18)
#Reshaping numeric features to fit into an LSTM network
features = num_data.reshape(-1,n_steps, n_numerical_feats)
print(features.shape)
#(100, 10, 18)

#one categorical variable with 12 levels
cat_data = np.random.randint(0,cat_size,num_obs*n_steps)
print(cat_data.shape)
#(1000,)
idx = cat_data.reshape(-1, n_steps)
print(idx.shape)
#(100, 10)

numerical_inputs = keras.layers.Input(shape=(n_steps, n_numerical_feats), name='numerical_inputs', dtype='float32')
#<tf.Tensor 'numerical_inputs:0' shape=(None, 10, 18) dtype=float32>

cat_input = keras.layers.Input(shape=(n_steps,), name='cat_input')
#<tf.Tensor 'cat_input:0' shape=(None, 10) dtype=float32>

cat_embedded = keras.layers.Embedding(cat_size, embedding_size, embeddings_initializer='uniform')(cat_input)
#<tf.Tensor 'embedding_1/Identity:0' shape=(None, 10, 1) dtype=float32>

merged = keras.layers.concatenate([numerical_inputs, cat_embedded])
#<tf.Tensor 'concatenate_1/Identity:0' shape=(None, 10, 19) dtype=float32>

lstm_out = keras.layers.LSTM(64, return_sequences=True)(merged)
#<tf.Tensor 'lstm_2/Identity:0' shape=(None, 10, 64) dtype=float32>

Dense_layer1 = keras.layers.Dense(32, activation='relu', use_bias=True)(lstm_out)
#<tf.Tensor 'dense_4/Identity:0' shape=(None, 10, 32) dtype=float32>
Dense_layer2 = keras.layers.Dense(1, activation='linear', use_bias=True)(Dense_layer1 )
#<tf.Tensor 'dense_5/Identity:0' shape=(None, 10, 1) dtype=float32>

model = keras.models.Model(inputs=[numerical_inputs, cat_input], outputs=Dense_layer2)

#compile model
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='mse',
              optimizer=optimizer,
              metrics=['mae', 'mse'])
EPOCHS =5

#fit the model
#you can use input layer names instead
history = model.fit([features, idx], 
                    y = labels,
                    epochs=EPOCHS,
                    batch_size=batch_size)

Has anyone had a similar problem? This appears to be a bug, but I don't know how to work around it, since I want to use TensorFlow 2.0.

2 answers:

Answer 0 (score: 2)

I found that tensorflow-gpu 2.0.0 was built against cuDNN 7.6.0.

So I updated cuDNN from 7.4.2 to 7.6.4, and the problem was solved.

  1. Update cuDNN to 7.6.2;
  2. Force-enable GPU memory growth (a minimal sketch follows this list).
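
For step 2, one way to force-enable GPU memory growth without touching the training code is the TF_FORCE_GPU_ALLOW_GROWTH environment variable. The sketch below is only an illustration under that assumption; the two print statements merely confirm that the build sees CUDA and the GPU after the cuDNN update.

import os
# Must be set before TensorFlow initializes the GPU.
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'

import tensorflow as tf
# Sanity checks: CUDA-enabled build and a visible GPU.
print(tf.test.is_built_with_cuda())
print(tf.config.experimental.list_physical_devices('GPU'))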

Answer 1 (score: 1)

I have run into a similar problem; these steps may help with your code on tf2.0:

  1. Check the GPU memory and make sure nothing else is running on it.
  2. Run the script below before importing Keras or TensorFlow: restart the runtime, then execute this script first.
import tensorflow as tf

# Enable memory growth on every visible GPU before anything else initializes them.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
  3. If possible, try reducing the model size and the batch size until training succeeds (a minimal example follows).
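
As a rough illustration of step 3, using the model, features, idx, labels, and EPOCHS from the question's code; the value 8 is only an example to tune for your GPU:

# A smaller batch size lowers peak GPU memory use during training.
history = model.fit([features, idx],
                    y=labels,
                    epochs=EPOCHS,
                    batch_size=8)  # reduced from 32 in the original code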