Google Colab GPU not available (tensorflow and keras tf-models errors)

Date: 2021-05-20 20:22:12

Tags: python tensorflow keras google-colaboratory

A few days ago I built a BERT model for text classification using Google Colab Pro. Everything worked fine, but since yesterday I keep getting the output "GPU NOT AVAILABLE". I haven't changed anything, but I noticed that errors now occur when installing tensorflow_hub and keras tf-models. There were no errors before.

! python --version
!pip install tensorflow_hub
!pip install keras tf-models-official pydot graphviz

I get these messages:

ERROR: tensorflow 2.5.0 has requirement h5py~=3.1.0, but you'll have h5py 2.10.0 which is incompatible.

ERROR: tf-models-official 2.5.0 has requirement pyyaml>=5.1, but you'll have pyyaml 3.13 which is incompatible.
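These are pip's dependency-conflict warnings: the Colab runtime ships with an older h5py and pyyaml than tensorflow 2.5.0 and tf-models-official 2.5.0 declare. A minimal sketch (my addition, not part of the original post) to see which versions the running kernel has actually loaded, since a pip install in a cell does not replace modules that are already imported:

import h5py, yaml, tensorflow as tf

# Versions the current kernel actually imports; they may lag behind what pip just
# installed until the runtime is restarted.
print("h5py:", h5py.__version__)        # tensorflow 2.5.0 wants h5py~=3.1.0
print("pyyaml:", yaml.__version__)      # tf-models-official 2.5.0 wants pyyaml>=5.1
print("tensorflow:", tf.__version__)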

import os

import numpy as np
import pandas as pd

import tensorflow as tf
import tensorflow_hub as hub

from keras.utils import np_utils

import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization as tokenization

from official.modeling import tf_utils
from official import nlp
from official.nlp import bert

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

import matplotlib.pyplot as plt
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    print(e)

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")

Output:

Version:  2.5.0
Eager mode:  True
Hub version:  0.12.0
GPU is NOT AVAILABLE
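Before blaming the TensorFlow install, it is worth confirming that the Colab VM itself still has a GPU attached (Runtime -> Change runtime type -> Hardware accelerator -> GPU). A quick check, independent of the packages above (a sketch, not part of the original question):

# Does the VM see an NVIDIA GPU at all? If this errors out, the runtime simply
# has no GPU attached and no pip fix will help.
!nvidia-smi

import tensorflow as tf
# An empty string means TensorFlow cannot see any GPU device.
print(tf.test.gpu_device_name() or "no GPU device visible to TensorFlow")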

I would be grateful if anyone could help me.

P.S.: I have already tried updating h5py and PyYAML, but the GPU is still not available.

! pip install h5py==3.1.0
! pip install PyYAML==5.1.2
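One step that is easy to miss (an assumption on my part, not something the original post confirms): a reinstall only takes effect after the kernel restarts, because the old h5py/pyyaml modules stay loaded in memory. In Colab the restart can be done via Runtime -> Restart runtime, or forced from code:

import os

# Kill the current kernel process; Colab automatically restarts the runtime.
# After the restart, re-run the imports and the GPU check.
os.kill(os.getpid(), 9)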

1 Answer:

Answer 0: (score: 1)


ERROR: tf-models-official 2.5.0 has requirement pyyaml>=5.1, but you'll have pyyaml 3.13 which is incompatible.

I was able to resolve the above issue by upgrading pip before installing the tf-models-official package, as shown below:

!pip install --upgrade pip
!pip install keras tf-models-official pydot graphviz
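Upgrading pip first presumably helps because newer pip versions use the stricter 2020 dependency resolver, which pulls in pyyaml and h5py releases compatible with tensorflow 2.5.0 and tf-models-official 2.5.0. A small verification sketch (my addition, with the expected constraints taken from the error messages above):

import yaml, h5py

# tf-models-official 2.5.0 requires pyyaml>=5.1; tensorflow 2.5.0 requires h5py~=3.1.0.
print("pyyaml:", yaml.__version__)
print("h5py:", h5py.__version__)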

The working code is shown below:

import os

import numpy as np
import pandas as pd

import tensorflow as tf
import tensorflow_hub as hub

from keras.utils import np_utils

import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization as tokenization

from official.modeling import tf_utils
from official import nlp
from official.nlp import bert

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

import matplotlib.pyplot as plt
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    print(e)

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")

Output:

1 Physical GPUs, 1 Logical GPUs
Version:  2.5.0
Eager mode:  True
Hub version:  0.12.0
GPU is available