I updated TensorFlow to 2.2.0 and, accordingly:
nvcc - 11.0
cudnn - 11.0
GPU - GTX 1050 Ti
When I run the following code
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
I get the following output:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 12436950237915670665
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 11900640710651469327
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 6061376473165052950
physical_device_desc: "device: XLA_GPU device"
]
The GTX 1050 Ti does not appear anywhere in this list. A GPU is mentioned as device 0 (only as XLA_GPU:0), which might just be the Intel integrated GPU.
Which versions of TensorFlow and CUDA are compatible with the 1050 Ti?
Update
I tried the following command
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
and the result was empty. Does that mean the GPU is not detected?
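For reference, a slightly fuller check (a minimal sketch, assuming a standard TensorFlow 2.x install imported as tf; not part of the original question) that also reports whether the installed build was compiled with CUDA at all:

import tensorflow as tf

# A CPU-only wheel will never list a GPU regardless of driver setup,
# so first confirm the build itself has CUDA support.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# An empty list here means the CUDA/cuDNN runtime was not found or is
# incompatible with this TensorFlow build.
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))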
Answer 0 (score: 0)
This link provides information on the compatible versions of CUDA, TensorFlow, and cuDNN needed for the GPU to be detected. With TensorFlow 2.2.0, cuDNN 7.4, CUDA 10.1, and Python 3.8, I was able to get the following output:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 4549764507052008926
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 5130440468361087955
physical_device_desc: "device: XLA_CPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 3136264601
locality {
bus_id: 1
links {
}
}
incarnation: 8742529146709444949
physical_device_desc: "device: 0, name: GeForce GTX 1050 Ti, pci bus id:
0000:01:00.0, compute capability: 6.1"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 12774508348529661585
physical_device_desc: "device: XLA_GPU device"
]
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
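As a further check once the compatible versions are installed, the sketch below (assuming the same TensorFlow 2.x setup; not part of the original answer) forces a small matrix multiplication onto the GPU and logs device placement, so it fails loudly if the GPU is still not visible:

import tensorflow as tf

# Print the device each op runs on, so GPU placement shows up in the log.
tf.debugging.set_log_device_placement(True)

# Explicitly place a small matmul on the first GPU; this raises an error
# if TensorFlow cannot see any GPU device.
with tf.device('/GPU:0'):
    a = tf.random.uniform((1000, 1000))
    b = tf.random.uniform((1000, 1000))
    c = tf.matmul(a, b)

print(c.shape)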