Distributed training with TensorFlow 2 is not working properly

Time: 2019-06-05 08:45:25

Tags: tensorflow python-3.6 distributed-tensorflow

I am trying to get distributed TF working with TensorFlow version 2.0.0a (CPU version) in VS Code.

I am using a Windows and a Linux system (two different computers), and each of them runs fine on its own.

For distributed TF I followed this tutorial: https://www.tensorflow.org/alpha/guide/distribute_strategy

I have already tried other ports and turned off the firewall. I also tried switching the chief machine from Windows to Linux, but by now I suspect it is either a problem in the code or an issue with this TF version, which is still marked as experimental.

from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow_datasets as tfds    
import tensorflow as tf    
import json    
import os

BUFFER_SIZE = 10000    
BATCH_SIZE = 64

def scale(image, label):
  image = tf.cast(image, tf.float32)
  image /= 255
  return image, label


datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)

train_datasets_unbatched = datasets['train'].map(scale).shuffle(BUFFER_SIZE)

train_datasets = train_datasets_unbatched.batch(BATCH_SIZE)

def build_and_compile_cnn_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),    
      tf.keras.layers.MaxPooling2D(),    
      tf.keras.layers.Flatten(),    
      tf.keras.layers.Dense(64, activation='relu'),    
      tf.keras.layers.Dense(10, activation='softmax')    
  ])

  model.compile(    
      loss=tf.keras.losses.sparse_categorical_crossentropy,    
      optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),    
      metrics=['accuracy'])

  return model


#multiworker conf:

os.environ['TF_CONFIG'] = json.dumps({    
    'cluster': {    
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]    
    },    
    'task': {'type': 'worker', 'index': 0}    
})

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
NUM_WORKERS = 2

GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS

#--------------------------------------------------------------------

#In the following line the error occurs

train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE)

#--------------------------------------------------------------------


with strategy.scope():    
    multi_worker_model = build_and_compile_cnn_model()  
    multi_worker_model.fit(x=train_datasets, epochs=3)

I would expect the workers to start the training process, but instead I get this error message:

"F tensorflow/core/framework/device_base.cc:33] Device does not implement name()"

1 Answer:

Answer 0 (score: 0)

As far as I know, each worker should have a unique task index, for example:

On the first machine you should have:

os.environ['TF_CONFIG'] = json.dumps({    
    'cluster': {    
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]    
    },    
    'task': {'type': 'worker', 'index': 0}    
})

And on the second one:

os.environ['TF_CONFIG'] = json.dumps({    
    'cluster': {    
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]    
    },    
    'task': {'type': 'worker', 'index': 1}    
})
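
If you prefer not to edit the script separately on each machine, one option is to keep a single copy of the code and pass the index in from outside. Below is a minimal sketch, assuming the worker index is supplied as a command-line argument (the --task_index flag name is my own choice, not something from the question or the tutorial):

import argparse
import json
import os

# Read the per-machine worker index from the command line, e.g.
# run `python train.py --task_index 0` on the first machine and
# `python train.py --task_index 1` on the second.
parser = argparse.ArgumentParser()
parser.add_argument('--task_index', type=int, required=True)
args = parser.parse_args()

# The cluster definition is identical on every machine; only the task index differs.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]
    },
    'task': {'type': 'worker', 'index': args.task_index}
})

Also note that TF_CONFIG has to be set before tf.distribute.experimental.MultiWorkerMirroredStrategy() is instantiated, otherwise the strategy will not pick up the cluster configuration.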