Installing Keras with Docker

Date: 2019-11-11 03:22:53

Tags: docker keras

ResourceExhaustedErrorTraceback (most recent call last)
<ipython-input-8-cb1025b61acf> in <module>()
----> 1 history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=30,validation_data=validation_generator, validation_steps=50)

/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.pyc in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/usr/local/lib/python2.7/dist-packages/keras/models.pyc in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1251                                         use_multiprocessing=use_multiprocessing,
   1252                                         shuffle=shuffle,
-> 1253                                         initial_epoch=initial_epoch)
   1254 
   1255     @interfaces.legacy_generator_methods_support

/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.pyc in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/usr/local/lib/python2.7/dist-packages/keras/engine/training.pyc in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   2242                     outs = self.train_on_batch(x, y,
   2243                                                sample_weight=sample_weight,
-> 2244                                                class_weight=class_weight)
   2245 
   2246                     if not isinstance(outs, list):

/usr/local/lib/python2.7/dist-packages/keras/engine/training.pyc in train_on_batch(self, x, y, sample_weight, class_weight)
   1888             ins = x + y + sample_weights
   1889         self._make_train_function()
-> 1890         outputs = self.train_function(ins)
   1891         if len(outputs) == 1:
   1892             return outputs[0]

/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.pyc in __call__(self, inputs)
   2473         session = get_session()
   2474         updated = session.run(fetches=fetches, feed_dict=feed_dict,
-> 2475                               **self.session_kwargs)
   2476         return updated[:len(self.outputs)]
   2477 

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
    893     try:
    894       result = self._run(None, fetches, feed_dict, options_ptr,
--> 895                          run_metadata_ptr)
    896       if run_metadata:
    897         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1126     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1127       results = self._do_run(handle, final_targets, final_fetches,
-> 1128                              feed_dict_tensor, options, run_metadata)
   1129     else:
   1130       results = []

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1342     if handle is None:
   1343       return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1344                            options, run_metadata)
   1345     else:
   1346       return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
   1361         except KeyError:
   1362           pass
-> 1363       raise type(e)(node_def, op, message)
   1364 
   1365   def _extend_graph(self):

ResourceExhaustedError: OOM when allocating tensor with shape[6272,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: training/RMSprop/mul_24 = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](RMSprop/rho/read, training/RMSprop/Variable_8/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[Node: metrics/acc/Mean_1/_109 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_774_metrics/acc/Mean_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op u'training/RMSprop/mul_24', defined at:
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelapp.py", line 486, in start
    self.io_loop.start()
  File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 888, in start
    handler_func(fd_obj, events)
  File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events
    self._handle_recv()
  File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
    self._run_callback(callback, msg)
  File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback
    callback(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
    handler(stream, idents, msg)
  File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
    user_expressions, allow_stdin)
  File "/usr/local/lib/python2.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/usr/local/lib/python2.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
    if self.run_code(code, result):
  File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-8-cb1025b61acf>", line 1, in <module>
    history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=30,validation_data=validation_generator, validation_steps=50)
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 1253, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 2088, in fit_generator
    self._make_train_function()
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 990, in _make_train_function
    loss=self.total_loss)
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 251, in get_updates
    new_a = self.rho * a + (1. - self.rho) * K.square(g)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 775, in _run_op
    return getattr(ops.Tensor, operator)(a._AsTensor(), *args)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 907, in binary_op_wrapper
    return func(x, y, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1131, in _mul_dispatch
    return gen_math_ops._mul(x, y, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 2798, in _mul
    "Mul", x=x, y=y, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[6272,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: training/RMSprop/mul_24 = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](RMSprop/rho/read, training/RMSprop/Variable_8/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[Node: metrics/acc/Mean_1/_109 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_774_metrics/acc/Mean_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

I am trying to install Keras using Docker so I can do deep learning on my GPU. I'm looking at the Keras GitHub (https://github.com/keras-team/keras/tree/master/docker) and I don't know what to do. I'm new to Docker; every Docker image I've obtained so far was fetched with the `docker pull` command, but I don't see a `docker pull` command for getting Keras, and I don't understand the `make` instructions provided.

I have been trying to get Keras running on my Linux machine. Initially I tried installing CUDA, TensorFlow, and everything else directly on my computer, but I ran into so many version-compatibility problems that I gave up and have been trying to simplify things with Docker, which hasn't been easy either. I have tried multiple Docker images, including ermaker/keras-jupyter and gw000/keras-full, but couldn't get either of them to work.

With gw000/keras-full, I tried running the simple cat-classifier neural network from the Keras deep learning book, but got an error saying memory was completely full. I don't know why I get that error; it's a simple classifier that I can run on my old laptop, yet for some reason it blows up on my RTX 2080 Ti.

Any help getting a working version of Keras through Docker would be greatly appreciated.

Here is what I am doing with gw000/keras-full. I use this to start Docker with the GPU:

docker run -d $(ls /dev/nvidia* | xargs -I{} echo '--device={}') $(ls /usr/lib/*-linux-gnu/{libcuda,libnvidia}* | xargs -I{} echo '-v {}:{}:ro') -p 8888:8888 -v /home/name/Desktop:/srv gw000/keras-full

This happens in the first epoch when I try to run model training. I can see in the error that it is running Python 2, which might be a problem since the code may have been written for Python 3, but I don't know whether that is the issue or how to switch to Python 3. As mentioned, the code comes straight from the Keras deep learning book and works perfectly on my old laptop. I cannot for the life of me figure out why nothing runs on my PC.

Epoch 1/30

*SEE THE ATTACHED CODE SNIP FOR THE ERROR I GET*

1 Answer:

Answer 0 (score: 0)

  

> But I don't see a `docker pull` command for getting Keras, and I don't understand the `make` instructions provided.

The `make` targets run all the bash commands needed to build the Docker image. The image is built from the configuration in the Dockerfile. Once built, the image is stored on your local machine, so there is nothing to pull.

You can choose how you want to use Keras. For example, if you want to run Keras interactively, run `make bash`. This builds the Docker image and starts a container from it, and you can use Keras from the new command-line prompt. If you also want to use the GPU (assuming you have installed the NVIDIA drivers successfully), run `make bash GPU=0` instead. This runs the `nvidia-docker` command for you, so the resulting container has GPU support.
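As a rough sketch, the workflow described above looks like the following. This assumes you have cloned the keras repository and have `nvidia-docker` installed; the `notebook` target is one of the other targets present in that Makefile:

```shell
# Hypothetical sketch, assuming the keras repo is cloned and
# NVIDIA drivers + nvidia-docker are already installed.
git clone https://github.com/keras-team/keras.git
cd keras/docker

# CPU-only: builds the image and drops you into an interactive bash shell
make bash

# GPU-enabled shell (wraps nvidia-docker); GPU=0 selects the first GPU
make bash GPU=0

# The Makefile also has other targets, e.g. a Jupyter notebook server
make notebook GPU=0
```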

  

> I have tried multiple Docker images, including ermaker/keras-jupyter and gw000/keras-full, and couldn't get them to work either.

The images ermaker/keras-jupyter and gw000/keras-full are pre-built Docker images. However, ermaker/keras-jupyter does not ship with the GPU version of Keras. Your program looks memory-intensive, and without GPU support it will show a memory error. If you have installed the drivers correctly, a good alternative is to use a Python virtual environment.
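A minimal sketch of the virtual-environment route (the package names are assumptions for the era of this post, when GPU support came from the separate `tensorflow-gpu` package, which must match your installed CUDA/cuDNN versions):

```shell
# Hypothetical sketch: installing Keras in a virtualenv instead of Docker.
# Assumes NVIDIA drivers and a CUDA/cuDNN combination compatible with the
# chosen tensorflow-gpu release are already installed.
python3 -m venv ~/keras-env
source ~/keras-env/bin/activate
pip install --upgrade pip
pip install tensorflow-gpu keras

# Quick check that TensorFlow can see the GPU
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
```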

However, if you still hit the memory error even after running a Keras Docker container with GPU support, try reducing the training batch size.
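The batch-size advice can be made concrete with a small helper. The sample counts below are assumptions based on the book's cats-vs-dogs example (roughly 2000 training images with a batch size of 20, which would match the `steps_per_epoch=100` in the question):

```python
# Sketch: when reducing the batch size to avoid OOM, scale steps_per_epoch
# so each epoch still covers the same number of samples.

def steps_for(num_samples, batch_size):
    """Steps per epoch needed to see num_samples once (ceiling division)."""
    return -(-num_samples // batch_size)

# Assumed book setup: 2000 training images, batch size 20 -> 100 steps
print(steps_for(2000, 20))   # 100
# Halving the batch size halves peak per-batch GPU memory but doubles steps
print(steps_for(2000, 10))   # 200
```

You would then pass the smaller batch size to your data generators and the scaled value to `steps_per_epoch` (and do the same for `validation_steps`).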