Keras BatchNormalization uninitialized value

Asked: 2017-01-24 01:41:19

Tags: python tensorflow keras

I am trying to add batch normalization to a VGG-style model in Keras. When I add the BatchNormalization layers I get this error:

FailedPreconditionError: Attempting to use uninitialized value batchnormalization_1_running_mean/biased 

Without the BatchNormalization layers the script runs without errors; the error is thrown only when I add them.

# Imports needed for the model below (Keras 1.x API: Convolution2D, dim_ordering)
from keras.models import Sequential
from keras.layers import (ZeroPadding2D, Convolution2D, BatchNormalization,
                          Activation, MaxPooling2D, Flatten, Dense, Dropout)
from keras.optimizers import SGD

model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=(1, conf['image_shape'][0], conf['image_shape'][1]), dim_ordering=conf['dim_ordering']))
model.add(Convolution2D(conf['level_1_filters'], 3, 3, dim_ordering=conf['dim_ordering']))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(ZeroPadding2D((1, 1), dim_ordering=conf['dim_ordering']))
model.add(Convolution2D(conf['level_1_filters'], 3, 3, dim_ordering=conf['dim_ordering']))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), dim_ordering=conf['dim_ordering']))

model.add(ZeroPadding2D((1, 1), dim_ordering=conf['dim_ordering']))
model.add(Convolution2D(conf['level_2_filters'], 3, 3, dim_ordering=conf['dim_ordering']))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(ZeroPadding2D((1, 1), dim_ordering=conf['dim_ordering']))
model.add(Convolution2D(conf['level_2_filters'], 3, 3, dim_ordering=conf['dim_ordering']))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), dim_ordering=conf['dim_ordering']))

model.add(Flatten())
model.add(Dense(conf['dense_layer_size']))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(conf['dropout_value']))
model.add(Dense(conf['dense_layer_size']))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(conf['dropout_value']))

model.add(Dense(2, activation='softmax'))

# sgd = SGD(lr=conf['learning_rate'], decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Is this the correct syntax for using batch normalization in Keras? I followed the example in this thread. The full output is below:

Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
Train patients: 699
Valid patients: 698
Create and compile model...
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:03:00.0
Total memory: 7.92GiB
Free memory: 7.07GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:03:00.0)
Number of train files: 123111
Number of valid files: 125469
Fit model...
Samples train: 5000, Samples valid: 5000
Epoch 1/40
W tensorflow/core/framework/op_kernel.cc:975] Failed precondition: Attempting to use uninitialized value batchnormalization_1_running_mean/biased
     [[Node: batchnormalization_1_running_mean/biased/read = Identity[T=DT_FLOAT, _class=["loc:@batchnormalization_1_running_mean"], _device="/job:localhost/replica:0/task:0/gpu:0"](batchnormalization_1_running_mean/biased)]]
W tensorflow/core/framework/op_kernel.cc:975] Failed precondition: Attempting to use uninitialized value batchnormalization_1_running_mean/biased
     [[Node: batchnormalization_1_running_mean/biased/read = Identity[T=DT_FLOAT, _class=["loc:@batchnormalization_1_running_mean"], _device="/job:localhost/replica:0/task:0/gpu:0"](batchnormalization_1_running_mean/biased)]]
W tensorflow/core/framework/op_kernel.cc:975] Failed precondition: Attempting to use uninitialized value batchnormalization_1_running_mean/biased
     [[Node: batchnormalization_1_running_mean/biased/read = Identity[T=DT_FLOAT, _class=["loc:@batchnormalization_1_running_mean"], _device="/job:localhost/replica:0/task:0/gpu:0"](batchnormalization_1_running_mean/biased)]]
W tensorflow/core/framework/op_kernel.cc:975] Failed precondition: Attempting to use uninitialized value batchnormalization_1_running_mean/biased
     [[Node: batchnormalization_1_running_mean/biased/read = Identity[T=DT_FLOAT, _class=["loc:@batchnormalization_1_running_mean"], _device="/job:localhost/replica:0/task:0/gpu:0"](batchnormalization_1_running_mean/biased)]]
W tensorflow/core/framework/op_kernel.cc:975] Failed precondition: Attempting to use uninitialized value batchnormalization_1_running_mean/biased
     [[Node: batchnormalization_1_running_mean/biased/read = Identity[T=DT_FLOAT, _class=["loc:@batchnormalization_1_running_mean"], _device="/job:localhost/replica:0/task:0/gpu:0"](batchnormalization_1_running_mean/biased)]]
W tensorflow/core/framework/op_kernel.cc:975] Failed precondition: Attempting to use uninitialized value batchnormalization_1_running_mean/biased
     [[Node: batchnormalization_1_running_mean/biased/read = Identity[T=DT_FLOAT, _class=["loc:@batchnormalization_1_running_mean"], _device="/job:localhost/replica:0/task:0/gpu:0"](batchnormalization_1_running_mean/biased)]]
Traceback (most recent call last):
  File "keras-v2.py", line 197, in <module>
    model = create_single_model()
  File "keras-v2.py", line 173, in create_single_model
    callbacks=callbacks)
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 882, in fit_generator
    pickle_safe=pickle_safe)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1461, in fit_generator
    class_weight=class_weight)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1239, in train_on_batch
    outputs = self.train_function(ins)
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 1040, in __call__
    updated = session.run(self.outputs + [self.updates_op], feed_dict=feed_dict)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 964, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1014, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1034, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value batchnormalization_1_running_mean/biased
     [[Node: batchnormalization_1_running_mean/biased/read = Identity[T=DT_FLOAT, _class=["loc:@batchnormalization_1_running_mean"], _device="/job:localhost/replica:0/task:0/gpu:0"](batchnormalization_1_running_mean/biased)]]
     [[Node: Mean_3/_49 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_2152_Mean_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op u'batchnormalization_1_running_mean/biased/read', defined at:
  File "keras-v2.py", line 197, in <module>
    model = create_single_model()
  File "keras-v2.py", line 145, in create_single_model
    model = get_custom_CNN()
  File "keras-v2.py", line 111, in get_custom_CNN
    model.add(BatchNormalization(axis=-1))
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 312, in add
    output_tensor = layer(self.outputs[0])
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 514, in __call__
    self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 572, in add_inbound_node
    Node.create_node(self, inbound_layers, node_indices, tensor_indices)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 149, in create_node
    output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
  File "/usr/local/lib/python2.7/dist-packages/keras/layers/normalization.py", line 140, in call
    self.updates = [K.moving_average_update(self.running_mean, mean, self.momentum),
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 329, in moving_average_update
    variable, value, momentum)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/moving_averages.py", line 70, in assign_moving_average
    update_delta = _zero_debias(variable, value, decay)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/moving_averages.py", line 177, in _zero_debias
    trainable=False)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 1024, in get_variable
    custom_getter=custom_getter)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 850, in get_variable
    custom_getter=custom_getter)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 346, in get_variable
    validate_shape=validate_shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 331, in _true_getter
    caching_device=caching_device, validate_shape=validate_shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 677, in _get_single_variable
    expected_shape=shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 224, in __init__
    expected_shape=expected_shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 370, in _init_from_args
    self._snapshot = array_ops.identity(self._variable, name="read")
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 1424, in identity
    result = _op_def_lib.apply_op("Identity", input=input, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2240, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1128, in __init__
    self._traceback = _extract_stack()

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value batchnormalization_1_running_mean/biased
     [[Node: batchnormalization_1_running_mean/biased/read = Identity[T=DT_FLOAT, _class=["loc:@batchnormalization_1_running_mean"], _device="/job:localhost/replica:0/task:0/gpu:0"](batchnormalization_1_running_mean/biased)]]
     [[Node: Mean_3/_49 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_2152_Mean_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 433, in data_generator_task
    generator_output = next(generator)
  File "keras-v2.py", line 71, in batch_generator_train
    image = load_and_normalize_dicom(f, conf['image_shape'][0], conf['image_shape'][1])
  File "keras-v2.py", line 58, in load_and_normalize_dicom
    dicom_img = cv2.resize(dicom_img, (x, y), interpolation=cv2.INTER_CUBIC)
AttributeError: 'NoneType' object has no attribute 'resize'

3 Answers:

Answer 0 (score: 4)

Try keras.backend.get_session().run(tf.global_variables_initializer()) before fitting. There is a related issue here.
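
A minimal sketch of this workaround, assuming the TensorFlow backend and a TF version that provides tf.global_variables_initializer (call it once, after building/compiling the model and before fit()/fit_generator()):

import tensorflow as tf
from keras import backend as K

# Grab the session Keras is using and initialize all TF variables,
# including the zero-debias variables created for BatchNormalization's
# running mean/variance, before any training step runs.
K.get_session().run(tf.global_variables_initializer())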

Answer 1 (score: 0)

Try keras.backend.get_session().run(tf.local_variables_initializer()). For me the global initializer did not work, but the local one did. This may no longer be an issue with recent TF/Keras versions, though.
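
The same idea with the local-variable initializer, again assuming the TensorFlow backend:

import tensorflow as tf
from keras import backend as K

# Initialize only TensorFlow's local (per-session, non-saved) variables
# in the session Keras is using.
K.get_session().run(tf.local_variables_initializer())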

Answer 2 (score: 0)

If your input images use the "channels_last" data format, i.e. input_shape is Image_Height x Image_Width x Image_Channel, then try BatchNormalization(axis=3).
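
A small hypothetical example of the layout this answer assumes (channels-last input, written in the Keras 1.x API used in the question, so the channel axis is axis 3):

from keras.models import Sequential
from keras.layers import Convolution2D, BatchNormalization, Activation

model = Sequential()
# Hypothetical channels_last input: 64 x 64 RGB images (height, width, channels)
model.add(Convolution2D(32, 3, 3, input_shape=(64, 64, 3), dim_ordering='tf'))
model.add(BatchNormalization(axis=3))  # normalize over the channel (last) axis
model.add(Activation('relu'))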