I am trying to run a wavelet-reconstructed dataset through a sequential Keras neural network. To get better results from training, I am trying to build a custom loss function that only focuses on certain parts of the waveform. My goal is to create a neural network that interpolates clipped waveforms, so I only want the network to compute the loss by comparing the clipped segments of the waveform against the actual output.
I have tried creating a wrapper for my custom loss function so that I can pass an additional input parameter. I then use this input parameter to find the indices of the clipped data points and attempt to gather those indices from y_pred and y_true.
This is where the model is instantiated and trained:
# imports assumed for this snippet (they are not shown in the original post)
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

# _activation, _loss and _optimizer are supplied by the calling function (see the traceback below)
x_train, x_test, y_train, y_test = train_test_split(X, Y, train_size=0.7)
_dim = len(x_train[0])

# define the keras model
model = Sequential()
# tanh activation allows for vals between -1 and 1 unlike relu
model.add(Dense(_dim*2, input_dim=_dim, activation=_activation))
model.add(Dense(_dim*2, activation=_activation))
model.add(Dense(_dim, activation=_activation))

# model.compile(loss=_loss, optimizer=_optimizer)
model.compile(loss=_loss, optimizer=_optimizer, metrics=[custom_loss_wrapper_2(x_train)])
print(model.summary())

# The patience parameter is the amount of epochs to check for improvement
early_stop = EarlyStopping(monitor='val_loss', patience=5)

# fit the model
history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=150, batch_size=15, callbacks=[early_stop])
This is where my custom loss function resides:
from keras import backend as K  # backend import assumed (not shown in the original post)

def custom_loss_wrapper_2(inputs):
    # source: https://stackoverflow.com/questions/55445712/custom-loss-function-in-keras-based-on-the-input-data
    # 2nd source: https://stackoverflow.com/questions/55597335/how-to-use-tf-gather-in-batch

    def reindex(tensor_tuple):
        # unpack tensor tuple
        y_true = tensor_tuple[0]
        y_pred = tensor_tuple[1]
        t_inputs = K.cast(tensor_tuple[2], dtype='int64')
        # indices where the input sits at its (clipped) maximum value
        t_max_indices = K.tf.where(K.tf.equal(t_inputs, K.max(t_inputs)))

        # gather the values from y_true and y_pred
        y_true_gathered = K.gather(y_true, t_max_indices)
        y_pred_gathered = K.gather(y_pred, t_max_indices)

        print(K.mean(K.square(y_true_gathered - y_pred_gathered)))

        return K.mean(K.square(y_true_gathered - y_pred_gathered))

    def custom_loss(y_true, y_pred):
        # Step 1: "tensorize" the previous list
        t_inputs = K.variable(inputs)

        # Step 2: stack the tensors and map the per-sample loss over them
        tensor_tuple = K.stack([y_true, y_pred, t_inputs], axis=1)
        vals = K.map_fn(reindex, tensor_tuple, dtype='float32')
        print('vals: ', vals)

        return K.mean(vals)

    return custom_loss
This is the error message I receive when I try to use the custom loss function:
Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W0722 15:28:20.239395 17232 deprecation_wrapper.py:119] From C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\backend\tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
W0722 15:28:20.252325 17232 deprecation_wrapper.py:119] From C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\backend\tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0722 15:28:20.253353 17232 deprecation_wrapper.py:119] From C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\backend\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
W0722 15:28:20.280281 17232 deprecation_wrapper.py:119] From C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
W0722 15:28:20.293246 17232 deprecation_wrapper.py:119] From C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\backend\tensorflow_backend.py:1521: The name tf.log is deprecated. Please use tf.math.log instead.
W0722 15:28:20.366046 17232 deprecation.py:323] From C:\Users\Madison\PycharmProjects\MSTS\Seismic_Analysis\ML\custom_loss.py:83: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Tensor("metrics/custom_loss/map/while/Mean:0", shape=(), dtype=float32)
vals: Tensor("metrics/custom_loss/map/TensorArrayStack/TensorArrayGatherV3:0", shape=(1228,), dtype=float32)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 1002) 503004
_________________________________________________________________
dense_2 (Dense) (None, 1002) 1005006
_________________________________________________________________
dense_3 (Dense) (None, 501) 502503
=================================================================
Total params: 2,010,513
Trainable params: 2,010,513
Non-trainable params: 0
_________________________________________________________________
None
W0722 15:28:20.467779 17232 deprecation_wrapper.py:119] From C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\backend\tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
Train on 1228 samples, validate on 527 samples
Epoch 1/150
2019-07-22 15:28:20.606792: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Traceback (most recent call last):
File "C:/Users/Madison/PycharmProjects/MSTS/Seismic_Analysis/ML/clipping_ml.py", line 172, in <module>
main()
File "C:/Users/Madison/PycharmProjects/MSTS/Seismic_Analysis/ML/clipping_ml.py", line 168, in main
run_general()
File "C:/Users/Madison/PycharmProjects/MSTS/Seismic_Analysis/ML/clipping_ml.py", line 156, in run_general
_loss=_loss, _activation=_activation, _optimizer=_optimizer)
File "C:/Users/Madison/PycharmProjects/MSTS/Seismic_Analysis/ML/clipping_ml.py", line 59, in build_clipping_model
history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=150, batch_size=15, callbacks=[early_stop])
File "C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\engine\training.py", line 1039, in fit
validation_steps=validation_steps)
File "C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\engine\training_arrays.py", line 199, in fit_loop
outs = f(ins_batch)
File "C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "C:\Users\Madison\PycharmProjects\MSTS\venv\lib\site-packages\tensorflow\python\client\session.py", line 1458, in __call__
run_metadata_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shapes of all inputs must match: values[0].shape = [15,501] != values[2].shape = [1228,501]
[[{{node metrics/custom_loss/stack}}]]
Answer 0 (score: 0)
Could you share a runnable example of the problem that fails, even one with just a few data points? Right now it looks like your data shapes are inconsistent, e.g. one wavelet is longer than another. Batches must be homogeneous. One way to check is:
print(set(inp.shape for inp in inputs))
If that set contains more than one element, you may need to augment your data.
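If the lengths do differ, a minimal sketch of one way to make them homogeneous is to zero-pad every waveform to the length of the longest one. This assumes X and Y are lists of 1-D NumPy arrays and that zero-padding is acceptable for this data; pad_to_max_length is a hypothetical helper, not something from the original post:

import numpy as np

def pad_to_max_length(waveforms):
    # zero-pad each 1-D waveform up to the length of the longest one
    max_len = max(len(w) for w in waveforms)
    return np.array([np.pad(w, (0, max_len - len(w)), mode='constant') for w in waveforms])

X = pad_to_max_length(X)
Y = pad_to_max_length(Y)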
Answer 1 (score: 0)
After some thought, I found the answer to my original question. I figured I would post it here in case it helps someone in the future. The problem had to do with the input parameter supplied to the loss function wrapper. I was passing in the entire input array when I should only have been passing in the batch input. This is done by passing model.input during the function call, so the new compile line should look like this:
model.compile(loss=_loss, optimizer=_optimizer, metrics=[custom_loss_wrapper_2(model.input)])
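For reference, below is a minimal sketch of how the wrapper might use the symbolic batch input directly. It assumes the clipped samples are the ones sitting at the input's maximum value and replaces the gather-based indexing with a masked mean, so it is not the asker's exact implementation:

from keras import backend as K

def custom_loss_wrapper_2(model_input):
    # model_input is the model's symbolic input tensor, so it shares the batch
    # dimension with y_true and y_pred instead of spanning the whole dataset
    def custom_loss(y_true, y_pred):
        # mask the positions where the input sits at its clipped (maximum) value
        clip_mask = K.cast(K.equal(model_input, K.max(model_input)), K.floatx())
        squared_error = K.square(y_true - y_pred)
        # average the squared error over the clipped positions only
        return K.sum(clip_mask * squared_error) / (K.sum(clip_mask) + K.epsilon())
    return custom_loss

Because model.input already carries the per-batch leading dimension, the K.variable / K.stack / K.map_fn machinery is no longer needed, which is what removes the [15,501] vs [1228,501] shape mismatch in the error above.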