Error after importing layer weights from CSV into a Keras model

Date: 2018-12-27 20:19:16

Tags: python tensorflow machine-learning keras

Setting the weights with data from CSV files works without errors, but evaluating the model with the loaded weights in Keras fails.

I know I could save the weights as an .h5 file, but I am restricted to using CSV files.
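
For reference, this is the kind of CSV round-trip I mean. A minimal sketch, assuming each weight array is flattened to at most 2D for np.savetxt (the file name and shape below are just placeholders):

import numpy as np

original = np.random.rand(512, 64).astype('float32')          # e.g. the Embedding matrix
np.savetxt('embedding_weights.csv', original, delimiter=',')   # export to CSV

# later, in the other notebook: load and reshape back to the layer's shape
restored = np.loadtxt('embedding_weights.csv', delimiter=',').astype('float32')
restored = restored.reshape(512, 64)
assert restored.shape == original.shape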

I tried calling:
     keras.backend.clear_session()
     keras.backend.set_learning_phase(0)

after setting the weights, but before evaluating the model.

Clearing the session seemed to help a bit; the error message changed after that.

Model architecture

import tensorflow as tf  
from tensorflow import keras  
import numpy as np  
from keras import models  
from keras import layers  
from keras import regularizers  

model = models.Sequential()  
model.add(layers.Embedding(512, 64))  
model.add(layers.Conv1D(64, 5, activation='elu'))  

model.add(layers.MaxPooling1D(3))  
model.add(layers.Conv1D(32, 3, activation='elu'))  
model.add(layers.Bidirectional(layers.LSTM(32, kernel_regularizer=regularizers.l2(0.001)), input_shape=(None, 32)))  
model.add(layers.Dense(3, activation='softmax'))  

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])  

model.summary()  
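
For comparison, the weight shapes each layer expects can be read back from the freshly built model (standard get_weights(), listed in the same order as model.layers):

# print the weight shapes each layer expects, in model.layers order
for layer in model.layers:
    print(layer.name, [w.shape for w in layer.get_weights()])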

Loading the weights

After importing the CSV files and reshaping the data into numpy arrays, the shapes match those obtained after training the model (in another notebook):
L1: numpy array, shape (1, 512, 64)
L2: numpy arrays, shapes ((5, 64, 64), (64,))
L3: numpy array, shape (0,)
L4: numpy arrays, shapes ((3, 64, 32), (32,))
L5: numpy arrays, shapes ((32, 128), (32, 128), (128,), (32, 128), (32, 128), (128,))
L6: numpy arrays imported from the CSV files, shapes ((64, 3), (3,))

W = [L1, L2, L3, L4, L5, L6]  

x = 0  
for layer in model.layers:  
    layer.set_weights(W[x])  
    x += 1  
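
A variant of the same loop that checks each shape against what the layer expects might catch a mismatch at load time instead of at evaluation time. A sketch, assuming each element of W is an iterable of arrays in the layer's own order:

for layer, weights in zip(model.layers, W):
    expected = [w.shape for w in layer.get_weights()]
    provided = [np.asarray(w).shape for w in weights]
    # fail loudly on a shape mismatch instead of at evaluation time
    assert expected == provided, (layer.name, expected, provided)
    layer.set_weights(weights)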

Running the model

Up to this point the code runs without errors.
Now I load the same data that was used to train the model, to check whether the weights were loaded correctly:
labels2: vectorized labels (one-hot encoded), shape (85261, 3)
features: numpy array, shape (85261, 80)

keras.backend.clear_session()  
keras.backend.set_learning_phase(0)  

scores = model.evaluate(features, labels2, batch_size=512, verbose=1, steps=None)  

I expected to get the same results as when the original model was trained, since the data (features, labels) is identical. But neither the evaluate nor the predict command will run.

I get the following error:

InvalidArgumentErrorTraceback (most recent call last)
<ipython-input-6-bb913425426a> in <module>()
      1 
----> 2 scores = model.evaluate(features, labels2, batch_size=512, verbose=1, steps=None)
      3 print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

/usr/local/lib/python2.7/dist-packages/keras/engine/training.pyc in evaluate(self, x, y, batch_size, verbose, sample_weight, steps)
   1111                                          batch_size=batch_size,
   1112                                          verbose=verbose,
-> 1113                                          steps=steps)
   1114 
   1115     def predict(self, x,

/usr/local/lib/python2.7/dist-packages/keras/engine/training_arrays.pyc in test_loop(model, f, ins, batch_size, verbose, steps)
    390                 ins_batch[i] = ins_batch[i].toarray()
    391 
--> 392             batch_outs = f(ins_batch)
    393             if isinstance(batch_outs, list):
    394                 if batch_index == 0:

/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.pyc in __call__(self, inputs)
   2713                 return self._legacy_call(inputs)
   2714 
-> 2715             return self._call(inputs)
   2716         else:
   2717             if py_any(is_tensor(x) for x in inputs):

/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.pyc in _call(self, inputs)
   2673             fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
   2674         else:
-> 2675             fetched = self._callable_fn(*array_vals)
   2676         return fetched[:len(self.outputs)]
   2677 

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in __call__(self, *args, **kwargs)
   1437           ret = tf_session.TF_SessionRunCallable(
   1438               self._session._session, self._handle, args, status,
-> 1439               run_metadata_ptr)
   1440         if run_metadata:
   1441           proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.pyc in __exit__(self, type_arg, value_arg, traceback_arg)
    526             None, None,
    527             compat.as_text(c_api.TF_Message(self.status.status)),
--> 528             c_api.TF_GetCode(self.status.status))
    529     # Delete the underlying status object from memory otherwise it stays alive
    530     # as there is a reference to status from this from the traceback due to

InvalidArgumentError: indices[192,4] = -1 is not in [0, 512)
     [[{{node embedding_1/embedding_lookup}} = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embeddings/read, embedding_1/Cast, embedding_1/embedding_lookup/axis)]]  

If I add keras.backend.clear_session() before evaluating the model, I get the following error message instead:

ValueError: Tensor Tensor("loss/add_1:0", shape=(), dtype=float32) is not an element of this graph.  
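
For reference, the first error reports an index of -1 reaching the Embedding layer, whose valid range here is [0, 512), so one quick sanity check on the inputs before evaluating would be:

# check that every token index the Embedding layer will see is in [0, 512)
print(features.min(), features.max())
print(np.argwhere(features < 0)[:5])   # positions of any negative indices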

0 Answers