How to evaluate data with a trained TensorFlow NN on a separate thread

Time: 2019-06-26 17:54:55

Tags: python multithreading tensorflow keras python-multithreading

I have been trying to build an NN that evaluates 2-second chunks of data streamed from a Raspberry Pi. The NN is trained and the streaming is under control, but to reduce latency we want to use threads so that the 2-second chunks are evaluated continuously.

I have been using Python's threading package, following the example here: https://realpython.com/intro-to-python-threading/.

Here is the model:

import logging
import os
import threading

import librosa
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers
from tensorflow.keras.layers import Input

drop_out_rate = 0.1
learning_rate = 0.001
number_of_epochs = 100
number_of_classes = 2
batch_size = 32
optimizer = optimizers.Adam(learning_rate, learning_rate / 100)
input_tensor = Input(shape=input_shape)  # input_shape and the custom auc metric are defined elsewhere in our script
metrics = [auc, "accuracy"]

x = layers.Conv1D(16, 9, activation="relu", padding="same")(input_tensor)
x = layers.Conv1D(16, 9, activation="relu", padding="same")(x)
x = layers.MaxPool1D(16)(x)
x = layers.Dropout(rate=drop_out_rate)(x)

x = layers.Conv1D(32, 3, activation="relu", padding="same")(x)
x = layers.Conv1D(32, 3, activation="relu", padding="same")(x)
x = layers.MaxPool1D(4)(x)
x = layers.Dropout(rate=drop_out_rate)(x)

x = layers.Conv1D(32, 3, activation="relu", padding="same")(x)
x = layers.Conv1D(32, 3, activation="relu", padding="same")(x)
x = layers.MaxPool1D(4)(x)
x = layers.Dropout(rate=drop_out_rate)(x)

x = layers.Conv1D(256, 3, activation="relu", padding="same")(x)
x = layers.Conv1D(256, 3, activation="relu", padding="same")(x)
x = layers.GlobalMaxPool1D()(x)
x = layers.Dropout(rate=(drop_out_rate * 2))(x) # Increasing drop-out rate here to prevent overfitting

x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(1028, activation="relu")(x)
output_tensor = layers.Dense(number_of_classes, activation="softmax")(x)

model = tf.keras.Model(input_tensor, output_tensor)
model.compile(optimizer=optimizer, loss=keras.losses.binary_crossentropy, metrics=metrics)
model.load_weights(os.getcwd()+"/gunshot_detection/raspberry_pi/models/gunshot_sound_model.h5")
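
For reference, a workaround that is often suggested for multi-threaded inference with graph-mode (TF 1.x) Keras is to finish building the predict function and capture the default graph and session in the main thread, right after load_weights, so worker threads can re-enter them later. A minimal sketch of what we have been experimenting with (note that _make_predict_function is an internal Keras method):

# Sketch only: run in the main thread, immediately after model.load_weights(...)
model._make_predict_function()            # internal Keras call: build the predict graph now, not lazily inside a thread
graph = tf.get_default_graph()            # the graph the model's tensors belong to
session = tf.keras.backend.get_session()  # the session Keras uses for that graph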

Here is the thread function:

def thread_function(microphone_data, name):
    tf.keras.backend.clear_session()
    logging.info("Thread %s: starting", name)
    reformed_microphone_data = librosa.util.normalize(microphone_data)
    reformed_microphone_data = reformed_microphone_data.reshape(-1, audio_rate, 1)
    print(reformed_microphone_data.shape, "this")
    # Passes a given audio sample into the model for prediction
    probabilities = model.predict(reformed_microphone_data)
    print(probabilities)
    logging.info("Probability of %s: %s", name, str(probabilities))
    logging.info("Thread %s: finishing", name)

And this is how the threads are started:

format = "%(asctime)s: %(message)s"
logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")


np_arrays = np.load(path)

threads = list()
for index in range(5):
    logging.info("Main    : create and start thread %d.", index)
    a = np_arrays[index].reshape(input_shape)
    x = threading.Thread(target=thread_function, args=(a, index))
    threads.append(x)
    x.start()

for index, thread in enumerate(threads):
    logging.info("Main    : before joining thread %d.", index)
    thread.join()
    logging.info("Main    : thread %d done", index)

When we do this, we get the following error from the model's predict call:

Exception in thread Thread-101:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "<ipython-input-138-e8e46f092078>", line 11, in thread_function
    probabilities = model.predict(reformed_microphone_data)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1113, in predict
    self, x, batch_size=batch_size, verbose=verbose, steps=steps)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 195, in model_iteration
    f = _make_execution_function(model, mode)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 122, in _make_execution_function
    return model._make_execution_function(mode)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1989, in _make_execution_function
    self._make_predict_function()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1979, in _make_predict_function
    **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3201, in function
    return GraphExecutionFunction(inputs, outputs, updates=updates, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 2939, in __init__
    with ops.control_dependencies(self.outputs):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 5028, in control_dependencies
    return get_default_graph().control_dependencies(control_inputs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 4528, in control_dependencies
    c = self.as_graph_element(c)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3478, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3557, in _as_graph_element_locked
    raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("dense_17/Softmax:0", shape=(?, 2), dtype=float32) is not an element of this graph.

Is this the reason it fails? Is there a way to keep streaming the audio while it is being evaluated at the same time? If you have a different solution, please let me know.
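
For completeness, the direction we had in mind for streaming while evaluating is a producer/consumer setup: the audio callback keeps pushing 2-second chunks onto a queue, and a single long-lived worker thread pulls them off and runs predict, so only one thread ever touches the model. A rough sketch (enqueue_chunk is a hypothetical hook that our streaming code would call for every 2-second block):

import queue

chunk_queue = queue.Queue()  # filled by the audio stream with 2-second numpy chunks

def enqueue_chunk(chunk):
    # Hypothetical hook: called by the streaming code for every 2-second block.
    chunk_queue.put(chunk)

def inference_worker():
    # Single long-lived thread: only this thread ever calls model.predict().
    while True:
        chunk = chunk_queue.get()
        if chunk is None:  # sentinel used to shut the worker down
            break
        data = librosa.util.normalize(chunk).reshape(-1, audio_rate, 1)
        with graph.as_default():
            with session.as_default():
                probabilities = model.predict(data)
        logging.info("Probabilities: %s", probabilities)

worker = threading.Thread(target=inference_worker, daemon=True)
worker.start()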

0 Answers:

No answers yet.