I am trying to develop a multi-input CNN following the architecture I read about in Multi-Input Convolutional Neural Network for Flower Grading.
I have a CSV file that stores a target value for each data item, and for each item I captured 4 pictures from different angles. When I run the code below, the network summary prints correctly, but training never seems to start: nothing happens, and the GPU utilization reported by nvidia-smi stays below 5%.
Here is the code:
import tensorflow as tf

size = 128  # placeholder; set to the side length of your input images

# Load the per-item target values (kilograms) from the CSV file.
kilograms_trees = tf.data.experimental.CsvDataset(
    filenames='dataset/agrumeto.csv',
    record_defaults=[tf.float32],
    field_delim=",",
    header=True)
kilo_train = kilograms_trees.take(35)
kilo_test = kilograms_trees.skip(35)

def create_conv_layer(input):
    # One branch per view: a 7x7 convolution followed by 2x2 max pooling.
    x = tf.keras.layers.Conv2D(32, (7, 7), activation='relu')(input)
    x = tf.keras.layers.MaxPooling2D((2, 2), strides=(2, 2))(x)
    return tf.keras.Model(inputs=input, outputs=x)

# One input per camera angle.
inputA = tf.keras.Input(shape=(size, size, 3))
inputB = tf.keras.Input(shape=(size, size, 3))
inputC = tf.keras.Input(shape=(size, size, 3))
inputD = tf.keras.Input(shape=(size, size, 3))

x = create_conv_layer(inputA)
y = create_conv_layer(inputB)
w = create_conv_layer(inputC)
z = create_conv_layer(inputD)

# Combine the outputs of the four branches.
combined = tf.keras.layers.concatenate([x.output, y.output, w.output, z.output])

layer_1 = tf.keras.layers.Conv2D(16, (3, 3), activation="relu")(combined)
layer_1 = tf.keras.layers.MaxPooling2D((2, 2))(layer_1)
layer_2 = tf.keras.layers.Conv2D(16, (3, 3), activation="relu")(layer_1)
layer_2 = tf.keras.layers.MaxPooling2D((2, 2), strides=(2, 2))(layer_2)
layer_3 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu")(layer_2)
layer_3 = tf.keras.layers.MaxPooling2D((2, 2), strides=(2, 2))(layer_3)
layer_4 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu")(layer_3)
layer_4 = tf.keras.layers.MaxPooling2D((2, 2), strides=(2, 2))(layer_4)

flatten = tf.keras.layers.Flatten()(layer_4)
hidden1 = tf.keras.layers.Dense(10, activation='relu')(flatten)
output = tf.keras.layers.Dense(1, activation='relu')(hidden1)

model = tf.keras.Model(inputs=[x.input, y.input, w.input, z.input], outputs=output)
model.summary()  # this prints correctly

model.compile(optimizer='adam',
              loss="mean_absolute_percentage_error")

print("[INFO] training model...")
# trainA-trainD / testA-testD: NumPy arrays of the four image views, loaded elsewhere.
model.fit([trainA, trainB, trainC, trainD], kilo_train, epochs=5, batch_size=4)

# Only a loss is compiled (no metrics), so evaluate() returns a single scalar.
test_loss = model.evaluate([testA, testB, testC, testD], kilo_test)
print(test_loss)
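A side note on the targets: a CsvDataset cannot be passed as the y argument of model.fit together with NumPy image inputs, so the take/skip split above never actually feeds label arrays. Below is a minimal sketch of materializing the labels instead, assuming eager execution is enabled (the TF 2.x default) and that the CSV holds a single float column, as record_defaults suggests; trainA-trainD and testA-testD are likewise assumed to be NumPy arrays of the four views, built elsewhere:

import numpy as np

# Each dataset element is a 1-tuple, because record_defaults lists one column.
all_kilos = np.array([float(row[0]) for row in kilograms_trees])
kilo_train = all_kilos[:35]   # same 35-item split as take(35)
kilo_test = all_kilos[35:]    # same as skip(35)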
Here is the nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.40.04 Driver Version: 418.40.04 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1050 On | 00000000:01:00.0 Off | N/A |
| N/A 54C P0 N/A / N/A | 3830MiB / 4042MiB | 8% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 909 C ...ycharmProjects/agrumeto/venv/bin/python 3159MiB |
| 0 1729 G /usr/lib/xorg/Xorg 27MiB |
| 0 1870 G /usr/bin/gnome-shell 69MiB |
| 0 6290 G /usr/lib/xorg/Xorg 273MiB |
| 0 6420 G /usr/bin/gnome-shell 127MiB |
| 0 6834 G ...quest-channel-token=6261236721362009153 85MiB |
| 0 8806 G ...pycharm-professional/132/jre64/bin/java 2MiB |
| 0 12830 G ...-token=60E939FEF0A8E3D5C46B3D6911048536 31MiB |
| 0 27478 G ...-token=ECA4D3D9ADD8448674D34492E89E40E3 51MiB |
+-----------------------------------------------------------------------------+
Answer 0 (score: 1):
I had forgotten to disable eager execution, which is enabled by default in TensorFlow 2.0. That was the problem.
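In case it helps anyone else, a minimal sketch of that fix; the call has to run before any dataset or model is created:

import tensorflow as tf

# Eager execution is on by default in TF 2.x; this compat call turns it
# off and must execute before any other TensorFlow objects are built.
tf.compat.v1.disable_eager_execution()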