What should I do?
:18+GCC_BUILD=false
:19+GLIBC_BUILD=false
:20+KERNEL_BUILD=false
:21+POSTGRESQL_BUILD=false
:22+PYTHON_BUILD=true
:23+REINSTALL_PACKAGES=false
:24+RUBY_BUILD=true
:25+SEGMENTED_DOWNLOAD=false
:26+UBUNTU_RSYNC=false
:29+echo false
:30+echo false
:31+echo false
:32+echo false
:33+echo true
:34+echo false
:35+echo true
:36+echo false
:37+echo false
:39+ARTIFACTORY=myregistry
:42+[[ false == \t\r\u\e ]]
:47+[[ false == \t\r\u\e ]]
:52+[[ false == \t\r\u\e ]]
:57+[[ false == \t\r\u\e ]]
:62+[[ true == \t\r\u\e ]]
:63+echo 'Running PYTHON_BUILD'
Running PYTHON_BUILD
running: docker run -i --name OTQ0YzJjZTU4YWQy myregistry/image
:64+run_container python-build
::6+date +%s
::6+sha256sum
::6+base64
::6+head -c 16
:6+CONTAINER_NAME=OTQ0YzJjZTU4YWQy
:7+echo 'running: docker run -i --name OTQ0YzJjZTU4YWQy myregistry/image'
:8+docker run -i --name OTQ0YzJjZTU4YWQy myregistry/image
Done with container python-build
:9+echo 'removing: container OTQ0YzJjZTU4YWQy from python-build'
:10+docker container rm OTQ0YzJjZTU4YWQy
:11+echo 'removing: image python-build'
:12+docker image rm myregistry/image
:13+echo -e '\nDone with container python-build\n\n\n'
:14+return 0
So all the tensors in nn are the same, and only the input/training branches are different?
In pure TensorFlow you can use
nn = get_networks()
A = nn(X_input)
B = nn(X_other_input)
C = A + B
model = ...
and name your layers carefully.
But basically you can build nn first, since you cannot pass uncalled layers into a call!
For example:
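As a concrete, runnable version of that pseudocode in the Keras functional API (a minimal sketch; the Sequential stack standing in for get_networks(), the layer sizes, and the (2, 2) input shape are illustrative assumptions, not from the original post):

from tensorflow import keras

# nn is built once; calling the same object on several inputs shares its weights.
nn = keras.Sequential([
    keras.layers.Flatten(),
    keras.layers.Dense(10, name='shared_d1'),
    keras.layers.Dense(10, name='shared_d2'),
])

X_input = keras.layers.Input((2, 2))
X_other_input = keras.layers.Input((2, 2))

A = nn(X_input)
B = nn(X_other_input)            # same weights as A
C = keras.layers.Add()([A, B])   # C = A + B

model = keras.Model(inputs=[X_input, X_other_input], outputs=C)
model.compile(loss='mse', optimizer='adam')

Because nn is a single object, both branches read and update the same weights during training.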
with tf.variable_scope('something', reuse=tf.AUTO_REUSE):
    # define stuff here
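Expanded into a runnable TF1-style sketch (this assumes the tf.compat.v1 graph-mode API; the nn helper, the layer names, sizes, and placeholder shapes are illustrative assumptions):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def nn(x):
    # AUTO_REUSE: the first call creates the variables, later calls reuse them.
    with tf.variable_scope('something', reuse=tf.AUTO_REUSE):
        h = tf.layers.dense(x, 10, name='l1')
        return tf.layers.dense(h, 10, name='l2')

x1 = tf.placeholder(tf.float32, [None, 4])
x2 = tf.placeholder(tf.float32, [None, 4])
a = nn(x1)
b = nn(x2)  # shares the variables created by the first call
c = a + b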
Update:
I have accomplished this by creating an uncompiled model to serve as the sub-network. That "model" can then be passed into other network-building functions. For example, if you have a functional equation to solve, you can approximate the unknown function with a network, and then pass that network into the functional, which is itself a network.
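A minimal sketch of that sub-model idea (the helper names build_subnet and build_outer_model, the layer sizes, and the input shapes are illustrative assumptions):

from tensorflow import keras

def build_subnet(input_shape):
    # Uncompiled sub-model: it owns weights but is never trained on its own.
    inp = keras.layers.Input(input_shape)
    hidden = keras.layers.Dense(10)(keras.layers.Flatten()(inp))
    return keras.Model(inp, keras.layers.Dense(10)(hidden))

def build_outer_model(subnet, input_shape):
    # The sub-model is used like a layer; every call shares its weights.
    in1 = keras.layers.Input(input_shape)
    in2 = keras.layers.Input(input_shape)
    merged = keras.layers.Add()([subnet(in1), subnet(in2)])
    return keras.Model([in1, in2], merged)

subnet = build_subnet((2, 2))
model = build_outer_model(subnet, (2, 2))
model.compile(loss='mse', optimizer='adam')  # only the outer model is compiled

Because the sub-model is never compiled, its weights are updated only through the outer model's loss.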
Answer 0 (score: 0)
It depends on how you want to reuse them, but the idea is to keep a reference to the layers once they are created and then apply them multiple times later.
import tensorflow as tf
import tensorflow.keras as keras
import numpy as np

# Shared layers are created once, stored here, and reused by both networks.
shared_layers = {}

def net1(inp):
    shared_layers["l1"] = keras.layers.Dense(10)
    shared_layers["l2"] = keras.layers.Dense(10)
    return shared_layers["l1"](shared_layers["l2"](keras.layers.Flatten()(inp)))

def net2(inp):
    # Reuses the layers (and therefore the weights) created in net1.
    return shared_layers["l1"](shared_layers["l2"](keras.layers.Flatten()(inp)))

input1 = keras.layers.Input((2, 2))
input2 = keras.layers.Input((2, 2))

model1 = keras.Model(inputs=input1, outputs=net1(input1))
model1.compile(loss=keras.losses.mean_squared_error, optimizer=keras.optimizers.Adam())

model2 = keras.Model(inputs=input2, outputs=net2(input2))
model2.compile(loss=keras.losses.mean_squared_error, optimizer=keras.optimizers.Adam())

x = np.random.randint(0, 100, (50, 2, 2))
m1 = model1.predict(x)
m2 = model2.predict(x)

print(x[0])
print(m1[0])
print(m2[0])
The outputs are identical:
[ 10.114908 -13.074531 -8.671929 -59.03201 55.389366 1.3610549
-38.051434 8.355987 7.5310936 -27.717983 ]
[ 10.114908 -13.074531 -8.671929 -59.03201 55.389366 1.3610549
-38.051434 8.355987 7.5310936 -27.717983 ]