Optimizing parameters in odeint using a neural-network output in TensorFlow

Date: 2018-08-26 05:32:47

Tags: python tensorflow

I want to optimize the coefficients of an ODE using TensorFlow.

def odeModel(state, t):
    x, y, z = tf.unstack(state)
    dx = y
    # Here I want to define dy and dz as follows:
    # [dy, dz] = tf.nn.relu(tf.matmul([y, z], W) + b)
    return tf.stack([dx, dy, dz])

Basically, my goal is to define [dy, dz] as a mapping of [y, z] that depends on TensorFlow variables 'W' and 'b' of appropriate size. I then want to find the 'W' and 'b' that minimize a loss function depending on the trajectory starting from 'state0'. Is this possible?
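For shape intuition, the intended mapping can be sketched in plain NumPy. The weight values below are purely hypothetical placeholders for illustration:

```python
import numpy as np

# Hypothetical weights of the appropriate size: a 2x2 matrix and a length-2 bias.
W = np.array([[0.5, -0.2],
              [0.1,  0.3]])
b = np.array([0.0, 0.1])

y, z = 1.0, 3.0
# [dy, dz] = relu(W^T [y, z] + b), the mapping described above.
dy, dz = np.maximum(W.T @ np.array([y, z]) + b, 0.0)
```

Here `dy` and `dz` are scalars, so they can be stacked with `dx` to form the state derivative.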

My goal is to write the rest of the code along the following lines.

t = np.linspace(0, 5, 100)
state0 = #Appropriate starting point, e.g., tf.constant([0, 1, 3], dtype=tf.float64)
states = tf.contrib.integrate.odeint(odeModel, state0, t)

loss = tf.reduce_mean(tf.pow(states[:, 2], 2))
optimizer = tf.train.AdagradOptimizer(0.05).minimize(loss)

Of course, I would then need to create a session and run the optimizer; I have omitted those details for brevity. I would like to know whether there is a way to achieve this.
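For reference, the same idea can be sketched without TensorFlow at all, using `scipy.integrate.odeint` and finite-difference gradient descent. This is only an illustration of "optimize ODE coefficients against a trajectory loss"; the initialization scale and learning rate are arbitrary assumptions, and TensorFlow would instead differentiate through its own integrator:

```python
import numpy as np
from scipy.integrate import odeint

RS = np.random.RandomState(0)
W = RS.randn(2, 2) * 0.1   # plays the role of the TF variable 'W'
b = RS.randn(2) * 0.1      # plays the role of the TF variable 'b'

def ode_model(state, t, W, b):
    x, y, z = state
    # [dy, dz] = relu(W^T [y, z] + b)
    dy, dz = np.maximum(W.T @ np.array([y, z]) + b, 0.0)
    return [y, dy, dz]

def loss(params):
    W, b = params[:4].reshape(2, 2), params[4:]
    t = np.linspace(0, 5, 100)
    states = odeint(ode_model, [0.0, 1.0, 3.0], t, args=(W, b))
    return np.mean(states[:, 2] ** 2)

# Plain gradient descent with finite-difference gradients.
params = np.concatenate([W.ravel(), b])
lr, eps = 0.01, 1e-6
before = loss(params)
for _ in range(10):
    base = loss(params)
    grad = np.array([(loss(params + eps * np.eye(6)[i]) - base) / eps
                     for i in range(6)])
    params -= lr * grad
after = loss(params)
```

After a few steps the loss shrinks, since the updates push the ReLU pre-activation down so that z stops growing.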

1 Answer:

Answer 0 (score: 0):

This can be done exactly the way you describe:

import tensorflow as tf
import numpy as np


RS = np.random.RandomState(42)

# Defining model parameters as TF variables
W1 = tf.Variable(RS.randn(2, 1))
b1 = tf.Variable(RS.randn(1,))

W2 = tf.Variable(RS.randn(2, 1))
b2 = tf.Variable(RS.randn(1,))

def odeModel(state, t):
    x, y, z = tf.unstack(state)
    dx = y

    # Model definition: affine map of [y, z] followed by a ReLU
    yz = tf.expand_dims(tf.stack([y, z]), -1)  # shape (2, 1)
    dy = tf.nn.relu(tf.matmul(yz, W1, transpose_a=True) + b1)
    dz = tf.nn.relu(tf.matmul(yz, W2, transpose_a=True) + b2)
    return tf.stack([dx, tf.squeeze(dy), tf.squeeze(dz)])

t = np.linspace(0, 5, 100)
state0 = tf.constant([0, 1, 3], dtype=tf.float64)
states, info = tf.contrib.integrate.odeint(odeModel, state0, t, full_output=True)

loss = tf.reduce_mean(tf.pow(states[:, 2], 2))
optimizer = tf.train.AdagradOptimizer(0.05).minimize(loss)

# ----
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Value before optimizing
sess.run(W1)
# array([[ 0.49671415],
#        [-0.1382643 ]])

# Optimize for 10 steps.
for i in range(10): sess.run(optimizer)

# Value after optimization
sess.run(W1)
# array([[ 0.38043613],
#        [-0.26166077]])