How to migrate a TensorFlow 1.x class to TensorFlow 2.1.x

Date: 2020-04-23 18:06:32

Tags: tensorflow keras deep-learning tensorflow2.0 tf.keras

For my study project I used TensorFlow 1.x in Python, and since I am new to RNNs, some of the APIs I relied on appear to be deprecated. My actual goal is to replace LSTM with CuDNNLSTM, but first I need to migrate to TensorFlow 2.1.x.

class Model:
    def __init__(
        self,
        learning_rate,
        num_layers,
        size,
        size_layer,
        output_size,
        forget_bias = 0.1,
    ):
        def lstm_cell(size_layer):
            return tf.compat.v1.nn.rnn_cell.LSTMCell(size_layer, state_is_tuple = False)

        rnn_cells = tf.compat.v1.nn.rnn_cell.MultiRNNCell(
            [lstm_cell(size_layer) for _ in range(num_layers)],
            state_is_tuple = False,
        )
        self.X = tf.compat.v1.placeholder(tf.float32, (None, None, size))
        self.Y = tf.compat.v1.placeholder(tf.float32, (None, output_size))
        drop = tf.compat.v1.nn.rnn_cell.DropoutWrapper(
            rnn_cells, output_keep_prob = forget_bias
        )
        self.hidden_layer = tf.compat.v1.placeholder(
            tf.float32, (None, num_layers * 2 * size_layer)
        )
        self.outputs, self.last_state = tf.compat.v1.nn.dynamic_rnn(
            drop, self.X, initial_state = self.hidden_layer, dtype = tf.float32
        )
        self.logits = tf.compat.v1.layers.dense(self.outputs[-1], output_size)
        self.cost = tf.reduce_mean(tf.square(self.Y - self.logits))
        self.optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate).minimize(
            self.cost
        ) 

I have been trying to change a few things, for example: tf.compat.v1.nn.rnn_cell.LSTMCell to tf.keras.layers.LSTMCell, tf.compat.v1.nn.rnn_cell.MultiRNNCell to tf.keras.layers.StackedRNNCells, and tf.compat.v1.nn.dynamic_rnn to tf.keras.layers.RNN.
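For reference, here is a minimal sketch of how those Keras classes fit together (the sizes are hypothetical, chosen only for illustration). One key difference from `dynamic_rnn`: `tf.keras.layers.RNN` is *constructed* with the cell, and then *called* on the input tensor, rather than receiving the inputs in its constructor.

```python
import tensorflow as tf

# Hypothetical sizes for illustration only.
num_layers, size, size_layer = 2, 8, 16

# Stack several LSTM cells into one composite cell.
cells = tf.keras.layers.StackedRNNCells(
    [tf.keras.layers.LSTMCell(size_layer) for _ in range(num_layers)]
)

# RNN is built with the cell, then called on the inputs;
# the inputs are NOT passed to the constructor as with dynamic_rnn.
rnn = tf.keras.layers.RNN(cells, return_sequences=True, return_state=True)

x = tf.zeros((4, 10, size))  # (batch, time, features)
outputs, *states = rnn(x)
print(outputs.shape)  # (4, 10, 16)
```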

Like this:

class Model:
    def __init__(
        self,
        learning_rate,
        num_layers,
        size,
        size_layer,
        output_size,
        forget_bias = 0.1,
    ):
        def lstm_cell(size_layer):
            return tf.keras.layers.LSTMCell(size_layer)

        rnn_cells = tf.keras.layers.StackedRNNCells(
            [lstm_cell(size_layer) for _ in range(num_layers)]
        )
        self.X = tf.compat.v1.placeholder(tf.float32, (None, None, size))
        self.Y = tf.compat.v1.placeholder(tf.float32, (None, output_size))
        drop = tf.compat.v1.nn.rnn_cell.DropoutWrapper(
            rnn_cells, output_keep_prob = forget_bias
        )
        self.hidden_layer = tf.compat.v1.placeholder(
            tf.float32, (None, num_layers * 2 * size_layer)
        )

        self.outputs, self.last_state = tf.keras.layers.RNN(
            drop, self.X, initial_state = self.hidden_layer, dtype = tf.float32
        )
        self.logits = tf.compat.v1.layers.dense(self.outputs[-1], output_size)
        self.cost = tf.reduce_mean(tf.square(self.Y - self.logits))
        self.optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate).minimize(
            self.cost
        )

But the result is still not what I expected.
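As an aside, in TensorFlow 2 the whole class above can also be expressed as a Keras model, which avoids placeholders and manual optimizer wiring entirely. This is only a sketch under assumed hyperparameters; note that the `dropout` argument here applies to the layer inputs, which is not an exact equivalent of the original `output_keep_prob` wrapper.

```python
import tensorflow as tf

# Hypothetical hyperparameters mirroring the constructor arguments above.
learning_rate, num_layers, size, size_layer, output_size = 0.01, 2, 8, 4, 4

model = tf.keras.Sequential()
for i in range(num_layers):
    model.add(tf.keras.layers.LSTM(
        size_layer,
        # Intermediate layers feed full sequences forward; the last
        # layer emits only its final output, like outputs[-1] above.
        return_sequences=(i < num_layers - 1),
    ))
model.add(tf.keras.layers.Dense(output_size))
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate),
    loss="mse",  # matches tf.reduce_mean(tf.square(Y - logits))
)

x = tf.zeros((4, 10, size))  # (batch, time, features)
y = model(x)
print(y.shape)  # (4, 4)
```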

0 answers