How to merge two different models and train them in TensorFlow?

Asked: 2018-07-12 07:05:21

Tags: tensorflow deep-learning

I have been searching for information on how to implement this deep learning model in TensorFlow, and I am finally asking here because I could not work it out on my own. This may be a very basic question, but I would appreciate your answers.

[Images: network diagrams of the two models, DNN1 (speech enhancement) and DNN2 (STOI estimation)]

import tensorflow as tf
import random
import os
import numpy as np
import time
import csv
from random import shuffle

np.random.seed(1117)  # for reproducibility

# parameters
learning_rate = 1E-5 * 5

batch_size_SE = 2500
batch_size_STOI = 50

spl = 5
frames = 50
con_frame = 50

feature_dim = 256

nb_epoch = 50
layer_width = 2048

training_length = 500
validation_length = 50

keep_prob = tf.placeholder(tf.float32)

# SE input/output placeholders
X = tf.placeholder(tf.float32, [batch_size_SE, feature_dim*(2*spl+1)])  # [2500, 2816]
Y = tf.placeholder(tf.float32, [batch_size_SE, feature_dim])            # [2500, 256]

# STOI input/output placeholders
STOI_feature = tf.placeholder(tf.float32, [batch_size_STOI, feature_dim*frames*2])  # [50, 25600]
STOI_target = tf.placeholder(tf.float32, [batch_size_STOI, 1])                      # [50, 1]



#########################Speech enhancement DNN#########################

# SE_1st Hidden layer
W11 = tf.get_variable("W11", shape=[(2*spl+1)*feature_dim,layer_width], initializer=tf.contrib.layers.xavier_initializer())
b11 = tf.Variable(tf.random_normal([layer_width]))
L11 = tf.nn.relu(tf.matmul(X, W11) + b11)
L11 = tf.nn.dropout(L11, keep_prob=keep_prob)

# SE_2nd Hidden layer   
W12 = tf.get_variable("W12", shape=[layer_width,layer_width], initializer=tf.contrib.layers.xavier_initializer())
b12 = tf.Variable(tf.random_normal([layer_width]))
L12 = tf.nn.relu(tf.matmul(L11, W12)+ b12)
L12 = tf.nn.dropout(L12, keep_prob=keep_prob)

# SE_3rd Hidden layer
W13 = tf.get_variable("W13", shape=[layer_width, layer_width], initializer=tf.contrib.layers.xavier_initializer())  # fixed: was named "W23", colliding with the STOI layer's variable below
b13 = tf.Variable(tf.random_normal([layer_width]))
L13 = tf.nn.relu(tf.matmul(L12, W13) + b13)
L13 = tf.nn.dropout(L13, keep_prob=keep_prob)

# SE_4th Hidden layer
W14 = tf.get_variable("W14", shape=[layer_width,layer_width], initializer=tf.contrib.layers.xavier_initializer())   
b14 = tf.Variable(tf.random_normal([layer_width]))
L14 = tf.nn.relu(tf.matmul(L13, W14)+ b14)
L14 = tf.nn.dropout(L14, keep_prob=keep_prob)

# enhanced_speech_output layer
W15 = tf.get_variable("W15", shape=[layer_width,feature_dim], initializer=tf.contrib.layers.xavier_initializer())   
b15 = tf.Variable(tf.random_normal([feature_dim]))
SE_hypothesis = tf.matmul(L14, W15) + b15



#########################STOI estimation DNN#########################

# STOI_1st Hidden layer
W21 = tf.get_variable("W21", shape=[feature_dim*frames*2,layer_width], initializer=tf.contrib.layers.xavier_initializer())
b21 = tf.Variable(tf.random_normal([layer_width]))
L21 = tf.nn.relu(tf.matmul(X, W21) + b21)  # <-- error source: X is [2500, 2816] but W21 expects 25600 inputs ("Dimensions must be equal, but are 2816 and 25600"); STOI_feature (or the SE output) was presumably intended here
L21 = tf.nn.dropout(L21, keep_prob=keep_prob)

# STOI_2nd Hidden layer 
W22 = tf.get_variable("W22", shape=[layer_width,layer_width], initializer=tf.contrib.layers.xavier_initializer())
b22 = tf.Variable(tf.random_normal([layer_width]))
L22 = tf.nn.relu(tf.matmul(L21, W22) + b22)  # fixed: originally referenced an undefined L1
L22 = tf.nn.dropout(L22, keep_prob=keep_prob)

# STOI_3rd Hidden layer
W23 = tf.get_variable("W23", shape=[layer_width, layer_width], initializer=tf.contrib.layers.xavier_initializer())
b23 = tf.Variable(tf.random_normal([layer_width]))
L23 = tf.nn.relu(tf.matmul(L22, W23) + b23)
L23 = tf.nn.dropout(L23, keep_prob=keep_prob)

# STOI_4th Hidden layer
W24 = tf.get_variable("W24", shape=[layer_width,layer_width], initializer=tf.contrib.layers.xavier_initializer())   
b24 = tf.Variable(tf.random_normal([layer_width]))
L24 = tf.nn.relu(tf.matmul(L23, W24)+ b24)
L24 = tf.nn.dropout(L24, keep_prob=keep_prob)

# STOI_output layer
W25 = tf.get_variable("W25", shape=[layer_width,1], initializer=tf.contrib.layers.xavier_initializer()) 
b25 = tf.Variable(tf.random_normal([1]))
STOI_hypothesis = tf.matmul(L24, W25) + b25



#########################Cost function and optimizer#########################

# Only the SE network's (DNN1's) variables are trained; DNN2 stays frozen.
SE_var_list = [W11, W12, W13, W14, W15, b11, b12, b13, b14, b15]

cost = tf.reduce_mean(tf.square(STOI_target - STOI_hypothesis))

# Note: this only works if cost actually depends on the SE variables, i.e. if
# SE_hypothesis feeds the STOI network's input; otherwise minimize() finds no
# gradients for SE_var_list.
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost, var_list=SE_var_list)

saver = tf.train.Saver()

Now I want to train these two neural network models jointly. DNN1 is the model I want to train, and DNN2 is a model I have already trained. I tried to build a framework for the two models in TensorFlow as above, but it fails with the error message "Dimensions must be equal, but are 2816 and 25600". I don't think I should train it like a conventional single TensorFlow DNN, so I am asking what approach I can use to modify the code.
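What I was hoping to express is something like the following (a rough sketch of the wiring I have in mind; the reshape/concat framing of the SE output into the STOI input is my own guess, and clean_ref is a hypothetical reference-feature placeholder):

# Hypothetical wiring, not working code: group the SE output's 2500 frames
# into 50 blocks of 50 frames each and pair every block with a reference block.
clean_ref = tf.placeholder(tf.float32, [batch_size_STOI, feature_dim * frames])  # assumed reference input

enhanced = tf.reshape(SE_hypothesis, [batch_size_STOI, feature_dim * frames])  # [50, 12800]
STOI_input = tf.concat([enhanced, clean_ref], axis=1)                          # [50, 25600]

# The STOI network's first layer would then consume STOI_input instead of the
# separate STOI_feature placeholder, so that cost depends on the SE variables:
L21 = tf.nn.relu(tf.matmul(STOI_input, W21) + b21)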

1 Answer:

Answer 0 (score: 0)

You are asking a very broad question. The error that pops up means you are trying to feed a value into a tensor of a different size.
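As a tiny standalone illustration (not your code), this is the kind of graph construction that produces that message:

import tensorflow as tf

a = tf.placeholder(tf.float32, [50, 2816])
w = tf.placeholder(tf.float32, [25600, 2048])
# Raises ValueError at graph-construction time:
# "Dimensions must be equal, but are 2816 and 25600 ..."
bad = tf.matmul(a, w)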

I'll stick with the DNNs you posted in the pictures. It seems hard at first, but it is easier than it looks. I won't give you the code, but I'll walk you through what to do:


The code linked here is the key to merging two different graphs. What you need to do is load your already-trained model into the second graph, and it should work for your purpose. I challenge you to try it yourself; you will learn a lot. If you are a beginner with TF, I suggest you try something easier first, maybe MNIST?
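I won't paste the linked code here, but the usual TF 1.x pattern it points at looks roughly like this (a sketch only; the checkpoint name 'DNN2_model' and the tensor names are placeholders for whatever you used when saving):

import tensorflow as tf

# Build DNN1 (the trainable network) in the default graph as usual, then
# import the already-trained DNN2 and restore its weights.
saver_dnn2 = tf.train.import_meta_graph('DNN2_model.meta')  # hypothetical path
graph = tf.get_default_graph()

# Look up DNN2's input/output tensors by name (the names depend on how it was saved).
stoi_in = graph.get_tensor_by_name('STOI_input:0')    # assumed name
stoi_out = graph.get_tensor_by_name('STOI_output:0')  # assumed name

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver_dnn2.restore(sess, 'DNN2_model')  # load DNN2's trained weights
    # ...then train only DNN1's variables via var_list, as in your code.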

Hope I was able to help!

PS: I'm curious, can you tell me what you are trying to train?
