I am working on some electricity load forecasting, and I want to initialize the weights and biases of the network. I have already computed the weights and biases with a different algorithm and saved them to a file, and I want to use that file to start training from those weights and biases.

Here is the code I want to update.
# RNN design
tf.reset_default_graph()

inputs = 1    # input vector size
hidden = 100  # number of hidden units
output = 1    # output vector size

X = tf.placeholder(tf.float32, [None, num_periods, inputs])
y = tf.placeholder(tf.float32, [None, num_periods, output])

basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=hidden, activation=tf.nn.relu)
rnn_output, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)

learning_rate = 0.001  # small learning rate so we don't overshoot the minimum

stacked_rnn_output = tf.reshape(rnn_output, [-1, hidden])         # flatten to 2-D for the dense layer
stacked_outputs = tf.layers.dense(stacked_rnn_output, output)     # fully connected output layer
outputs = tf.reshape(stacked_outputs, [-1, num_periods, output])  # restore the sequence shape

loss = tf.reduce_mean(tf.square(outputs - y))  # MSE cost function evaluating the quality of the model
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)  # Adam optimizer
training_op = optimizer.minimize(loss)  # one gradient step on the cost function

init = tf.global_variables_initializer()  # initializes all variables (randomly)
epochs = 1000  # number of training cycles; each includes a feed-forward and a backpropagation pass
mape = []

def mean_absolute_percentage_error(y_true, y_pred):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

y_pred = {'NSW': [], 'QLD': [], 'SA': [], 'TAS': [], 'VIC': []}

for st in state.values():
    print("State: ", st, end='\n')
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        init.run()
        for ep in range(epochs):
            sess.run(training_op, feed_dict={X: x_batches[st], y: y_batches[st]})
            if ep % 100 == 0:
                mse = loss.eval(feed_dict={X: x_batches[st], y: y_batches[st]})
                print(ep, "MSE:", mse)
        y_pred[st] = sess.run(outputs, feed_dict={X: x_batches_test[st]})
        print("\n")
I am finding the weights and biases with the following algorithm and saving them in weights and biases as a list of lists.
class network:
    def set_weight_bias(self, a):
        # Unpack the flat parameter vector `a` into per-layer weight
        # matrices and bias vectors, following the layer sizes in self.sizes.
        lIt = 0
        rIt = 0
        self.weights = []
        self.biases = []
        for x, y in zip(self.sizes[1:], self.sizes[:-1]):
            rIt += x * y
            self.weights.append(a[lIt:rIt].reshape((x, y)))
            lIt = rIt
        for x in self.sizes[1:]:
            rIt += x
            self.biases.append(a[lIt:rIt].reshape((x, 1)))
            lIt = rIt

    ...

    """
    Cuckoo Search Optimization
    """
    def objectiveFunction(self, x):
        # Mean absolute error of the network outputs for candidate parameters x.
        self.set_weight_bias(x)
        y_prime = self.feedforward(self.input)
        return sum(abs(u - v) for u, v in zip(y_prime, self.output)) / x.shape[0]

    def cso(self, n, x, y, function, lb, ub, dimension, iteration, pa=0.25,
            nest=100):
        """
        :param n: number of agents
        :param function: test function
        :param lb: lower limits for plot axes
        :param ub: upper limits for plot axes
        :param dimension: space dimension
        :param iteration: number of iterations
        :param pa: probability of cuckoo's egg detection (default value is 0.25)
        :param nest: number of nests (default value is 100)
        """
        ...
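To move the optimized parameters from this class into the TensorFlow script, one possible sketch is to persist the flat parameter vector with NumPy; best, net, and the file name below are assumptions for illustration, not names from the code above:

# `best` stands for the flat parameter vector returned by the cuckoo search,
# and `net` for a `network` instance; both names are hypothetical.
np.save('cso_params.npy', best)    # after optimization

a = np.load('cso_params.npy')      # later, in the training script
net.set_weight_bias(a)             # rebuilds net.weights / net.biases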
I want to start training with these custom weights and biases rather than letting TensorFlow assign the weights and biases randomly. How can I do that in TensorFlow?
Answer 0 (score: 1)
For each layer, you can refer to the documentation to see how the initialization is done:
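For example, a minimal sketch for the dense layer, assuming w_dense and b_dense are NumPy arrays loaded from your file with shapes (hidden, output) and (output,) (the file names here are hypothetical):

w_dense = np.load('dense_weights.npy')   # hypothetical file of saved weights
b_dense = np.load('dense_biases.npy')    # hypothetical file of saved biases

# tf.layers.dense accepts initializers, so the precomputed arrays can be
# wrapped in tf.constant_initializer instead of the default random init.
stacked_outputs = tf.layers.dense(
    stacked_rnn_output, output,
    kernel_initializer=tf.constant_initializer(w_dense),
    bias_initializer=tf.constant_initializer(b_dense))

With this, init.run() fills the dense layer's kernel and bias with your saved values rather than random ones.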
Answer 1 (score: 1)
Are you trying to set the weights for the RNN cell or for the dense layer? If it is for the RNN cell, you should be able to set the weights using the set_weights method.

If it is for the dense layer, you should be able to assign a Variable and pass in your weights through the initializer argument (plus another one for the biases). Then, when you call layers.dense, you can pass the variable tensors to kernel_initializer and bias_initializer respectively to get your weights and biases.
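In case set_weights is not available on the contrib cell, an alternative sketch is to let the graph initialize randomly and then overwrite the cell's variables with tf.assign. Here w_rnn and b_rnn are assumed NumPy arrays with shapes (inputs + hidden, hidden) and (hidden,), since BasicRNNCell concatenates the input and state into a single kernel:

# Locate the cell's variables after the graph is built; BasicRNNCell
# creates one kernel and one bias under the 'basic_rnn_cell' scope.
rnn_vars = [v for v in tf.trainable_variables() if 'basic_rnn_cell' in v.name]
kernel_var, bias_var = rnn_vars

assign_ops = [tf.assign(kernel_var, w_rnn), tf.assign(bias_var, b_rnn)]

with tf.Session() as sess:
    sess.run(init)          # random initialization first
    sess.run(assign_ops)    # then overwrite with the precomputed values
    # ... continue with the training loop as before ...

The assign ops only need to run once per session, before the first training step.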