I want to initialize an RNN's parameters with numpy arrays. In the example below, I want to pass w to the parameters of rnn. I know PyTorch provides many initialization methods such as Xavier and uniform, but is there a way to initialize the parameters by passing a numpy array?
import numpy as np
from torch import nn

input_size, hidden_size, num_layers = 3, 4, 2  # example sizes, chosen arbitrarily
rng = np.random.RandomState(313)
w = rng.randn(input_size, hidden_size).astype(np.float32)

rnn = nn.RNN(input_size, hidden_size, num_layers)
Answer 0 (score: 3)
First, note that nn.RNN has more than one weight variable, c.f. the documentation:

Variables:
- weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size * input_size) for k = 0. Otherwise, the shape is (hidden_size * hidden_size)
- weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size * hidden_size)
- bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
- bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
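To make those documented shapes concrete, here is a short illustrative listing (the sizes 3, 4, 2 are arbitrary example values):

import torch
from torch import nn

rnn = nn.RNN(input_size=3, hidden_size=4, num_layers=2)
for name, param in rnn.named_parameters():
    print(name, tuple(param.shape))
# weight_ih_l0 (4, 3)   <- (hidden_size, input_size) for layer 0
# weight_hh_l0 (4, 4)
# bias_ih_l0 (4,)
# bias_hh_l0 (4,)
# weight_ih_l1 (4, 4)   <- (hidden_size, hidden_size) for deeper layers
# weight_hh_l1 (4, 4)
# bias_ih_l1 (4,)
# bias_hh_l1 (4,)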
Now, each of these variables (Parameter instances) is an attribute of your nn.RNN instance. You can access and edit them in two ways, as shown below:

Solution 1: accessing all the RNN Parameter attributes by name (rnn.weight_hh_lK, rnn.weight_ih_lK, etc.):
import torch
from torch import nn
import numpy as np

input_size, hidden_size, num_layers = 3, 4, 2
use_bias = True
rng = np.random.RandomState(313)

rnn = nn.RNN(input_size, hidden_size, num_layers, bias=use_bias)

def set_nn_parameter_data(layer, parameter_name, new_data):
    # Replace the tensor backing the named Parameter in place.
    param = getattr(layer, parameter_name)
    param.data = new_data

for i in range(num_layers):
    # weight_ih has shape (hidden_size, input_size) for layer 0 and
    # (hidden_size, hidden_size) for the layers above it.
    in_features = input_size if i == 0 else hidden_size
    weights_hh_layer_i = rng.randn(hidden_size, hidden_size).astype(np.float32)
    weights_ih_layer_i = rng.randn(hidden_size, in_features).astype(np.float32)
    set_nn_parameter_data(rnn, "weight_hh_l{}".format(i),
                          torch.from_numpy(weights_hh_layer_i))
    set_nn_parameter_data(rnn, "weight_ih_l{}".format(i),
                          torch.from_numpy(weights_ih_layer_i))

    if use_bias:
        bias_hh_layer_i = rng.randn(hidden_size).astype(np.float32)
        bias_ih_layer_i = rng.randn(hidden_size).astype(np.float32)
        set_nn_parameter_data(rnn, "bias_hh_l{}".format(i),
                              torch.from_numpy(bias_hh_layer_i))
        set_nn_parameter_data(rnn, "bias_ih_l{}".format(i),
                              torch.from_numpy(bias_ih_layer_i))
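As a quick, illustrative sanity check (not part of the original answer), you can confirm that a parameter now holds your numpy values rather than PyTorch's default initialization, and that the forward pass still works:

print(rnn.weight_hh_l0)                      # values drawn from rng above
out, h = rnn(torch.zeros(5, 1, input_size))  # (seq_len, batch, input_size) input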
Solution 2: accessing all the RNN Parameter attributes through the rnn.all_weights list attribute:
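A minimal sketch of this second approach, assuming rnn.all_weights[i] lists layer i's parameters in the order [weight_ih, weight_hh, bias_ih, bias_hh] (reusing rnn, rng, and the sizes defined above):

for i in range(num_layers):
    in_features = input_size if i == 0 else hidden_size
    weights_ih_layer_i = rng.randn(hidden_size, in_features).astype(np.float32)
    weights_hh_layer_i = rng.randn(hidden_size, hidden_size).astype(np.float32)
    rnn.all_weights[i][0].data = torch.from_numpy(weights_ih_layer_i)
    rnn.all_weights[i][1].data = torch.from_numpy(weights_hh_layer_i)

    if use_bias:
        bias_ih_layer_i = rng.randn(hidden_size).astype(np.float32)
        bias_hh_layer_i = rng.randn(hidden_size).astype(np.float32)
        rnn.all_weights[i][2].data = torch.from_numpy(bias_ih_layer_i)
        rnn.all_weights[i][3].data = torch.from_numpy(bias_hh_layer_i)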
Answer 1 (score: 2)
Since a detailed answer has already been provided, I just want to add one more point. The parameters of an nn.Module are Tensors (previously they were autograd Variables, which are deprecated since PyTorch 0.4). So essentially, you need to convert your numpy arrays to Tensors with the torch.from_numpy() method, and then use them to initialize the nn.Module parameters.
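Applied to the question's w (shape (input_size, hidden_size)), that might look like the hypothetical one-liner below; note that weight_ih_l0 expects shape (hidden_size, input_size), so the array needs transposing first:

rnn.weight_ih_l0.data = torch.from_numpy(w.T.copy())  # .copy() makes the transposed array contiguous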