How to use numpy

Asked: 2019-01-06 00:37:32

Tags: python-3.x numpy recurrent-neural-network backpropagation

I am trying to implement a recurrent neural network in Python using NumPy, specifically a many-to-one RNN for a classification problem. I am a little fuzzy on the pseudocode, especially the BPTT (backpropagation through time) concept. I am comfortable with the forward pass (though not sure my implementation is correct), but I am really confused about the backward pass, so I need some advice from experts in this area.

I did look at these related posts:

1) Implementing RNN in numpy

2) Output for RNN

3) How can I build RNN

But I feel my problem lies in understanding the pseudocode/concept in the first place; the code in those posts is complete and further along than where I am.

My implementation is inspired by this tutorial:

WildML RNN from scratch

I did implement a feed-forward neural network following a tutorial by the same author, but I am really confused by this implementation of his. Andrew Ng's RNN videos suggest three different weight matrices (weights for the activations, the inputs, and the output layer), but the tutorial above has only two sets of weights (correct me if I'm wrong); I have sketched my reading of the three-matrix version right below.
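
To make the three-matrix version concrete, the recurrence I took away from Ng's videos looks like this (a minimal sketch only; the names Waa, Wax, Wya, the tanh nonlinearity, and the sizes are my own reading, so please correct me if it is wrong):

import numpy as np

n_a, n_x, n_y = 4, 3, 2                   # hypothetical hidden/input/output sizes
Waa = np.random.randn(n_a, n_a) * 0.01    # weights for the previous activation a<t-1>
Wax = np.random.randn(n_a, n_x) * 0.01    # weights for the current input x<t>
Wya = np.random.randn(n_y, n_a) * 0.01    # weights for the output layer

def rnn_step(a_prev, x_t):
    # a<t> = g( Waa . a<t-1> + Wax . x<t> ), biases omitted for brevity
    return np.tanh(Waa @ a_prev + Wax @ x_t)

# Many-to-one: the output is read off only once, after the last time step:
# y_hat = softmax(Wya @ a<T>)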

The naming in my code follows Andrew Ng's RNN pseudocode...

I am reshaping my input samples into 3D (batch_size, n_timesteps, n_dimensions)... Once I have reshaped the samples, I do the forward pass on each sample separately; a quick illustration of the reshape follows.
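
As an illustration of that reshape (splitting 10 features into 5 time steps of 2 dimensions is just an arbitrary choice for this example):

import numpy as np

X = np.random.randint(0, 6, size=(10, 10))   # 10 flat samples with 10 features each
X3d = X.reshape(len(X), 5, 2)                # (batch_size, n_timesteps, n_dimensions)
print(X3d[0].shape)                          # one sample: (n_timesteps, n_dimensions) = (5, 2)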

Here is my code:

import numpy as np

# NOTE: ReLu, softmax and relu_derivative are small helper functions I have
# defined elsewhere in my script.

def RNNCell(X, lr, y=None, n_timesteps=None, n_dimensions=None, return_sequence=None, bias=None):
    '''Simple function to compute the forward and backward passes of a many-to-one
    Recurrent Neural Network model.

    Reshapes X into a 3D array of shape (batch_size, n_timesteps, n_dimensions) and
    then performs the recurrent operations on each sample of the data for n_timesteps.'''

    # If the user has specified a target variable
    if y is not None and len(y) != 0:
        # The number of unique values in the target variable is the dimension of the output layer
        n_unique = len(np.unique(y))
    else:
        # If no target variable is given, the output dimension defaults to 2
        n_unique = 2

    # Weights to multiply with the input samples
    Wx = np.random.uniform(low=0.0, high=0.3, size=(n_dimensions, n_dimensions))

    # Weights to multiply with the resulting activations (output layer)
    Wy = np.random.uniform(low=0.0, high=0.3, size=(n_dimensions, n_timesteps))

    # Weights to multiply with the activations of the previous time steps
    Wa = np.random.randn(n_dimensions, n_dimensions)

    # Dict to hold the activations of each time step
    activations = {'a-0': np.zeros(shape=(n_timesteps - 1, n_dimensions), dtype=float)}

    # List to hold Yhat for each sample
    Yhat = []

    try:
        # Reshape X to align with the shape of the RNN architecture
        X = np.reshape(X, newshape=(len(X), n_timesteps, n_dimensions))
    except ValueError:
        return "Sorry, can't reshape the array into that shape"

    def Forward_Prop(sample):
        # Output at the last time step
        Ot = 0

        # In each time step
        for time_step in range(n_timesteps + 1):
            if time_step < n_timesteps:
                # activation g( Wa.a<t> + X<t>.Wx )
                activations['a-' + str(time_step + 1)] = ReLu(
                    np.dot(activations['a-' + str(time_step)], Wa)
                    + np.dot(sample[time_step, :].reshape(1, n_dimensions), Wx))
            # If it's the last time step, use the softmax activation function
            elif time_step == n_timesteps:
                # Wy.a<t>, which the caller appends to the Yhat list
                Ot = softmax(np.dot(activations['a-' + str(time_step)], Wy))

        # Return the output probabilities
        return Ot

    def Backward_Prop(Yhat):
        # Wy is reassigned below, so it has to be declared nonlocal
        nonlocal Wy

        # List to hold the errors of the output layer
        error = []
        for ind in range(len(Yhat)):
            error.append(y[ind] - Yhat[ind])
        error = np.array(error)

        # Calculating delta for the output layer
        delta_out = error * lr
        # * relu_derivative(activations['a-' + str(n_timesteps)])

        # Calculating the gradient for the output layer
        grad_out = np.dot(delta_out.reshape(len(X), n_timesteps),
                          activations['a-' + str(n_timesteps)])

        # I'm basically stuck at this point

        # Adjusting the weights of the output layer
        Wy = Wy - (lr * grad_out.reshape((n_dimensions, n_timesteps)))

    for sample in X:
        Yhat.append(Forward_Prop(sample))

    Backward_Prop(Yhat)

    return Yhat




# DUMMY INPUT DATA
# (np.random.random_integers is deprecated, so using np.random.randint;
#  high is exclusive, hence high=6 for values in 0..5)
X = np.random.randint(low=0, high=6, size=(10, 10))

# DUMMY LABELS
y = np.array([[0],
              [1],
              [1],
              [1],
              [0],
              [0],
              [1],
              [1],
              [0],
              [1]])
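
And this is roughly how I call it (choosing n_timesteps=5 and n_dimensions=2 so that 5 × 2 matches the 10 features of each dummy sample; lr=0.01 is just a placeholder):

Yhat = RNNCell(X, lr=0.01, y=y, n_timesteps=5, n_dimensions=2)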

I know my BPTT implementation is wrong, but I am not thinking about it clearly and I need an expert's perspective on where exactly I am going wrong. I am not asking for a detailed debugging of my code; I only need a high-level picture of the pseudocode for backpropagation (assuming my forward prop is correct). I think my fundamental problem may also lie in the way I am doing the forward pass on each sample. I have put my current understanding in the sketch below.
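
For what it's worth, the high-level picture of many-to-one BPTT that I am trying to confirm is this (single sample, tanh hidden units, softmax + cross-entropy at the last step; all names and shapes here are my own assumptions, so please point out anything that is off):

import numpy as np

# Hypothetical sizes: n_a hidden units, n_x input features, n_y classes, T time steps
n_a, n_x, n_y, T = 4, 3, 2, 5
rng = np.random.default_rng(0)
Waa = rng.normal(size=(n_a, n_a)) * 0.01   # hidden-to-hidden weights
Wax = rng.normal(size=(n_a, n_x)) * 0.01   # input-to-hidden weights
Wya = rng.normal(size=(n_y, n_a)) * 0.01   # hidden-to-output weights
x = rng.normal(size=(T, n_x, 1))           # one sample: T steps of n_x features
y_true = np.array([[1.0], [0.0]])          # one-hot label for that sample

# Forward pass, caching every hidden state a<t>
a = {0: np.zeros((n_a, 1))}
for t in range(1, T + 1):
    a[t] = np.tanh(Waa @ a[t - 1] + Wax @ x[t - 1])
z = Wya @ a[T]
y_hat = np.exp(z - z.max()); y_hat /= y_hat.sum()   # softmax at the last step only

# Backward pass (BPTT) for softmax + cross-entropy
dz_out = y_hat - y_true                    # gradient at the output logits
dWya = dz_out @ a[T].T                     # output-layer weight gradient
da = Wya.T @ dz_out                        # gradient entering the last hidden state
dWaa, dWax = np.zeros_like(Waa), np.zeros_like(Wax)
for t in range(T, 0, -1):                  # walk backwards through time
    dz = da * (1 - a[t] ** 2)              # through tanh: tanh'(z) = 1 - tanh(z)**2
    dWaa += dz @ a[t - 1].T                # Waa is shared across steps, so accumulate
    dWax += dz @ x[t - 1].T                # same for Wax
    da = Waa.T @ dz                        # pass the gradient one step further back

lr = 0.01
Waa -= lr * dWaa                           # plain gradient-descent updates
Wax -= lr * dWax
Wya -= lr * dWya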

I have been stuck on this problem for three days now, and not being able to think about it clearly is really frustrating. I would be very grateful if anyone could point me in the right direction and clear up my confusion. Thanks in advance for your time, I really appreciate it!

0 Answers:

No answers yet