Deeplearning4j LSTM example

Date: 2016-05-16 00:19:40

Tags: neural-network deep-learning lstm deeplearning4j

I am trying to understand LSTMs in Deeplearning4j. I am looking at the source code of the example, but I cannot understand this part:

    //Allocate space:
    //Note the order here:
    // dimension 0 = number of examples in minibatch
    // dimension 1 = size of each vector (i.e., number of characters)
    // dimension 2 = length of each time series/example
    INDArray input = Nd4j.zeros(currMinibatchSize, validCharacters.length, exampleLength);
    INDArray labels = Nd4j.zeros(currMinibatchSize, validCharacters.length, exampleLength);

Why do we store a 3D array here, and what does it mean?

1 Answer:

Answer 0 (score: 1)

Good question, but it has less to do with how an LSTM works than with the task itself. The task here is to predict what the next character will be. Predicting the next character has two sides to it: classification and approximation. If we were only doing approximation, a one-dimensional array would be enough. But because we are doing classification as well as approximation, we cannot simply feed the network a normalized ASCII value for each character. Instead, we need to convert every character into an array of its own.

For example, 'a' (lowercase) would be represented this way:

1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

'b' (lowercase) would be represented as:

0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

'c' would be represented as:

0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

and 'Z' (capital Z!) would be represented as:

0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
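To make the one-hot idea concrete, here is a minimal Java sketch of how such a vector could be built. This is not code from the DL4J example: the helper name `toOneHot` is my own, and `validCharacters` is assumed to be the same array as in the question's snippet.

    // Hypothetical helper: turn a character into a one-hot float vector.
    // The vector has one slot per entry in validCharacters, with a single 1
    // at the character's index and zeros everywhere else.
    static float[] toOneHot(char c, char[] validCharacters) {
        float[] vec = new float[validCharacters.length];
        for (int i = 0; i < validCharacters.length; i++) {
            if (validCharacters[i] == c) {
                vec[i] = 1.0f;
                break;
            }
        }
        return vec;
    }

In this sketch a character that does not appear in `validCharacters` would simply map to an all-zero vector; the real example filters its input text down to valid characters before encoding.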

So every character gives us a one-dimensional (one-hot) array, and a whole example, a sequence of characters over time, becomes two-dimensional. How do all of these dimensions get put together? The comments in the code give the following explanation:

    // dimension 0 = number of examples in minibatch
    // dimension 1 = size of each vector (i.e., number of characters)
    // dimension 2 = length of each time series/example
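Putting those three dimensions together, a rough sketch of how the two INDArrays might be filled for one minibatch looks like the following. This is only an approximation of what the example's character iterator does, not a verbatim copy: `sequences` and `charToIdx` are assumed helpers, with each sequence holding exampleLength + 1 characters so that a label exists for the last time step.

    // Fill the pre-allocated 3D arrays: one one-hot entry per example, per time step.
    for (int miniBatchIdx = 0; miniBatchIdx < currMinibatchSize; miniBatchIdx++) {
        for (int t = 0; t < exampleLength; t++) {
            int currCharIdx = charToIdx(sequences[miniBatchIdx][t]);      // character seen at time t
            int nextCharIdx = charToIdx(sequences[miniBatchIdx][t + 1]);  // character to be predicted
            // dimension 0 = example in minibatch, dimension 1 = one-hot index, dimension 2 = time step
            input.putScalar(new int[]{miniBatchIdx, currCharIdx, t}, 1.0);
            labels.putScalar(new int[]{miniBatchIdx, nextCharIdx, t}, 1.0);
        }
    }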

I sincerely applaud your effort to understand how an LSTM works, but the code you are pointing to is an example that applies to all kinds of neural networks: it shows how text data is prepared for a neural network, not how an LSTM works. For that, you need to look at a different part of the source code.