Does shuffling the training data cause the model to perform poorly?

Time: 2019-11-29 13:40:21

Tags: c++ neural-network shuffle mean-square-error function-approximation

I am writing a neural network in C++ to approximate the xSin(x) function using a single hidden layer with 5 hidden neurons. The hidden neurons use tanh activation and the output layer uses linear activation. I trained on 30 training examples for 10,000 epochs.
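The training set is just 30 samples of xSin(x); it could be built with something like the sketch below (a minimal sketch: the sampling range [0, 2π] and the even grid are assumptions, my actual sampling may differ):

#include <cmath>

const int numTrainingSets = 30;
const double PI = 3.14159265358979323846;

double training_inputs[numTrainingSets][1];
double training_outputs[numTrainingSets][1];

// Sample f(x) = x*sin(x) on an evenly spaced grid over [0, 2*pi].
void buildTrainingSet()
{
    for (int i = 0; i < numTrainingSets; i++) {
        double x = 2.0 * PI * i / (numTrainingSets - 1);
        training_inputs[i][0]  = x;
        training_outputs[i][0] = x * sin(x);
    }
}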

Until I shuffled the data, this is what I got: [plot: predictions in red, actual data in green], and the MSE was also close to 0.

But when I shuffled the indices of the training examples (and verified that my shuffle really does shuffle), I got terrible results:

[image: prediction vs. actual data after shuffling]

The "Error vs. Epoch" plot is:

[image: error vs. epoch]

What could be going wrong? Could the shuffling be responsible for this?

Here is the simplified code for reference:

//Shuffle Function 
void shuffle(int *array, size_t n)
{
    if (n > 1) //If no. of training examples > 1
    {
        size_t i;
        for (i = 0; i < n - 1; i++)
        {
            size_t j = i + rand() / (RAND_MAX / (n - i) + 1);
            int t = array[j];
            array[j] = array[i];
            array[i] = t;
        }
    }
}
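// Note: this is a Fisher-Yates shuffle. rand() should be seeded once at
// startup, e.g. srand((unsigned)time(NULL)) with <ctime> included, or every
// run will visit the examples in the same "shuffled" order. (The seeding
// call is an assumption; it is not shown in this code.)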


int main(int argc, const char * argv[])
{
    //Some other actions

    ///FOR INDEX SHUFFLING
    int trainingSetOrder[numTrainingSets];
    for(int j=0; j<numTrainingSets; ++j)
        trainingSetOrder[j] = j;


    ///TRAINING
    //std::cout<<"start train\n";
    vector<double> performance, epo; ///STORE MSE, EPOCH
    for (int n=0; n < epoch; n++)
    {

        shuffle(trainingSetOrder,numTrainingSets);
        for (int i=0; i<numTrainingSets; i++)
        {
            int x = trainingSetOrder[i];
            //cout<<" "<<"("<<training_inputs[x][0]<<","<<training_outputs[x][0] <<")";

            /// Forward pass
            for (int j=0; j<numHiddenNodes; j++)
            {
                double activation=hiddenLayerBias[j];
                //std::cout<<"Training Set :"<<x<<"\n";
                 for (int k=0; k<numInputs; k++) {
                    activation+=training_inputs[x][k]*hiddenWeights[k][j];
                }
                hiddenLayer[j] = tanh(activation);
            }

            for (int j=0; j<numOutputs; j++) {
                double activation=outputLayerBias[j];
                for (int k=0; k<numHiddenNodes; k++)
                {
                    activation+=hiddenLayer[k]*outputWeights[k][j];
                }
                outputLayer[j] = lin(activation);
            }



            /// Backprop
            ///   For V
            double deltaOutput[numOutputs];
            for (int j=0; j<numOutputs; j++) {
                double errorOutput = (training_outputs[i][j]-outputLayer[j]);
                deltaOutput[j] = errorOutput*dlin(outputLayer[j]);
            }

            ///   For W
           //Some Code

            ///Updation
            ///   For V and b
            ///Some Code

            ///   For W and c
            for (int j=0; j<numHiddenNodes; j++) {
                //c
                hiddenLayerBias[j] += deltaHidden[j]*lr;
                //W
                for(int k=0; k<numInputs; k++) {
                  hiddenWeights[k][j]+=training_inputs[i][k]*deltaHidden[j]*lr;
                }
            }
        }
      }


    return 0;
}
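For reference, the lin and dlin helpers called above (but not shown) stand for the linear output activation and its derivative; a minimal sketch of the assumed definitions:

// Assumed definitions: identity activation for the output layer and its derivative.
double lin(double x)  { return x; }
double dlin(double x) { (void)x; return 1.0; } // d/dx of the identity is 1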

2 Answers:

Answer 0 (score: 0)

Your model does not appear to be randomly initialized, because the init_weight function

double init_weight() { return (2*rand()/RAND_MAX -1); }

almost always returns -1, because it performs integer division. An initialization like this can make the model very hard or even impossible to train.

The fix:

double init_weight() { return 2. * rand() / RAND_MAX - 1; }

The 2. above has type double, which triggers promotion of the other integer operands involved in the binary operators to double.


Xavier initialization is a good way to speed up the training.
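For example, a Glorot/Xavier uniform initializer for this network could look like the sketch below; the xavier_uniform helper and the usage lines are illustrative, not the question's actual code:

#include <cstdlib>
#include <cmath>

// Draw from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)),
// i.e. the Glorot/Xavier uniform scheme.
double xavier_uniform(int fan_in, int fan_out)
{
    double limit = sqrt(6.0 / (fan_in + fan_out));
    double u = 2. * rand() / RAND_MAX - 1.;  // uniform in [-1, 1]; note the 2.
    return u * limit;
}

// For the 1-5-1 network from the question:
//   hiddenWeights[k][j] = xavier_uniform(numInputs, numHiddenNodes);
//   outputWeights[k][j] = xavier_uniform(numHiddenNodes, numOutputs);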

Answer 1 (score: 0)

I found 2 (silly!) mistakes in the training part: the forward pass uses the shuffled index x, but the error term and the hidden-weight update use the raw loop index i, so they read a different training example than the one that was fed forward. Without shuffling, x == i, which is why the bug never showed up before.

1)

/// Backprop
///   For V
double deltaOutput[numOutputs];
for (int j=0; j<numOutputs; j++) {
    double errorOutput = (training_outputs[i][j]-outputLayer[j]);
    deltaOutput[j] = errorOutput*dlin(outputLayer[j]);
}

should be

/// Backprop
///   For V
double deltaOutput[numOutputs];
for (int j=0; j<numOutputs; j++) {
    double errorOutput = (training_outputs[x][j]-outputLayer[j]);
    deltaOutput[j] = errorOutput*dlin(outputLayer[j]);
}

2)

///   For W and c
for (int j=0; j<numHiddenNodes; j++) {
    //c
    hiddenLayerBias[j] += deltaHidden[j]*lr;
    //W
    for(int k=0; k<numInputs; k++) {
        hiddenWeights[k][j]+=training_inputs[i][k]*deltaHidden[j]*lr;
    }
}

should be

///   For W and c
for (int j=0; j<numHiddenNodes; j++) {
    //c
    hiddenLayerBias[j] += deltaHidden[j]*lr;
    //W
    for(int k=0; k<numInputs; k++) {
        hiddenWeights[k][j]+=training_inputs[x][k]*deltaHidden[j]*lr;
    }
}

After making these fixes, here is what I get: [image: corrected fit] and [image: error vs. epoch after the fix]