Very high loss in TensorFlow.js

Asked: 2020-03-26 14:56:38

Tags: angular machine-learning deep-learning tensorflow.js loss

I am following this tutorial, Time Series Forecasting with TensorFlow.js; I am new to data science and TensorFlow.js.

I think I have made a mistake somewhere, but I can't find where. I am training the model and the loss is very large, going from 651878.125 down to 412121. I tried increasing the number of epochs; the loss keeps decreasing but is still very high (around 125000.000). Is that normal? What should I change to get a loss close to 0.0001?

My variables are: WINDOW_SIZE: number = 50, TRAINING_SIZE: number = 0.85; // = 85%, EPOCHS: number = 25, LEARNING_RATE: number = 0.01, N_LAYERS: number = 4.

Prices range from $3,800 to $9,500, and I have between 1,000 and 2,000 records, stored in an array of {date, value} objects named stockPrices: Array<{date, value}> ({date, value} is a class named "Data").

Thanks for your help.

My SMA function:

computeSMA(stockPrices: Array<Data>, window_size: number): Promise<Array<any>> {
    let r_avgs = [], prices = [];
    //Get only the prices, without the dates:
    for (let i = 0; i < stockPrices.length; i++) {
      prices.push(stockPrices[i].value);
    }
    //SMA calculation over each window of window_size prices:
    for (let i = 0; i <= prices.length - window_size; i++) {
      let curr_avg = 0, t = i + window_size;
      for (let k = i; k < t; k++) {
        curr_avg += prices[k];
      }
      curr_avg /= window_size;
      r_avgs.push({
        set: prices.slice(i, i + window_size),
        avg: curr_avg
      });
    }
    return new Promise((resolve, reject) => {
      if (r_avgs.length > 0) {
        resolve(r_avgs);
      } else {
        reject(new Error('Not enough prices for a single window'));
      }
    });
  }

The call to train the model:

  //After calling computeSMA, I separate inputs (arrays of window_size values) from outputs (SMA) :
  for (let i = 0; i < smaValues.length; i++) {
    this.inputs.push(smaValues[i]['set']);
    this.outputs.push(smaValues[i]['avg']);
    console.log(smaValues[i]['avg']);
  }
  //inputsToTrain and outputsToTrain = the first this.TRAINING_SIZE share of the data, for training the model.
  this.inputsToTrain = this.inputs.slice(0, Math.floor(this.TRAINING_SIZE * this.inputs.length));
  this.outputsToTrain = this.outputs.slice(0, Math.floor(this.TRAINING_SIZE * this.outputs.length));
  //inputsToValidate and outputsToValidate = the remaining (1 - this.TRAINING_SIZE) share of the data, for validating the model.
  this.inputsToValidate = this.inputs.slice(Math.floor(this.TRAINING_SIZE * this.inputs.length), this.inputs.length);
  this.outputsToValidate = this.outputs.slice(Math.floor(this.TRAINING_SIZE * this.outputs.length), this.outputs.length);
  //Train the model.
  this.trainModel(this.inputsToTrain, this.outputsToTrain, this.WINDOW_SIZE, this.EPOCHS, this.LEARNING_RATE, this.N_LAYERS, (epoch, log) => {
    console.log(`EPOCH ${epoch + 1} / ${this.EPOCHS} ; Loss = ${log.loss}`);
  });

The trainModel function:

async trainModel(X, Y, window_size, n_epochs, learning_rate, n_layers, callback){

    //Trains the model.

    const input_layer_shape  = window_size;
    const input_layer_neurons = 100;

    const rnn_input_layer_features = 10;
    const rnn_input_layer_timesteps = input_layer_neurons / rnn_input_layer_features;

    const rnn_input_shape  = [rnn_input_layer_features, rnn_input_layer_timesteps];
    const rnn_output_neurons = 20;

    const rnn_batch_size = window_size;

    const output_layer_shape = rnn_output_neurons;
    const output_layer_neurons = 1;

    const model = tf.sequential();

    const xs = tf.tensor2d(X, [X.length, X[0].length]).div(tf.scalar(10));
    const ys = tf.tensor2d(Y, [Y.length, 1]).div(tf.scalar(10));

    model.add(tf.layers.dense({units: input_layer_neurons, inputShape: [input_layer_shape]}));
    model.add(tf.layers.reshape({targetShape: rnn_input_shape}));

    let lstm_cells = [];
    for (let index = 0; index < n_layers; index++) {
        lstm_cells.push(tf.layers.lstmCell({units: rnn_output_neurons}));
    }

    model.add(tf.layers.rnn({
      cell: lstm_cells,
      inputShape: rnn_input_shape,
      returnSequences: false
    }));

    model.add(tf.layers.dense({units: output_layer_neurons, inputShape: [output_layer_shape]}));

    model.compile({
      optimizer: tf.train.adam(learning_rate),
      loss: 'meanSquaredError'
    });

    const hist = await model.fit(xs, ys,
      { batchSize: rnn_batch_size, epochs: n_epochs, callbacks: {
        onEpochEnd: async (epoch, log) => {
          callback(epoch, log);
        }
      }
    });

    return { model: model, stats: hist };
  }
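For what it's worth, the shape arithmetic in trainModel can be checked by hand: the first dense layer emits 100 units per sample, and the reshape layer folds them into a [10, 10] tensor (rnn_input_layer_features × rnn_input_layer_timesteps) feeding the LSTM stack. The reshape is only valid because the element counts match:

```typescript
// Mirror of the shape constants used in trainModel.
const inputLayerNeurons = 100;
const rnnInputLayerFeatures = 10;
const rnnInputLayerTimesteps = inputLayerNeurons / rnnInputLayerFeatures;
const rnnInputShape = [rnnInputLayerFeatures, rnnInputLayerTimesteps];

// A reshape preserves the total number of elements.
const elementsIn = inputLayerNeurons;
const elementsOut = rnnInputShape[0] * rnnInputShape[1];
console.log(rnnInputShape, elementsIn === elementsOut); // [ 10, 10 ] true
```

If you change input_layer_neurons or rnn_input_layer_features, this product constraint is the first thing that will break at runtime.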

0 Answers:

No answers