Feeding data to fitDataset()

Date: 2018-12-10 17:15:18

Tags: tensorflow-datasets tensorflow.js

I'm trying to fit a model using fitDataset(). I can train with the "regular" approach, using a for loop and fetching random batches of data (20,000 data points in total).

I'd like to use fitDataset() and be able to use the entire dataset, without relying on the "randomness" of my getBatch function.

Using the API docs and the examples on tfjs-data I'm getting closer, but I'm stuck on what is probably some silly data manipulation...

Here is how I'm doing it:

const [trainX, trainY] = await bigData
  const model = await cnnLSTM // gru performing well

  const BATCH_SIZE = 32

  const dataSet =  flattenDataset(trainX.slice(200), trainY.slice(200))

  model.compile({
    loss: 'categoricalCrossentropy',
    optimizer: tf.train.adam(0.001),
    metrics: ['accuracy']
  })

  await model.fitDataset(dataSet.train.batch(32), {
    epochs: C.trainSteps,
    validationData: dataSet.validation,
    callbacks: {
      onBatchEnd: async (batch, logs) => (await tf.nextFrame()),
      onEpochEnd: (epoch, logs) => {
        let i = epoch + 1
        lossValues.push({'epoch': i, 'loss': logs.loss, 'val_loss': logs.val_loss, 'set': 'train'})    
        accuracyValues.push({'epoch': i, 'accuracy': logs.acc, 'val_accuracy': logs.val_acc, 'set': 'train'})
        // await md `${await plotLosses(train.lossValues)} ${await plotAccuracy(train.accuracyValues)}`
      }
    }
  })  

And here is my take on the dataset creation:

flattenDataset = (features, labels, split = 0.35) => {
  return tf.tidy(() => {
    let slice = features.length - Math.floor(features.length * split)
    const featuresTrain = features.slice(0, slice)
    const featuresVal = features.slice(slice)

    const labelsTrain = labels.slice(0, slice)
    const labelsVal = labels.slice(slice)

    const data = {
      train: tf.data.array(featuresTrain, labelsTrain),
      validation: tf.data.array(featuresVal, labelsVal)
    }

    return data
  })  
}

This is the error I get:

Error: Dataset iterator for fitDataset() is expected to generate an Array of length 2: `[xs, ys]`, but instead generates Tensor
    [[0.4106583, 0.5408, 0.4885066, 0.9021732, 0.1278526],
     [0.3711334, 0.5141, 0.4848816, 0.9021571, 0.2688071],
     [0.4336613, 0.5747, 0.4822159, 0.9021728, 0.3694479],
     ...,
     [0.4123166, 0.4553, 0.478438 , 0.9020132, 0.8797594],
     [0.3963479, 0.3714, 0.4871198, 0.901996 , 0.7170534],
     [0.4832076, 0.3557, 0.4892016, 0.9019232, 0.9999322]],Tensor
    [[0.3711334, 0.5141, 0.4848816, 0.9021571, 0.2688071],
     [0.4336613, 0.5747, 0.4822159, 0.9021728, 0.3694479],
     [0.4140858, 0.5985, 0.4789927, 0.9022084, 0.1912155],
     ...,

The input data is 6 time steps of 5 dimensions each, and the labels are just one-hot encoded classes: [0,0,1], [0,1,0] and [1,0,0]. I'm guessing flattenDataset() isn't sending the data in the right shape.

Does data.train need to output [6 time steps of 5 dims, label] for every data point? When I tried that, I got this error:

Error: The feature data generated by the dataset lacks the required input key 'conv1d_Conv1D5_input'.

I could really use some expert insight here...

--------------------

Edit #1: I feel like I'm getting close to the answer.

const X = tf.data.array(trainX.slice(0, 100))//.map(x => x)
  const Y = tf.data.array(trainY.slice(0, 100))//.map(x => x)

  const zip = tf.data.zip([X, Y])

  const dataSet = {
    train:  zip
  }

  dataSet.train.forEach(x => console.log(x))

With this, I get the following in the console:

[Array(6), Array(3)]
[Array(6), Array(3)]
[Array(6), Array(3)]
...
[Array(6), Array(3)]
[Array(6), Array(3)]

But fitDataset still gives me: Error: The feature data generated by the dataset lacks the required input key 'conv1d_Conv1D5_input'.

My model looks like this:

const model = tf.sequential()

  model.add(tf.layers.conv1d({
    inputShape: [6, 5],
    kernelSize: (3),
    filters: 64,
    strides: 1,
    padding: 'same',
    activation: 'elu',
    kernelInitializer: 'varianceScaling',
  }))

  model.add(tf.layers.maxPooling1d({poolSize: (2)}))

  model.add(tf.layers.conv1d({
    kernelSize: (1),
    filters: 64,
    strides: 1,
    padding: 'same',
    activation: 'elu'
  }))

  model.add(tf.layers.maxPooling1d({poolSize: (2)}))

  model.add(tf.layers.lstm({
    units: 18,
    activation: 'elu'
  }))  

  model.add(tf.layers.dense({units: 3, activation: 'softmax'}))

  model.compile({
    loss: 'categoricalCrossentropy',
    optimizer: tf.train.adam(0.001),
    metrics: ['accuracy']
  })

  return model

What is going wrong here?

2 Answers:

Answer 0 (score: 2)

What model.fitDataset expects is a Dataset in which each element is a two-element tuple, [feature, label].

So in your case you need to create a featureDataset and a labelDataset, then merge them with tf.data.zip to create the trainDataset. The same goes for the validation dataset.
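
A minimal sketch of that idea, reusing trainX, trainY, BATCH_SIZE, model and C.trainSteps from the question (the 0.35 split and the featureDataset / trainDataset / valDataset names are only illustrative, and depending on how deeply the raw arrays are nested, the reshaping trick from the other answer may still be needed):

const split = 0.35
const cut = trainX.length - Math.floor(trainX.length * split)

// one dataset for the features, one for the labels...
const featureDataset = tf.data.array(trainX.slice(0, cut))
const labelDataset = tf.data.array(trainY.slice(0, cut))

// ...zipped together so every element becomes a [feature, label] pair
const trainDataset = tf.data.zip([featureDataset, labelDataset]).batch(BATCH_SIZE)

// same recipe for the held-out validation slice
const valDataset = tf.data.zip([
  tf.data.array(trainX.slice(cut)),
  tf.data.array(trainY.slice(cut))
]).batch(BATCH_SIZE)

await model.fitDataset(trainDataset, {
  epochs: C.trainSteps,
  validationData: valDataset
})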

Answer 1 (score: 0)

SOLVED

So, after a lot of trial and error, I found a way to make it work.

So, my input shape is [6, 5], meaning an array where each element consists of 6 arrays, each holding 5 floats.

[[[0.3467378, 0.3737, 0.4781905, 0.90665, 0.68142351],
[0.44003019602788285, 0.3106, 0.4864576, 0.90193448, 0.5841830879700972],
[0.30672944860847245, 0.3404, 0.490295674, 0.90720676, 0.8331748581920732],
[0.37475716007758336, 0.265, 0.4847249, 0.902056932, 0.6611207914113887],
[0.5639427928616854, 0.2423002, 0.483168235, 0.9020202294447865, 0.82823],
[0.41581425627336555, 0.4086, 0.4721923, 0.902094287, 0.914699]], ... 20k more]

What I did was flatten the outer array into a flat list of the 5-value rows, then apply .batch(6) to it to rebuild the 6-time-step windows:

const BATCH_SIZE = 20 // batch size fed to the NN

// Flatten [numSamples][6][5] into a stream of 5-value rows, regroup every
// 6 rows into one time-step window, then stack windows into batches.
const X = tf.data.array([].concat(...trainX)).batch(6).batch(BATCH_SIZE)
const Y = tf.data.array(trainY).batch(BATCH_SIZE)
const zip = tf.data.zip([X, Y])

const dataSet = {
  train: zip
}
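
For completeness, a hedged sketch of how this zipped dataset could then be fed to the model, reusing the compile step, C.trainSteps and the tf.nextFrame callback from the question:

await model.fitDataset(dataSet.train, {
  epochs: C.trainSteps,
  callbacks: {
    onBatchEnd: async (batch, logs) => (await tf.nextFrame())
  }
})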

Hope this helps anyone else dealing with complex data!