New shape and old shape must have the same number of elements

Asked: 2018-10-09 21:16:05

Tags: javascript tensorflow machine-learning classification tensorflow.js

For learning purposes I am using Tensorflow.js, and I ran into an error while trying to use the fit method with a batched dataset (10 by 10) to understand how batching works.

I have some 600x600x3 images that I want to classify into 2 outputs, 1 or 0.

Here is my training loop:

  const batches = await loadDataset()

  for (let i = 0; i < batches.length; i++) {
    const batch = batches[i]
    const xs = batch.xs.reshape([batch.size, 600, 600, 3])
    const ys = tf.oneHot(batch.ys, 2)

    console.log({
      xs: xs.shape,
      ys: ys.shape,
    })
    // { xs: [ 10, 600, 600, 3 ], ys: [ 10, 2 ] }

    const history = await model.fit(
      xs, ys,
      {
        batchSize: batch.size,
        epochs: 1
      }) // <----- The code throws here

    const loss = history.history.loss[0]
    const accuracy = history.history.acc[0]

    console.log({ loss, accuracy })
  }

And here is how I define the dataset:

const chunks = chunk(examples, BATCH_SIZE)

const batches = chunks.map(
  batch => {
    const ys = tf.tensor1d(batch.map(e => e.y), 'int32')
    const xs = batch
      .map(e => imageToInput(e.x, 3))
      .reduce((p, c) => p ? p.concat(c) : c)
    return { size: batch.length, xs , ys }
  }
)

And this is the model:

const model = tf.sequential()
model.add(tf.layers.conv2d({
  inputShape: [600, 600, 3],
  kernelSize: 60,
  filters: 50,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.conv2d({
  kernelSize: 5,
  filters: 100,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))

model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.flatten())
model.add(tf.layers.dense({
  units: 2,
  kernelInitializer: 'VarianceScaling',
  activation: 'softmax'
}))

During the first iteration of the for loop, I get the following error from .fit:

Error: new shape and old shape must have the same number of elements.
    at Object.assert (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/util.js:36:15)
    at reshape_ (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/array_ops.js:271:10)
    at Object.reshape (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/operation.js:23:29)
    at Tensor.reshape (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tensor.js:273:26)
    at Object.derB [as $b] (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/binary_ops.js:32:24)
    at _loop_1 (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tape.js:90:47)
    at Object.backpropagateGradients (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tape.js:108:9)
    at /Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:334:20
    at /Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:91:22
    at Engine.scopedRun (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:101:23)

I don't know what to make of it, and I couldn't find any documentation or help on that particular error. Any idea?

1 Answer:

Answer 0 (score: 2)

The issue with the model lies in the way the convolution is applied together with the maxPooling.

The first layer performs a convolution with kernelSize 60, strides [20, 20] and 50 filters. The output of this layer will have the approximate shape [600 / 20, 600 / 20, 50] = [30, 30, 50].

The maxPooling is then applied with poolSize [20, 20] and strides [20, 20]. The output of this layer will also have the approximate shape [30 / 20, 30 / 20, 50] = [1, 1, 50].

From this step on, the model can no longer perform a convolution with kernelSize 5, because the kernel shape [5, 5] is bigger than the input shape [1, 1], which causes the error to be thrown. The only convolution the model can still perform is one with a kernel of size 1. Obviously, that convolution outputs the input without any transformation.

The same rule applies to the last maxPooling, whose poolSize cannot be anything other than 1, otherwise an error will be thrown.
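
To make the shape arithmetic concrete, here is a minimal sketch (not part of the original answer) that computes the per-dimension output sizes with the exact formula for the default 'valid' padding, floor((input - kernel) / stride) + 1; the helper name is made up for illustration, and it gives 28 rather than the approximate 30 used above:

// Output size of one spatial dimension for a conv/pool layer with 'valid' padding (assumed helper)
const validOutputSize = (inputSize, kernelSize, stride) =>
  Math.floor((inputSize - kernelSize) / stride) + 1

const afterConv1 = validOutputSize(600, 60, 20)         // 28 -> output ~[28, 28, 50]
const afterPool1 = validOutputSize(afterConv1, 20, 20)  // 1  -> output [1, 1, 50]
const afterConv2 = validOutputSize(afterPool1, 5, 20)   // 0  -> a 5x5 kernel no longer fits, hence the error

console.log({ afterConv1, afterPool1, afterConv2 })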

Here is a snippet:

const model = tf.sequential()
model.add(tf.layers.conv2d({
  inputShape: [600, 600, 3],
  kernelSize: 60,
  filters: 50,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.conv2d({
  kernelSize: 1,
  filters: 100,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))

model.add(tf.layers.maxPooling2d({
  poolSize: 1,
  strides: [20, 20]
}))
model.add(tf.layers.flatten())
model.add(tf.layers.dense({
  units: 2,
  kernelInitializer: 'VarianceScaling',
  activation: 'softmax'
}))

model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
model.fit(tf.ones([10, 600, 600, 3]), tf.ones([10, 2]), {batchSize: 4});

model.predict(tf.ones([1, 600, 600, 3])).print()
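
As a sanity check, printing the model summary (assuming a tfjs version that provides model.summary()) shows the output shape of every layer, which makes this kind of geometry mismatch easy to spot before calling fit:

model.summary() // logs each layer's output shape, e.g. [null, 28, 28, 50] after the first conv2d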