TensorFlow CNN MNIST example: training accuracy unexpectedly drops from 1 to 0.06 at a large iteration count

Date: 2016-08-26 04:10:34

Tags: tensorflow conv-neural-network gradient-descent mnist

After 26,700 iterations, the training accuracy unexpectedly dropped from 1 to 0.06. The code comes from TensorFlow's online documentation; I only changed the filter size from 5x5 to 3x3, the number of iterations from 20,000 to 100,000, and the batch size from 50 to 100. Can anybody explain this? It may be related to AdamOptimizer, because if I change it to GradientDescentOptimizer, this does not happen even after 56,200 iterations. But I am not sure. GradientDescentOptimizer turns out to have this problem as well.

Python code:
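Per the question, this is the CNN from TensorFlow's "Deep MNIST for Experts" tutorial with the three stated changes (3x3 filters instead of 5x5, 100,000 iterations, batch size 100). A minimal TF 1.x-style sketch of that setup follows; variable names follow the tutorial, and it should be read as a reconstruction, not the asker's exact listing:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])   # one-hot labels
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Two conv/pool layers; filter size changed from 5x5 to 3x3.
W_conv1 = weight_variable([3, 3, 1, 32])
b_conv1 = bias_variable([32])
h_pool1 = max_pool_2x2(tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1))

W_conv2 = weight_variable([3, 3, 32, 64])
b_conv2 = bias_variable([64])
h_pool2 = max_pool_2x2(tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2))

# Fully connected layer with dropout, then the softmax readout.
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# The tutorial's hand-rolled cross-entropy -- the line the answer
# below identifies as the source of the nan.
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100000):                      # was 20000 in the tutorial
        batch = mnist.train.next_batch(100)      # was 50 in the tutorial
        if i % 100 == 0:
            acc, loss = sess.run([accuracy, cross_entropy],
                                 feed_dict={x: batch[0], y_: batch[1],
                                            keep_prob: 1.0})
            print('step %d, training accuracy %g, loss %g' % (i, acc, loss))
        sess.run(train_step, feed_dict={x: batch[0], y_: batch[1],
                                        keep_prob: 0.5})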

step 26400, training accuracy 1, loss 0.00202696
step 26500, training accuracy 1, loss 0.0750173
step 26600, training accuracy 1, loss 0.0790716
step 26700, training accuracy 1, loss 0.0136688
step 26800, training accuracy 0.06, loss nan
step 26900, training accuracy 0.03, loss nan
step 27000, training accuracy 0.12, loss nan
step 27100, training accuracy 0.08, loss nan

1 Answer:

Answer 0 (score: 3)

I actually just ran into this same problem with a CNN I was training: after some amount of optimizing, it would NaN everything out. I believe what is happening is a numerical-stability issue involving the log in the cost function. When the network starts predicting with high confidence (which becomes more likely as the network trains and reaches a lower cost), the y_conv vector will look like y_conv = [1, 0] (ignoring batching). That means log(y_conv) = log([1, 0]) = [0, -inf]. Suppose [1, 0] is also the correct label; then when you compute y_ * tf.log(y_conv), you are really computing [1, 0] * [0, -inf] = [0, nan], because 0 times infinity is undefined. Summing these per-example costs then yields a nan total cost. I think you can fix this by adding a small epsilon inside the log, as in y_ * tf.log(y_conv + 1e-5). I seem to have fixed my own problem by using tf.nn.sparse_softmax_cross_entropy_with_logits(...), which appears to handle the numerical issues internally.
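A minimal sketch of both fixes, assuming the one-hot labels y_ and the pre-softmax logits from the setup above. Note that with one-hot labels the dense tf.nn.softmax_cross_entropy_with_logits is the drop-in form, while the sparse variant the answer used expects integer class indices:

import tensorflow as tf

y_ = tf.placeholder(tf.float32, [None, 10])      # one-hot labels
logits = tf.placeholder(tf.float32, [None, 10])  # pre-softmax layer output
y_conv = tf.nn.softmax(logits)

# Unstable: once the softmax saturates to exact 0/1 values,
# tf.log(0) = -inf, and a label entry of 0 gives 0 * -inf = nan.
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))

# Fix 1: keep the argument of the log strictly positive.
cross_entropy_eps = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y_conv + 1e-5), reduction_indices=[1]))

# Fix 2: let TensorFlow fuse softmax and cross-entropy, which is
# computed in a numerically stable way directly from the logits.
cross_entropy_stable = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))

# The sparse variant from the answer, which takes integer class labels:
cross_entropy_sparse = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=tf.argmax(y_, 1), logits=logits))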