Strange ConvNet prediction output

Date: 2017-06-24 17:13:59

Tags: python numpy neural-network

I built a ConvNet (based on Andrej Karpathy's cs231n material) and have run into a strange problem. The training process went through fine and my validation accuracy reached 99%. I have tested the network on batches of images several times and the results were good.
(see picture here) My problem appears when I try to predict a single digit. When I pass a batch of images to predict, the predictions are accurate, but if I pass it a single image (containing one digit), the prediction is bizarrely wrong! It almost always comes out as '9' and I don't know why! I also tried shrinking the prediction batch, and I noticed that the predictions go wrong once the batch size is 3 or smaller...

For example, for this image (see here), predicting all the digits at once gives this result:

 y=solver.predict(x.reshape(-1,1,28,28))

Output:


[1 2 3 4 5 6 8 7 9 8 8 8 8 8 0 0 6 6 6 7 7 7 7 7]

Here is the output of the loss function (the class scores):

In [2]: solver.model.loss(x.reshape(-1,1,28,28))


Out[2]: array([[ -3.05978141e-01,   2.25838572e+01,  -5.81781553e+00,
         -7.84227452e+00,  -2.46782551e+00,  -1.29618637e+00,
         -2.95035720e+00,  -2.26347248e+00,  -5.52651288e+00,
         -8.74067704e+00],

   [  2.18639209e+00,  -7.95115414e-01,   1.86981992e+01,
     -6.38840481e+00,  -7.44371465e+00,  -1.14834364e+01,
     -4.42105065e+00,  -2.51756685e+00,  -6.03314564e+00,
     -5.00079953e+00],
   [ -1.16809867e+01,  -3.90351457e+00,  -4.81452891e+00,
      1.72001078e+01,  -2.30417016e+00,   6.21031225e+00,
     -6.60536260e+00,  -3.35170447e+00,  -1.19414808e+01,
     -2.13268100e+00],
   [ -6.51288912e+00,  -1.27769600e-01,  -2.12484109e+00,
     -8.07526307e+00,   1.82578221e+01,  -8.02727136e+00,
     -3.95705307e+00,   1.50088109e+00,  -6.49268179e+00,
     -2.64582758e+00],
   [ -7.10982458e+00,  -1.23677393e+00,  -6.55376129e+00,
     -5.64977668e-01,  -1.66012764e+00,   1.39098500e+01,
     -1.82974320e+00,  -6.67535891e-02,  -6.25050582e+00,
      5.19481684e-01],
   [  1.97013317e+00,  -3.92168803e+00,  -2.50538345e+00,
     -4.53388791e+00,  -6.02692771e+00,   4.28872679e+00,
      9.32534533e+00,  -7.12121385e+00,  -1.27127814e+00,
      1.16859809e+00],
   [ -5.00874596e+00,  -8.22430042e+00,   2.82489907e+00,
      4.40183597e+00,  -1.18154837e+00,  -6.85617478e+00,
     -2.04355341e-02,  -3.68542346e+00,   8.40770795e+00,
     -3.89586477e+00],
   [ -3.76888300e+00,   4.36662753e+00,  -1.06721865e+00,
     -5.43658272e-01,  -5.73712938e+00,  -5.08607578e+00,
     -6.80629281e+00,   1.03611542e+01,  -6.12515023e+00,
     -2.67285495e+00],
   [ -5.45556785e+00,  -1.00982438e+01,  -1.86838461e+00,
     -6.25084617e+00,   2.61021988e+00,  -6.15225244e+00,
     -8.41166503e+00,  -4.13774685e+00,   7.20263813e-01,
      2.01130722e+01],
   [ -5.25623049e+00,  -6.90369741e+00,   3.26657435e+00,
      4.80984753e+00,  -7.93036997e+00,  -4.10551415e+00,
     -3.98611960e+00,  -5.98433243e+00,   1.67853904e+01,
     -6.04416010e+00],
   [ -3.73519354e+00,  -4.69000763e+00,   7.11005865e-01,
     -7.79817234e-01,   4.95981297e-01,  -2.20233790e+00,
     -2.32379619e+00,  -7.31904658e+00,   1.01815587e+01,
     -2.92697528e+00],
   [ -3.70550605e+00,  -6.27884772e+00,  -1.15562031e+00,
     -7.79128777e-01,  -4.54130713e+00,  -8.82680476e-01,
     -5.15643101e+00,  -6.08414457e+00,   1.61823980e+01,
     -3.31924674e+00],
   [ -3.21343014e+00,  -5.50310651e+00,  -6.46784579e+00,
     -4.71090597e-01,   2.30565155e+00,   1.71298558e+00,
     -3.80354454e+00,  -5.77908110e+00,   7.99886231e+00,
     -1.53263277e+00],
   [ -5.02752362e+00,  -6.41713821e+00,  -2.75882859e+00,
      2.02198061e+00,  -7.31649294e-01,  -4.09780260e+00,
     -2.87254341e+00,  -4.76868410e+00,   1.14782529e+01,
     -1.32010697e+00],
   [  6.58318513e+00,  -1.59361387e+00,  -4.07994824e+00,
     -5.69145251e+00,  -3.74416814e+00,   4.06304645e+00,
      6.06811801e+00,  -6.00753335e+00,  -3.02293776e+00,
     -3.07097095e+00],
   [  7.63995824e+00,  -4.60587221e+00,  -2.12479379e+00,
     -8.23367696e+00,  -2.25116385e+00,   5.15873864e+00,
      4.75356097e+00,  -5.97345978e+00,  -5.97171695e+00,
      2.29368218e+00],
   [  2.84535499e+00,  -2.04897197e+00,  -1.90798644e+00,
     -5.22885750e+00,  -7.08722417e-01,   1.73670140e+00,
      1.11489922e+01,  -7.04308564e+00,  -3.25250603e+00,
     -4.53417172e+00],
   [  2.03252275e+00,  -5.40529310e+00,  -3.53667596e+00,
     -1.73374097e+00,  -1.55853211e+00,   6.61614808e+00,
      1.03727007e+01,  -8.07445330e+00,  -3.62687170e+00,
     -7.68923877e+00],
   [  8.60866851e-01,  -2.61086735e+00,  -3.97864755e+00,
     -3.46964219e+00,  -1.53576350e+00,   4.34321103e-01,
      1.25051786e+01,  -6.28857105e+00,  -2.85388750e-02,
     -5.99032711e+00],
   [ -7.11119637e+00,   1.32192098e+00,  -2.85581429e-01,
     -4.09789623e+00,  -2.73542444e+00,  -5.05718720e+00,
     -5.84672064e+00,   1.10982186e+01,  -5.43396446e+00,
      1.38815026e+00],
   [ -4.45070011e+00,   6.63612661e+00,  -2.38272164e+00,
     -5.08418780e+00,  -4.23138974e+00,  -4.84060683e+00,
     -6.11011890e+00,   7.69417553e+00,  -4.36085869e+00,
      1.77336902e+00],
   [ -3.19741107e+00,   7.66654742e-01,   9.01185936e-01,
     -3.57527153e+00,  -5.35865469e+00,  -4.38987324e+00,
     -5.46174960e+00,   6.97829754e+00,  -2.74923129e+00,
      3.29389455e+00],
   [ -6.11877337e+00,   4.01989815e+00,  -9.16340782e-01,
     -1.81711754e+00,  -5.84678589e+00,  -4.57215096e+00,
     -5.60624601e+00,   1.10141198e+01,  -5.38853792e+00,
      2.88915005e-01],
   [ -5.12862280e+00,   3.51793625e+00,  -2.56816990e+00,
     -2.43665530e+00,  -2.69804915e+00,  -6.44862518e+00,
     -5.20372901e+00,   1.08616926e+01,  -6.62861963e+00,
      8.75721537e-02]])

Here are the results as I start reducing the size of the prediction batch:

Note that loss(x[0]) (the digit '1') is different from the first row of loss(x) (which corresponds to the same digit '1'), even though they should be exactly the same values!

The loss(x) over the whole batch is correct... You can see that in the first row, for example, the maximum value is in column 1, so the predicted digit is '1', and so on for the remaining rows, apart from a few ordinary misclassifications. This is not a coincidence; I have tried it on many examples.
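In other words, the predicted digit is simply the column index of the largest score in each row. A minimal sketch of that check (assuming, as the outputs above suggest, that solver.model.loss returns the raw class scores when no labels are passed):

import numpy as np

# The predicted digit for each sample is the column with the largest score.
scores = solver.model.loss(x.reshape(-1, 1, 28, 28))
print(np.argmax(scores, axis=1))  # should match solver.predict(x.reshape(-1, 1, 28, 28))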

In [7]: y=solver.predict(x[0:8].reshape(-1,1,28,28))
[1 2 3 4 5 6 8 7]

In [8]: y=solver.predict(x[0:4].reshape(-1,1,28,28))
[1 2 3 4]

In [9]: y=solver.predict(x[0:3].reshape(-1,1,28,28))
[1 2 5]

In [10]: y=solver.predict(x[0:2].reshape(-1,1,28,28))
[5 2]

In [11]: y=solver.predict(x[0:1].reshape(-1,1,28,28))
[9]

In [12]: solver.model.loss(x[0:1].reshape(-1,1,28,28))
Out[12]:
array([[-0.18494676, -0.09562021, -0.0050496 , -0.09319004, -0.01837853,
        -0.14772171, -0.11772445, -0.1030173 , -0.00983804,  0.00842318]])
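Here is a small sketch that makes the inconsistency explicit: it compares every sample's scores when that sample is evaluated alone against its row in the full-batch scores (again assuming solver.model.loss returns the raw class scores):

import numpy as np

# If the forward pass were truly per-sample, each row of the full-batch scores
# would match the scores obtained by passing that sample on its own.
batch_scores = solver.model.loss(x.reshape(-1, 1, 28, 28))
for i in range(len(batch_scores)):
    single_scores = solver.model.loss(x[i:i + 1].reshape(-1, 1, 28, 28))
    if not np.allclose(batch_scores[i], single_scores[0], atol=1e-3):
        print("sample %d: batch argmax=%d, single argmax=%d"
              % (i, batch_scores[i].argmax(), single_scores[0].argmax()))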

The main program:

import time
import numpy as np
import matplotlib.pyplot as plt
from segm import segment
from cs231n.classifiers.fc_net import *
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_data
from cs231n.solver1 import Solver
from cs231n.fast_layers import *

plt.rcParams['figure.figsize'] = (10, 10) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

model = ConvNet(input_dim=(1, 28, 28),weight_scale=1e-2,reg=0)

solver = Solver()

# load the saved, trained ConvNet
solver.load_model('best1906.pkl')

# extract the digits from the loaded images
lstChar = segment()

x = np.asarray(lstChar)
# x needs to be preprocessed here before making the forward pass:
y = solver.predict(x.reshape(-1,1,28,28))

You can find the original code here: github.com/cthorey/CS231/tree/master/assignment2/

Mine is not exactly the same: I added a 'segment' module that extracts the digits from an image, plus the predict and load_model functions.

I hope my explanation is clear enough.

Many thanks in advance.

1 Answer:

Answer 0: (score: 0)

OK... I figured it out. It was a bad implementation of the batch normalization layer. So, for anyone who runs into this kind of problem, make sure that the train mode and the test mode of batch normalization are implemented properly:

  • During training, keep track of the running mean and running variance.

  • In test mode, compute the output as follows:

out = (x - running_mean) / sqrt(running_var + epsilon)
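Below is a minimal sketch of a batch-normalization forward pass that handles the two modes this way, in the spirit of the cs231n batchnorm_forward (the cache needed for the backward pass is omitted, and the exact bn_param keys are assumptions based on that assignment):

import numpy as np

def batchnorm_forward(x, gamma, beta, bn_param):
    """Batch normalization forward pass with distinct train / test behaviour."""
    mode = bn_param['mode']                      # 'train' or 'test'
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)

    _, D = x.shape
    running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get('running_var', np.ones(D, dtype=x.dtype))

    if mode == 'train':
        # Normalize with the statistics of the current mini-batch...
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + eps)
        # ...and keep exponential running averages for test time.
        running_mean = momentum * running_mean + (1 - momentum) * mu
        running_var = momentum * running_var + (1 - momentum) * var
    elif mode == 'test':
        # Use the stored running statistics: the output for a sample no longer
        # depends on which other samples happen to be in the same batch.
        x_hat = (x - running_mean) / np.sqrt(running_var + eps)
    else:
        raise ValueError('Invalid batchnorm forward mode "%s"' % mode)

    out = gamma * x_hat + beta

    # Store the updated running statistics back into bn_param.
    bn_param['running_mean'] = running_mean
    bn_param['running_var'] = running_var
    return out

If the test branch mistakenly keeps using the batch statistics, a single image gets normalized by its own mean and variance, which produces exactly the kind of batch-size-dependent predictions shown above.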