TFLearn: poor classifier quality?

Date: 2017-10-20 18:23:08

Tags: machine-learning tensorflow classification tflearn

I am new to machine learning and am trying TFLearn because it is simple.

I am trying to build a basic classifier for a problem that interests me. My goal is to train the system to predict the direction from one point to another.

For example, if I provide the two 2D coordinates (50,50) and (51,51), the system must predict that the direction is NE (north-east). If I provide (50,50) and (49,49), the system must predict that the direction is SW (south-west).

Input: X1, Y1, X2, Y2, label
Output: 0 to 8, for the 8 directions.
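
For illustration, here is a minimal sketch of how rows in this format could be generated (the label numbering and the file name directions.csv are assumptions made for this example; the 40-60 coordinate range matches the restriction mentioned further down):

import numpy as np

#Hypothetical mapping from the sign of (x2 - x1, y2 - y1) to a label 0..7
DIRS = {(0, 1): 0, (1, 1): 1, (1, 0): 2, (1, -1): 3,
        (0, -1): 4, (-1, -1): 5, (-1, 0): 6, (-1, 1): 7}

rows = []
while len(rows) < 1000:
    x1, y1, x2, y2 = np.random.randint(40, 61, size=4)
    dx, dy = int(np.sign(x2 - x1)), int(np.sign(y2 - y1))
    if (dx, dy) == (0, 0):      #identical points have no direction; skip them
        continue
    rows.append((x1, y1, x2, y2, DIRS[(dx, dy)]))

np.savetxt('directions.csv', rows, fmt='%d', delimiter=',')    #rows like 50,50,51,51,...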

So here is the small piece of code I wrote:

from __future__ import print_function
import numpy as np
import tflearn
import tensorflow as tf
import time
from tflearn.data_utils import load_csv

#Sample input 50,50,51,51,5
filename = 'directions.csv'     #path to the CSV of training samples (placeholder name)
data, labels = load_csv(filename, target_column=4,
                        categorical_labels=True, n_classes=8)

my_optimizer = tflearn.SGD(learning_rate=0.1)
net = tflearn.input_data(shape=[None, 4])
net = tflearn.fully_connected(net, 32) #input 4, output 32
net = tflearn.fully_connected(net, 32) #input 32, output 32
net = tflearn.fully_connected(net, 8, activation='softmax')
net = tflearn.regression(net,optimizer=my_optimizer)

model = tflearn.DNN(net)

model.fit(data, labels, n_epoch=100, batch_size=100000, show_metric=True)

model.save("direction-classifier.tfl")

The problem I am facing is that even after passing in roughly 40 million input samples, the system's accuracy is as low as 20%.

I restricted the inputs to 40 < x < 60 and 40 < y < 60.

I cannot tell whether I am overfitting the samples, because over the entire training run on the full 40 million inputs the accuracy never went any higher than this.

Why is the accuracy so low for such a simple example?

Edit: I lowered the learning rate and made the batch size smaller. However, the results are still the same and the accuracy is poor. I have included the output of the first 25 epochs, followed by a sketch of the kind of change described here.

--
Training Step: 100000  | total loss: 6.33983 | time: 163.327s
| SGD | epoch: 001 | loss: 6.33983 - acc: 0.0663 -- iter: 999999/999999
--
Training Step: 200000  | total loss: 6.84055 | time: 161.981ss
| SGD | epoch: 002 | loss: 6.84055 - acc: 0.1568 -- iter: 999999/999999
--
Training Step: 300000  | total loss: 5.90203 | time: 158.853ss
| SGD | epoch: 003 | loss: 5.90203 - acc: 0.1426 -- iter: 999999/999999
--
Training Step: 400000  | total loss: 5.97782 | time: 157.607ss
| SGD | epoch: 004 | loss: 5.97782 - acc: 0.1465 -- iter: 999999/999999
--
Training Step: 500000  | total loss: 5.97215 | time: 155.929ss
| SGD | epoch: 005 | loss: 5.97215 - acc: 0.1234 -- iter: 999999/999999
--
Training Step: 600000  | total loss: 6.86967 | time: 157.299ss
| SGD | epoch: 006 | loss: 6.86967 - acc: 0.1230 -- iter: 999999/999999
--
Training Step: 700000  | total loss: 6.10330 | time: 158.137ss
| SGD | epoch: 007 | loss: 6.10330 - acc: 0.1242 -- iter: 999999/999999
--
Training Step: 800000  | total loss: 5.81901 | time: 157.464ss
| SGD | epoch: 008 | loss: 5.81901 - acc: 0.1464 -- iter: 999999/999999
--
Training Step: 900000  | total loss: 7.09744 | time: 157.486ss
| SGD | epoch: 009 | loss: 7.09744 - acc: 0.1359 -- iter: 999999/999999
--
Training Step: 1000000  | total loss: 7.19259 | time: 158.369s
| SGD | epoch: 010 | loss: 7.19259 - acc: 0.1248 -- iter: 999999/999999
--
Training Step: 1100000  | total loss: 5.60177 | time: 157.221ss
| SGD | epoch: 011 | loss: 5.60177 - acc: 0.1378 -- iter: 999999/999999
--
Training Step: 1200000  | total loss: 7.16676 | time: 158.607ss
| SGD | epoch: 012 | loss: 7.16676 - acc: 0.1210 -- iter: 999999/999999
--
Training Step: 1300000  | total loss: 6.19163 | time: 163.711ss
| SGD | epoch: 013 | loss: 6.19163 - acc: 0.1635 -- iter: 999999/999999
--
Training Step: 1400000  | total loss: 7.46101 | time: 162.091ss
| SGD | epoch: 014 | loss: 7.46101 - acc: 0.1216 -- iter: 999999/999999
--
Training Step: 1500000  | total loss: 7.78055 | time: 158.468ss
| SGD | epoch: 015 | loss: 7.78055 - acc: 0.1122 -- iter: 999999/999999
--
Training Step: 1600000  | total loss: 6.03101 | time: 158.251ss
| SGD | epoch: 016 | loss: 6.03101 - acc: 0.1103 -- iter: 999999/999999
--
Training Step: 1700000  | total loss: 5.59769 | time: 158.083ss
| SGD | epoch: 017 | loss: 5.59769 - acc: 0.1182 -- iter: 999999/999999
--
Training Step: 1800000  | total loss: 5.45591 | time: 158.088ss
| SGD | epoch: 018 | loss: 5.45591 - acc: 0.0868 -- iter: 999999/999999
--
Training Step: 1900000  | total loss: 6.54951 | time: 157.755ss
| SGD | epoch: 019 | loss: 6.54951 - acc: 0.1353 -- iter: 999999/999999
--
Training Step: 2000000  | total loss: 6.18566 | time: 157.408ss
| SGD | epoch: 020 | loss: 6.18566 - acc: 0.0551 -- iter: 999999/999999
--
Training Step: 2100000  | total loss: 4.95146 | time: 157.572ss
| SGD | epoch: 021 | loss: 4.95146 - acc: 0.1114 -- iter: 999999/999999
--
Training Step: 2200000  | total loss: 5.97208 | time: 157.279ss
| SGD | epoch: 022 | loss: 5.97208 - acc: 0.1277 -- iter: 999999/999999
--
Training Step: 2300000  | total loss: 6.75645 | time: 157.201ss
| SGD | epoch: 023 | loss: 6.75645 - acc: 0.1507 -- iter: 999999/999999
--
Training Step: 2400000  | total loss: 7.04119 | time: 157.346ss
| SGD | epoch: 024 | loss: 7.04119 - acc: 0.1512 -- iter: 999999/999999
--
Training Step: 2500000  | total loss: 5.95451 | time: 157.722ss
| SGD | epoch: 025 | loss: 5.95451 - acc: 0.1421 -- iter: 999999/999999
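
For reference, the kind of change described in the edit above would look roughly like this; the exact values below are illustrative rather than the ones actually used (the log, at roughly 1,000,000 samples per epoch over 100,000 training steps, implies a batch size of about 10):

my_optimizer = tflearn.SGD(learning_rate=0.01)    #lowered learning rate (exact value assumed)
net = tflearn.regression(net, optimizer=my_optimizer)

model.fit(data, labels, n_epoch=25, batch_size=10, show_metric=True)    #smaller batches, ~10 per step as implied by the log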

2 answers:

Answer 0: (score: 1)

As discussed in my comment above, here is code that trains a multi-layer perceptron classifier model using a MLP helper class I created. The class is implemented with TensorFlow and follows the scikit-learn fit, predict, score interface.

The basic idea is to generate random start and end points and then use a dictionary to create the label from the direction. I use np.unique to find the number of class labels in the generated data, since it may vary (some directions may be missing). I also include an empty-string label for the case where the start and end points are identical.

Code

Using the code below, I can achieve 100% cross-validation accuracy on some runs.

import numpy as np
from sklearn.model_selection import ShuffleSplit
from TFANN import MLPC

#Dictionary to lookup direction ()
DM = {(-1, -1):'SW', (-1, 0):'W', (-1,  1):'NW', (0,  1):'N', 
      ( 1,  1):'NE', ( 1, 0):'E', ( 1, -1):'SE', (0, -1):'S',
      ( 0,  0):''}

NR = 4096       #Number of rows in sample matrix
A1 = np.random.randint(40, 61, size = (NR, 2))      #Random starting point
A2 = np.random.randint(40, 61, size = (NR, 2))      #Random ending point
A = np.hstack([A1, A2])         #Concat start and end point as feature vector
#Create label from direction vector
Y = np.array([DM[(x, y)] for x, y in (A2 - A1).clip(-1, 1)])
NC = len(np.unique(Y))          #Number of classes
ss = ShuffleSplit(n_splits = 1)
trn, tst = next(ss.split(A))    #Make a train/test split for cross-validation
#%% Create and train Multi-Layer Perceptron for Classification (MLPC)
l = [4, 6, 6, NC]       #Neuron counts in each layer
mlpc = MLPC(l, batchSize = 64, maxIter = 128, verbose = True)
mlpc.fit(A[trn], Y[trn])
s1 = mlpc.score(A[trn], Y[trn])     #Training accuracy
s2 = mlpc.score(A[tst], Y[tst])     #Testing accuracy
s3 = mlpc.score(A, Y)               #Total accuracy
print('Trn: {:05f}\tTst: {:05f}\tAll: {:05f}'.format(s1, s2, s3))

Answer 1: (score: 1)

It turned out that the optimizer was causing all the problems. After removing the custom optimizer, the loss started dropping normally and the accuracy climbed to 99%.

The following two lines had to be modified:

my_optimizer = tflearn.SGD(learning_rate=0.1)
net = tflearn.regression(net,optimizer=my_optimizer)

and replacing them with

net = tflearn.regression(net)

produced perfect results.
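
For context, leaving out the optimizer argument means tflearn.regression falls back to its built-in defaults. Assuming the standard TFLearn defaults (Adam with a learning rate of 0.001 and categorical cross-entropy loss), the one-line version above is roughly equivalent to:

net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy')

In other words, the fix effectively swaps plain SGD at a learning rate of 0.1 for Adam at a much smaller learning rate, which is what lets the loss start decreasing.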