Handwritten Digit Recognition with Keras

Date: 2018-09-02 08:55:08

Tags: machine-learning neural-network keras mnist

I am trying to learn Keras. I came across machine-learning code for recognizing handwritten digits here (also given here). It appears to implement the feedforward, SGD, and backpropagation methods from scratch. I just want to know whether it is possible to write this program using Keras? That would be a first step in that direction.

1 Answer:

Answer 0: (score: 2)

You can use the Keras MNIST tutorial to first understand how the MNIST dataset is used with an MLP. As you progress, you can study how CNNs work on the MNIST dataset; a minimal sketch follows below.
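As a pointer in that direction, here is a minimal CNN sketch on MNIST. This is a sketch only; the filter counts and layer sizes are my own illustrative choices, not taken from the tutorial:

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Conv layers expect a channel dimension: (28, 28, 1) instead of a flat 784 vector
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))  # 32 local 3x3 filters
model.add(MaxPooling2D(pool_size=(2, 2)))  # downsample by 2 in each direction
model.add(Flatten())                       # flatten back to a vector for the dense layers
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=5, validation_data=(x_test, y_test))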

Back to the basics: I will describe some of the steps in the Keras MLP code you attached in the comments:

# Imports needed for the snippet below to run on its own
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

# Step 1: Organize Data
batch_size = 128   # This splits the 60k training images into batches of 128; people often use 100. It's up to you
num_classes = 10   # Your final layer: digits 0 - 9 (10 classes)
epochs = 20        # 20 'runs' over the training set. You can increase or decrease it to see the change in accuracy. MNIST accuracy normally peaks at around 10-20 epochs.

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()  # x_train - training images, y_train - training labels; x_test - test images, y_test - test labels. MNIST ships with 60k training and 10k test images (some tutorials further split the 60k into 50k train + 10k validation).

x_train = x_train.reshape(60000, 784) # Each MNIST image is 28x28 pixels, so each one is flattened into a 28x28 = 784 array. 60k train images
x_test = x_test.reshape(10000, 784)   # Likewise, 10k test images
x_train = x_train.astype('float32')   # Convert pixel values to floats
x_test = x_test.astype('float32')
x_train /= 255                        # Normalization: each pixel has an intensity in the range 0-255, so you scale that down to 0 - 1 for your neural network
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)  # One-hot encoding: each integer label becomes a 10-element vector with a single 1, matching the final layer of the network
y_test = keras.utils.to_categorical(y_test, num_classes)
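# For example (illustrative): keras.utils.to_categorical(5, 10)
# -> [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]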


# Step 2: Create MLP model
model = Sequential()     
model.add(Dense(512, activation='relu', input_shape=(784,)))    # First hidden layer: 512 neurons, ReLU activation, 784-element input
model.add(Dropout(0.2))                                         # During training, each neuron in the previous layer has a 20% probability of being 'switched off'
model.add(Dense(512, activation='relu'))                        # Same as above
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))   # Final layer: 10 neurons; softmax turns the outputs into a probability for each digit, so you can pick the most likely one for the input image
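# Softmax: p_i = exp(z_i) / sum_j exp(z_j), so the 10 outputs are all
# non-negative and sum to 1, i.e. one probability per digit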

model.summary()
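# model.summary() reports the trainable parameters per layer, which you can
# verify by hand as (inputs + 1 bias) * neurons:
#   Dense(512) from 784 inputs: (784 + 1) * 512 = 401,920
#   Dense(512) from 512 inputs: (512 + 1) * 512 = 262,656
#   Dense(10)  from 512 inputs: (512 + 1) * 10  =   5,130
# Dropout layers add no parameters, so the total is 669,706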


# Step 3: Create model compilation
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])
# 10 classes - categorical_crossentropy. If 2 classes, you can use binary_crossentropy; optimizer - RMSprop, you can change this to Adam, SGD, etc.; metrics - accuracy
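# For example, an equivalent compile with a different optimizer is simply:
# model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])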


# Step 4: Train model
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
# Training happens here: train in batches of 128 for 20 epochs, then validate the result on the test set.
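# Optional: plot the training curves from the 'history' object. A sketch only;
# the key names vary by Keras version ('acc'/'val_acc' in older releases,
# 'accuracy'/'val_accuracy' in newer ones):
# import matplotlib.pyplot as plt
# plt.plot(history.history['acc'], label='train accuracy')
# plt.plot(history.history['val_acc'], label='test accuracy')
# plt.legend()
# plt.show()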

# Step 5: See results on your test data
score = model.evaluate(x_test, y_test, verbose=0)
# Prints out scores
print('Test loss:', score[0])
print('Test accuracy:', score[1])
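Once the model is trained, predicting a single image is straightforward. A minimal sketch (index 0 is just an arbitrary test image):

import numpy as np

# model.predict returns the softmax probabilities over the 10 classes;
# argmax picks the most likely digit
probabilities = model.predict(x_test[0:1])
predicted_digit = np.argmax(probabilities, axis=1)[0]
print('Predicted digit:', predicted_digit)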