I'm just starting out with TensorFlow and am trying to build a classic neural network for binary classification.
# Loading Dependencies
import math
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.python.framework import ops
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
seed = 1234
tf.set_random_seed(seed)
np.random.seed(seed)
# Load and Split data
data = pd.read_json(file)
X = data["X"]
y = data["y"]
X = X.astype(np.float32)
y = y.astype(np.float32)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size = 0.3)
X_train = X_train.reshape(X_train.shape[0], -1).T
y_train = y_train.values.reshape((1, y_train.shape[0]))
X_valid = X_valid.reshape(X_valid.shape[0], -1).T
y_valid = y_valid.values.reshape((1, y_valid.shape[0]))
print("X Train: ", X_train.shape)
print("y Train: ", y_train.shape)
print("X Dev: ", X_valid.shape)
print("y Dev: ", y_valid.shape)
X Train:  (16875, 1122)
y Train:  (1, 1122)
X Dev:  (16875, 482)
y Dev:  (1, 482)
The training data contains floats, while the labels are just 0 or 1. However, the labels are also cast to floats because I ran into some problems with that in the past.
Initializing the parameters
def initialize_parameters(layer_dimensions):
    tf.set_random_seed(seed)
    layers_count = len(layer_dimensions)
    parameters = {}

    for layer in range(1, layers_count):
        parameters['W' + str(layer)] = tf.get_variable('W' + str(layer),
                                                       [layer_dimensions[layer], layer_dimensions[layer - 1]],
                                                       initializer = tf.contrib.layers.xavier_initializer(seed = seed))
        parameters['b' + str(layer)] = tf.get_variable('b' + str(layer),
                                                       [layer_dimensions[layer], 1],
                                                       initializer = tf.zeros_initializer())

    return parameters
The shapes are:
W1 - (50, 16875)
W2 - (25, 50)
W3 - (10, 25)
W4 - (5, 10)
W5 - (1, 5)
b1 - (50, 1)
b2 - (25, 1)
b3 - (10, 1)
b4 - (5, 1)
b5 - (1, 1)
I specify the number and size of the layers when calling the model (see below).
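For reference, those shapes follow directly from the layer_dimensions list passed to the model; a quick stand-alone check in plain Python:

layer_dimensions = [16875, 50, 25, 10, 5, 1]
# W_l is (layer_dimensions[l], layer_dimensions[l-1]); b_l is (layer_dimensions[l], 1)
for l in range(1, len(layer_dimensions)):
    print('W%d - (%d, %d)' % (l, layer_dimensions[l], layer_dimensions[l - 1]))
    print('b%d - (%d, 1)' % (l, layer_dimensions[l]))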
Forward propagation
def forward_propagation(X, parameters):
    parameters_count = len(parameters) // 2
    A = X

    # Hidden layers: linear transform followed by ReLU
    for layer in range(1, parameters_count):
        W = parameters['W' + str(layer)]
        b = parameters['b' + str(layer)]
        Z = tf.add(tf.matmul(W, A), b)
        A = tf.nn.relu(Z)

    # Output layer: return raw logits; the sigmoid is applied inside the cost
    W = parameters['W' + str(parameters_count)]
    b = parameters['b' + str(parameters_count)]
    Z = tf.add(tf.matmul(W, A), b)

    return Z
Computing the cost (I use the sigmoid function since we are dealing with binary classification)
def compute_cost(Z, Y):
    logits = tf.transpose(Z)
    labels = tf.transpose(Y)

    cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits, labels = labels))

    return cost
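As a sanity check on what this cost measures: tf.nn.sigmoid_cross_entropy_with_logits computes the numerically stable form max(z, 0) - z*y + log(1 + exp(-|z|)) element-wise, where z is the logit and y the label. A small NumPy sketch of the same quantity, with made-up example values:

import numpy as np

def sigmoid_xent(z, y):
    # Numerically stable form of -y*log(sigmoid(z)) - (1-y)*log(1-sigmoid(z))
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

logits = np.array([-2.0, 0.0, 3.0], dtype=np.float32)   # made-up logits
labels = np.array([ 0.0, 1.0, 1.0], dtype=np.float32)   # made-up labels
print(sigmoid_xent(logits, labels).mean())               # same quantity as the reduce_mean above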
Putting it all together
def model(X_train, y_train, X_valid, y_valid, layer_dimensions, alpha = 0.0001, epochs = 10):
    ops.reset_default_graph()
    tf.set_random_seed(seed)

    (x_rows, m) = X_train.shape
    y_rows = y_train.shape[0]
    costs = []

    X = tf.placeholder(tf.float32, shape=(x_rows, None), name="X")
    y = tf.placeholder(tf.float32, shape=(y_rows, None), name="y")

    parameters = initialize_parameters(layer_dimensions)
    Z = forward_propagation(X, parameters)
    cost = compute_cost(Z, y)
    optimizer = tf.train.AdamOptimizer(learning_rate = alpha).minimize(cost)
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)

        for epoch in range(epochs):
            _ , epoch_cost = sess.run([optimizer, cost], feed_dict={X: X_train, y: y_train})
            print ("Cost after epoch %i: %f" % (epoch + 1, epoch_cost))
            costs.append(epoch_cost)

        parameters = sess.run(parameters)

        correct_predictions = tf.equal(tf.argmax(Z), tf.argmax(y))
        accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"))
        print ("Train Accuracy:", accuracy.eval({X: X_train, y: y_train}))
        print ("Test Accuracy:", accuracy.eval({X: X_valid, y: y_valid}))

    return parameters
Now, when I try to train my model, it seems to hit its optimum by the second epoch, and from then on the cost barely changes:
parameters = model(X_train, y_train, X_valid, y_valid, [X_train.shape[0], 50, 25, 10, 5, 1])
Cost after epoch 1: 8.758244
Cost after epoch 2: 0.693096
Cost after epoch 3: 0.692992
Cost after epoch 4: 0.692737
Cost after epoch 5: 0.697333
Cost after epoch 6: 0.693062
Cost after epoch 7: 0.693151
Cost after epoch 8: 0.693152
Cost after epoch 9: 0.693152
Cost after epoch 10: 0.693155
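For what it's worth, that plateau value is suspiciously close to ln 2 ≈ 0.6931, which is exactly the sigmoid cross-entropy of predicting probability 0.5 (a logit of 0) for every sample:

import numpy as np
print(-np.log(0.5))  # 0.6931..., the loss of a constant prediction of p = 0.5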
Now for the predictions
def predict(X, parameters):
    parameters_count = len(parameters) // 2
    params = {}

    for layer in range(1, parameters_count + 1):
        params['W' + str(layer)] = tf.convert_to_tensor(parameters['W' + str(layer)])
        params['b' + str(layer)] = tf.convert_to_tensor(parameters['b' + str(layer)])

    (x_columns, x_rows) = X.shape
    X_test = tf.placeholder(tf.float32, shape=(x_columns, x_rows))

    Z = forward_propagation(X_test, params)
    p = tf.argmax(Z)

    sess = tf.Session()
    prediction = sess.run(p, feed_dict = {X_test: X})

    return prediction
However, this predicts 0 in every single case.
predictions = predict(X_valid, parameters)
predictions
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ....
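One possibly relevant detail about the prediction code itself: Z has shape (1, m) here, and tf.argmax reduces over axis 0 by default, so with a single output row it can only ever return index 0, whatever the logits are. A minimal stand-alone demonstration:

import numpy as np
import tensorflow as tf

logits = tf.constant(np.array([[-3.0, 0.5, 7.0, -1.0]], dtype=np.float32))  # shape (1, 4), like Z
with tf.Session() as sess:
    print(sess.run(tf.argmax(logits)))  # [0 0 0 0] -- the only index available along axis 0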
Answer (score: 0):
X Train: (16875, 1122)
That is 16875 features per sample but only 1122 training examples. I think that may not be enough.
The example code in the TensorFlow getting-started tutorial only needs 784 features:
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
The MNIST data is split into three parts: 55,000 data points of training data (mnist.train), 10,000 points of test data (mnist.test), and 5,000 points of validation data (mnist.validation). This split is very important: it's essential in machine learning that we have separate data which we don't learn from, so that we can make sure that what we've learned actually generalizes! https://www.tensorflow.org/get_started/mnist/beginners
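To make the point about separate splits concrete, here is a minimal sketch (assuming X and y as in the question) of a three-way train/validation/test split via two calls to train_test_split:

from sklearn.model_selection import train_test_split

# First carve off a held-out test set, then split the remainder into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=1234)
X_train, X_valid, y_train, y_valid = train_test_split(X_rest, y_rest, test_size=0.25, random_state=1234)
# Proportions end up 60% train, 20% validation, 20% test.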