Fine-tuning a neural network in TensorFlow

Time: 2018-04-09 11:32:10

Tags: python tensorflow machine-learning neural-network deep-learning

I have been working on a neural network that is meant to predict the TBA (time-based availability) of a simulated windmill park based on certain attributes. The network runs fine and gives me some predictions, but I am not really satisfied with the results: it fails to pick up on some very obvious correlations that I can clearly see myself. Here is my current code:

```python
# Imports
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.preprocessing import MinMaxScaler

maxi = 0.96 
mini = 0.7 


# Make data a np.array
data = pd.read_csv('datafile_ML_no_avg.csv')
data = data.values

# Shuffle the data
shuffle_indices = np.random.permutation(np.arange(len(data)))
data = data[shuffle_indices]

# Training and test data
data_train = data[0:int(len(data)*0.8),:]
data_test = data[int(len(data)*0.8):int(len(data)),:]

# Scale data
scaler = MinMaxScaler(feature_range=(mini, maxi))
scaler.fit(data_train)
data_train = scaler.transform(data_train)
data_test = scaler.transform(data_test)


# Build X and y
X_train = data_train[:, 0:5]
y_train = data_train[:, 6:7]
X_test = data_test[:, 0:5]
y_test = data_test[:, 6:7]

# Number of input features in the training data
n_args = X_train.shape[1]
multi = int(8)
# Neurons
n_neurons_1 = 8*multi
n_neurons_2 = 4*multi
n_neurons_3 = 2*multi
n_neurons_4 = 1*multi

# Session
net = tf.InteractiveSession()

# Placeholder
X = tf.placeholder(dtype=tf.float32, shape=[None, n_args])
Y = tf.placeholder(dtype=tf.float32, shape=[None,1])

# Initializers
sigma = 1
weight_initializer = tf.variance_scaling_initializer(mode="fan_avg", distribution="uniform", scale=sigma)
bias_initializer = tf.zeros_initializer()

# Hidden weights
W_hidden_1 = tf.Variable(weight_initializer([n_args, n_neurons_1]))
bias_hidden_1 = tf.Variable(bias_initializer([n_neurons_1]))
W_hidden_2 = tf.Variable(weight_initializer([n_neurons_1, n_neurons_2]))
bias_hidden_2 = tf.Variable(bias_initializer([n_neurons_2]))
W_hidden_3 = tf.Variable(weight_initializer([n_neurons_2, n_neurons_3]))
bias_hidden_3 = tf.Variable(bias_initializer([n_neurons_3]))
W_hidden_4 = tf.Variable(weight_initializer([n_neurons_3, n_neurons_4]))
bias_hidden_4 = tf.Variable(bias_initializer([n_neurons_4]))

# Output weights
W_out = tf.Variable(weight_initializer([n_neurons_4, 1]))
bias_out = tf.Variable(bias_initializer([1]))

# Hidden layer
hidden_1 = tf.nn.relu(tf.add(tf.matmul(X, W_hidden_1), bias_hidden_1))
hidden_2 = tf.nn.relu(tf.add(tf.matmul(hidden_1, W_hidden_2), bias_hidden_2))
hidden_3 = tf.nn.relu(tf.add(tf.matmul(hidden_2, W_hidden_3), bias_hidden_3))
hidden_4 = tf.nn.relu(tf.add(tf.matmul(hidden_3, W_hidden_4), bias_hidden_4))

# Output layer (transpose!)
out = tf.transpose(tf.add(tf.matmul(hidden_4, W_out), bias_out))

# Cost function
mse = tf.reduce_mean(tf.squared_difference(out, Y))

# Optimizer
opt = tf.train.AdamOptimizer().minimize(mse)

# Init
net.run(tf.global_variables_initializer())

# Fit neural net
batch_size = 10
mse_train = []
mse_test = []

# Run
epochs = 10
for e in range(epochs):

    # Shuffle training data
    shuffle_indices = np.random.permutation(np.arange(len(y_train)))
    X_train = X_train[shuffle_indices]
    y_train = y_train[shuffle_indices]

    # Minibatch training
    for i in range(0, len(y_train) // batch_size):
        start = i * batch_size
        batch_x = X_train[start:start + batch_size]
        batch_y = y_train[start:start + batch_size]
        # Run optimizer with batch
        net.run(opt, feed_dict={X: batch_x, Y: batch_y})

        # Show progress
        if np.mod(i, 50) == 0:
            mse_train.append(net.run(mse, feed_dict={X: X_train, Y: y_train}))
            mse_test.append(net.run(mse, feed_dict={X: X_test, Y: y_test}))

            pred = net.run(out, feed_dict={X: X_test})

print(pred)
```

I have tried tweaking the number of hidden layers, the number of nodes per layer, the number of epochs to run, and trying different activation functions and optimizers. However, I am still fairly new to neural networks, so I might be missing something very obvious.

Thanks in advance to anyone who manages to read through all of this.

1 Answer:

Answer 0 (score: 0)

This would become much easier if you shared a small dataset that illustrates the problem. However, I will describe some common issues with non-standard datasets and how to overcome them.

Possible solutions

  1. Regularization and validation-based optimization - always a good thing to try when you are looking for some extra accuracy. See the dropout method here (the original paper) and an overview here.

  2. Unbalanced data - sometimes time-series classes/events behave like anomalies, or simply occur in an unbalanced way. If you read a book, a word like it will appear many more times than, say, warehouse. If your main task is to detect the word warehouse and you train your network (even an LSTM) in the traditional way, this can become a problem. One way to overcome it is to balance the samples (create a balanced dataset) or to give more weight to the low-frequency categories.

  3. Model structure - sometimes fully connected layers are not enough. See, for example, computer-vision problems, where we train with convolutional layers. Convolution and pooling layers enforce a structure on the model that suits images, and they also act as a kind of regularization, since those layers have fewer parameters. Convolutions are possible in time-series problems too, and they turn out to work quite well. See the example in Conditional Time Series Forecasting with Convolution Neural Networks.
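Point 1 can be illustrated with a small NumPy sketch of inverted dropout (the helper name is mine, not from the answer); in the question's TF 1.x graph it would correspond to inserting `tf.nn.dropout` between the hidden layers during training:

```python
import numpy as np

def inverted_dropout(activations, keep_prob, rng):
    """Randomly zero units with probability (1 - keep_prob) and rescale
    the survivors by 1 / keep_prob so the expected activation is unchanged."""
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
hidden = np.ones((4, 8))                      # a batch of hidden activations
dropped = inverted_dropout(hidden, keep_prob=0.5, rng=rng)
print(dropped)                                # survivors are 2.0, the rest 0.0
```

At test time dropout is simply switched off, which is why the rescaling during training matters.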

The suggestions above are listed in the order I would suggest trying them.
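The weighting idea from point 2 can be sketched in NumPy (the labels here are made up for illustration): weight each sample inversely to its class frequency, then multiply the per-sample errors by these weights before averaging the loss.

```python
import numpy as np

# Hypothetical labels for a rare event: class 1 occurs far less often than class 0.
labels = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 0])

classes, counts = np.unique(labels, return_counts=True)
# Inverse-frequency class weights, normalized so the average sample weight is 1.
class_weight = {c: len(labels) / (len(classes) * n)
                for c, n in zip(classes, counts)}
sample_weight = np.array([class_weight[y] for y in labels])

print(class_weight)   # the rare class receives the larger weight
```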

Good luck!
