How to implement validation data in TensorFlow?

Time: 2019-07-08 08:22:49

Tags: tensorflow

I am new to machine learning and have just started using TensorFlow. I have built an LSTM multi-class classification model using only training and test datasets. Now I would like to use a validation dataset to improve the model's accuracy, but I haven't found much information online about how to split the data into validation, training, and test sets.

I am thinking of creating the validation data at the point where I split the data into training and test sets, but I don't know how to use it when training or evaluating the model. An example of how to implement this would be a great help!
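To make the question concrete, this is the kind of three-way split I am imagining, applied after the data has been converted to a numpy array as in the code below (the 80/10/10 proportions are just placeholders I picked, not from any tutorial):

# hypothetical chronological 80/10/10 split -- the data is a time series,
# so I keep the row order instead of shuffling
n = df.shape[0]
train_end = int(n * 0.8)
val_end = int(n * 0.9)
train_x, train_y = df[:train_end, 1:], df[:train_end, 0]
val_x,   val_y   = df[train_end:val_end, 1:], df[train_end:val_end, 0]
test_x,  test_y  = df[val_end:, 1:], df[val_end:, 0]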

Here is my code so far.

# imports needed by the snippet
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

df = pd.read_csv('new_df.csv', skiprows=[0], header=None)
df.drop(columns=[0,1], inplace=True)
df.columns = np.arange(0, df.shape[1])  # plain integer index, not a one-level MultiIndex
df[0] = df[0].shift(-1)  # shift so each row's label is the next period's value
print(df.head())
#parameters
time_steps = 1
inputs = df.shape[1]
outputs = 3

#remove nan as a result of shift values

df = df.iloc[:-1, :]

#convert to numpy
df = df.values


train_number = 61925 #start date from 20190419
train_x = df[: train_number, 1:]
test_x = df[train_number:, 1:]
train_y = df[:train_number, 0]
test_y = df[train_number:, 0]
#data pre-processing

#x y split
#scale
scaler = MinMaxScaler(feature_range=(0,1))
train_x = scaler.fit_transform(train_x)
test_x = scaler.transform(test_x)  # transform only; the scaler is fitted on the training data

#reshape into 3d array
train_x = train_x[:, None, :]
test_x = test_x[:, None, :]
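
# hypothetical addition: if a validation set is split off as sketched above,
# scale it with the scaler fitted on train, then add the time-step axis
val_x = scaler.transform(val_x)
val_x = val_x[:, None, :]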

#one-hot encode the outputs
onehot_encoder = OneHotEncoder()
#encoder = LabelEncoder()
# map the raw labels to small non-negative codes; use the training maximum
# for both sets so the codes stay consistent between train and test
max_ = train_y.max()
train_y = (train_y - max_) * (-1)
test_y = (test_y - max_) * (-1)
encode_categorical = train_y.reshape(len(train_y), 1)
encode_categorical2 = test_y.reshape(len(test_y), 1)
train_y = onehot_encoder.fit_transform(encode_categorical).toarray()
test_y = onehot_encoder.transform(encode_categorical2).toarray()  # fit on train only
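
# hypothetical addition: map the validation labels with the same
# train-derived max_ and the encoder fitted on the training labels
val_y = (val_y - max_) * (-1)
val_y = onehot_encoder.transform(val_y.reshape(len(val_y), 1)).toarray()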

print(train_x.shape, train_y.shape, test_x.shape, test_y.shape)


#model parameters

learning_rate = 0.001
epochs = 100
batch_size = int(train_x.shape[0]/10)
length = train_x.shape[0]
display = 100
neurons = 60


tf.reset_default_graph()  # reset before seeding, otherwise the new graph discards the seed
tf.set_random_seed(1234)
X = tf.placeholder(tf.float32, [None, time_steps, train_x.shape[2]], name='x')
y = tf.placeholder(tf.float32, [None, outputs],name='y')

#LSTM cell
cell = tf.contrib.rnn.BasicLSTMCell(num_units = neurons, activation = tf.nn.relu)
cell_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

# pass into Dense layer
stacked_outputs = tf.reshape(cell_outputs, [-1, neurons])
out = tf.layers.dense(inputs=stacked_outputs, units=outputs)
# softmax cross-entropy loss for multi-class classification
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=out, labels=y))

# optimizer to minimize cost
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

accuracy = tf.metrics.accuracy(labels =  tf.argmax(y, 1), predictions = tf.argmax(out, 1), name = "accuracy")
precision = tf.metrics.precision(labels=tf.argmax(y, 1), predictions=tf.argmax(out, 1), name="precision")
recall = tf.metrics.recall(labels=tf.argmax(y, 1), predictions=tf.argmax(out, 1),name="recall")
f1 = 2 * precision[1] * recall[1] / ( precision[1] + recall[1] )  # F1 combines precision and recall


with tf.Session() as sess:
    # initialize all variables
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()

    # Train the model
    for steps in range(epochs):
        mini_batch = zip(range(0, length, batch_size), range(batch_size, length+1, batch_size))

        # train on mini-batches
        for (start, end) in mini_batch:

            sess.run(training_op, feed_dict = {X: train_x[start:end,:,:], y: train_y[start:end,:]})


        # print training performance
        if (steps+1) % display == 0:
            # evaluate loss function on training set
            loss_fn = loss.eval(feed_dict = {X: train_x, y: train_y})
            print('Step: {}  \tTraining loss: {}'.format((steps+1), loss_fn))
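
What I imagine adding is a validation check once per epoch, something like the fragment below (it would sit inside the for steps in range(epochs) loop, with val_x and val_y prepared as sketched above; I have not verified that this is the standard way to do it):

        # hypothetical: monitor loss on the held-out validation set each epoch
        val_loss = loss.eval(feed_dict={X: val_x, y: val_y})
        print('Step: {}  \tValidation loss: {}'.format(steps + 1, val_loss))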

0 Answers:

No answers yet