TensorFlow - epoch accuracy

Time: 2018-07-05 11:19:25

Tags: python tensorflow neural-network

I have created the following script, which runs with no runtime or syntax errors. I have read the TensorFlow Iris tutorial (see the link below) and changed the data to fit my case.


https://www.tensorflow.org/get_started/eager

I am having trouble training the neural network. See the output below. The problem is that the epoch accuracy stays at 87.805%, which doesn't look right. Any ideas?

TensorFlow version: 1.8.0
Eager execution: True
Local copy of the dataset file: /Users/me/.keras/datasets/training3.csv
<class 'str'>
/Users/me/.keras/datasets/training3.csv
example features: tf.Tensor([1.0231440e+06 4.3085021e-01 9.9667755e+02], shape=(3,), dtype=float32)
example label: tf.Tensor(1, shape=(), dtype=int32)
Epoch 000: Loss: 511239280185223872.000, Accuracy: 87.805%
Epoch 050: Loss: 0.693, Accuracy: 87.805%
Epoch 100: Loss: 0.547, Accuracy: 87.805%
Epoch 150: Loss: 0.465, Accuracy: 87.805%
Epoch 200: Loss: 0.436, Accuracy: 87.805%
Test set accuracy: 50.000%
Example 0 prediction: Win
Example 1 prediction: Win
Example 2 prediction: Win
Example 3 prediction: Win
Example 4 prediction: Win
Example 5 prediction: Win

Process finished with exit code 0


from __future__ import absolute_import, division, print_function

import os
import matplotlib.pyplot as plt

#import nuralnetfeed

import tensorflow as tf
import tensorflow.contrib.eager as tfe



tf.enable_eager_execution()

print("TensorFlow version: {}".format(tf.VERSION))
print("Eager execution: {}".format(tf.executing_eagerly()))

train_dataset_url = "https://s3-us-west-2.amazonaws.com/topstock/training3.csv"

train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
                                           origin=train_dataset_url)

print("Local copy of the dataset file: {}".format(train_dataset_fp))

print(type(train_dataset_fp))
print(train_dataset_fp)

def parse_csv(line):
  example_defaults = [[0.], [0.], [0.], [0]]  # sets field types
  parsed_line = tf.decode_csv(line, example_defaults)
  # First 3 fields are features, combine into single tensor
  features = tf.reshape(parsed_line[:-1], shape=(3,))
  # Last field is the label
  label = tf.reshape(parsed_line[-1], shape=())
  return features, label

train_dataset = tf.data.TextLineDataset(train_dataset_fp)
train_dataset = train_dataset.skip(1)             # skip the first header row
train_dataset = train_dataset.map(parse_csv)      # parse each row
train_dataset = train_dataset.shuffle(buffer_size=1000)  # randomize
train_dataset = train_dataset.batch(32)

# View a single example entry from a batch
features, label = iter(train_dataset).next()
print("example features:", features[0])
print("example label:", label[0])

model = tf.keras.Sequential([
  tf.keras.layers.Dense(10, activation="relu", input_shape=(3,)),  # input shape required
  tf.keras.layers.Dense(10, activation="relu"),
  tf.keras.layers.Dense(3)
])

def loss(model, x, y):
  y_ = model(x)
  return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)


def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
  return tape.gradient(loss_value, model.variables)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

## Note: Rerunning this cell uses the same model variables

# keep results for plotting
train_loss_results = []
train_accuracy_results = []

num_epochs = 201

for epoch in range(num_epochs):
  epoch_loss_avg = tfe.metrics.Mean()
  epoch_accuracy = tfe.metrics.Accuracy()

  # Training loop - using batches of 32
  for x, y in train_dataset:
    # Optimize the model
    grads = grad(model, x, y)
    optimizer.apply_gradients(zip(grads, model.variables),
                              global_step=tf.train.get_or_create_global_step())

    # Track progress
    epoch_loss_avg(loss(model, x, y))  # add current batch loss
    # compare predicted label to actual label
    epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y)

  # end epoch
  train_loss_results.append(epoch_loss_avg.result())
  train_accuracy_results.append(epoch_accuracy.result())

  if epoch % 50 == 0:
    print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
                                                                epoch_loss_avg.result(),
                                                                epoch_accuracy.result()))

fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')

axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)

axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)

plt.show()


test_url = "https://s3-us-west-2.amazonaws.com/topstock/testing2.csv"

test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
                                  origin=test_url)

test_dataset = tf.data.TextLineDataset(test_fp)
test_dataset = test_dataset.skip(1)             # skip header row
test_dataset = test_dataset.map(parse_csv)      # parse each row with the function created earlier
test_dataset = test_dataset.shuffle(1000)       # randomize
test_dataset = test_dataset.batch(32)           # use the same batch size as the training set

test_accuracy = tfe.metrics.Accuracy()

for (x, y) in test_dataset:
  prediction = tf.argmax(model(x), axis=1, output_type=tf.int32)
  test_accuracy(prediction, y)

print("Test set accuracy: {:.3%}".format(test_accuracy.result()))

class_ids = ["Loss", "Win"]

predict_dataset = tf.convert_to_tensor([
    [4332764, 0.3379880634770722, 71.76971531894631],   # win:  'TDG', 'TDG TransDigm Group Inc'
    [3630825, 0.5672401324640435, 692.6105549543049],   # win:  'VRTX', 'VRTX Vertex Pharmaceuticals Inc'
    [5341132, 0.8576748563051191, 912.0633006763521],   # win:  'WFC', 'WFC Wells Fargo & Co'
    [3922072, 0.5501192399355537, 968.2525318845444],   # loss: 'STZ', 'STZ Constellation Brands Inc'
    [2376482, 0.4639975398366861, 210.04078835390908],  # loss: 'UA', 'UA Under Armour Inc'
    [1024461, 0.3999971388347513, 501.3113841343087]    # loss: 'UAA', 'UAA Under Armour Inc'
])

"""
All of the following are winners

'TDG', 'TDG TransDigm Group Inc', 0.3379880634770722, 71.76971531894631, 4332764, 305.14, '2018-06-30', '2018-06-29', '2018-06-29'
'VRTX', 'VRTX Vertex Pharmaceuticals Inc', 0.5672401324640435, 692.6105549543049, 3630825, 160.34, '2018-06-30', '2018-06-29', '2018-06-29'
'WFC', 'WFC Wells Fargo & Co', 0.8576748563051191, 912.0633006763521, 5341132, 51.1, '2018-06-30', '2018-06-29', '2018-06-29'


All of these are losers 


'STZ', 'STZ Constellation Brands Inc', 0.5501192399355537, 968.2525318845444, 3922072, 218.47, '2018-06-30', '2018-06-29', '2018-06-29'
'UA', 'UA Under Armour Inc', 0.4639975398366861, 210.04078835390908, 2376482, 14.32, '2018-06-30', '2018-06-29', '2018-06-29'
'UAA', 'UAA Under Armour Inc', 0.3999971388347513, 501.3113841343087, 1024461, 16.44, '2018-06-30', '2018-06-29', '2018-06-29'

"""
predictions = model(predict_dataset)

for i, logits in enumerate(predictions):
  class_idx = tf.argmax(logits).numpy()
  name = class_ids[class_idx]
  print("Example {} prediction: {}".format(i, name))

You can also run it on Google Colab:


https://colab.research.google.com/drive/1ho3wXMRSrt0bXrWRMhijFfWR1OgyJaZv

1 Answer:

Answer 0 (score: 0):

You are not computing the accuracy correctly: you compute it from the raw model output compared against the labels, but the model outputs logits (there is no softmax activation on the last layer). The accuracy should be computed by taking the argmax over the class probabilities, i.e. after applying a softmax activation.
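A minimal sketch of that change, assuming the TF 1.8 eager APIs and the model, x, and y names from the question: feed the accuracy metrics the argmax of the softmax probabilities rather than the argmax of the raw logits.

# Inside the training loop: convert logits to class probabilities,
# then take the most likely class index for the accuracy metric
probabilities = tf.nn.softmax(model(x))
epoch_accuracy(tf.argmax(probabilities, axis=1, output_type=tf.int32), y)

# Likewise in the test loop
prediction = tf.argmax(tf.nn.softmax(model(x)), axis=1, output_type=tf.int32)
test_accuracy(prediction, y)

tf.nn.softmax normalizes the logits along the last axis, so the metric now compares class indices derived from probabilities against the integer labels.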