I want to create a NN model that takes two inputs and outputs [[1,0]] if input_1 < input_2, and [[0,1]] otherwise.
My attempt:
import numpy as np
import random
import pickle
import bitstring
import tensorflow as tf

# this is for converting numbers to binary form
def binary(num):
    f1 = bitstring.BitArray(float=num, length=32)
    return f1.bin

def num2bin(num):
    return [int(x) for x in binary(num)[0:]]
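For reference, the same 32-bit IEEE-754 conversion can be done with only the standard library; this is just an equivalent sketch using `struct` (the name `num2bin_struct` is mine, not part of the code above), which avoids the `bitstring` dependency:

```python
import struct

def num2bin_struct(num):
    # pack as big-endian IEEE-754 single precision, then read the bits MSB-first
    (bits,) = struct.unpack('>I', struct.pack('>f', num))
    return [(bits >> (31 - i)) & 1 for i in range(32)]
```

Either way, each input number becomes exactly 32 binary features, so a pair of numbers gives the 64-wide input used below.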
N = 60
pos = 10*np.random.rand(N)
pos1 = 10*np.random.rand(N)
pos1_test = 10*np.random.rand(N)
pos_test = 10*np.random.rand(N)
neg = -10*np.random.rand(N)
neg_test = -10*np.random.rand(N)
m = np.concatenate((pos, neg), axis=0)
b = np.concatenate((pos*0, pos1), axis=0)
m_test = np.concatenate((pos_test, neg_test), axis=0)
b_test = np.concatenate((pos*0, pos1_test), axis=0)
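Each concatenation above yields a 1-D array of length 2*N = 120, with the positive samples in the first half and the negative (or zero) samples in the second; a quick self-contained sanity check (regenerating the arrays, not reusing the ones above):

```python
import numpy as np

N = 60
pos = 10 * np.random.rand(N)
neg = -10 * np.random.rand(N)
m = np.concatenate((pos, neg), axis=0)

print(m.shape)  # (120,)
assert (m[:N] >= 0).all() and (m[N:] <= 0).all()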
# this creates the training data.
def create_label_feature(m, m_test, b, b_test, test_size=0.1):
    train_x = []
    train_y = []
    test_x = []
    test_y = []
    for x in m:
        for y in b:
            train_x += [num2bin(x) + num2bin(y)]
            if y != 0 and abs(x) < abs(y):
                train_y += [[1, 0]]
            elif y == 0 and x > 0:
                train_y += [[1, 0]]
            elif y == 0 and x <= 0:
                train_y += [[0, 1]]
            else:
                train_y += [[0, 1]]
    for x in m_test:
        for y in b_test:
            test_x += [num2bin(x) + num2bin(y)]
            if y != 0 and abs(x) < abs(y):
                test_y += [[1, 0]]
            elif y == 0 and x > 0:
                test_y += [[1, 0]]
            elif y == 0 and x <= 0:
                test_y += [[0, 1]]
            else:
                test_y += [[0, 1]]
    return train_x, train_y, test_x, test_y

train_x, train_y, test_x, test_y = create_label_feature(m, m_test, b, b_test, test_size=0.1)
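The branching used for the labels can be exercised on its own; here is a minimal stand-alone sketch of the same rule (the helper name `label` is just for illustration):

```python
def label(x, y):
    # [1, 0] when the first magnitude is smaller; y == 0 pairs fall back to the sign of x
    if y != 0 and abs(x) < abs(y):
        return [1, 0]
    if y == 0:
        return [1, 0] if x > 0 else [0, 1]
    return [0, 1]

print(label(1, 5), label(5, 1), label(-3, 0))  # → [1, 0] [0, 1] [0, 1]
```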
Here is the actual NN:
n_nodes_hl1 = 1500
n_nodes_hl2 = 1500
n_nodes_hl3 = 1500
n_classes = 2
batch_size = 144
hm_epochs = 100

x = tf.placeholder('float', shape=[None, 64])
y = tf.placeholder('float')

hidden_1_layer = {'f_fum': n_nodes_hl1,
                  'weight': tf.Variable(tf.random_normal([len(train_x[0]), n_nodes_hl1])),
                  'bias': tf.Variable(tf.random_normal([n_nodes_hl1]))}
hidden_2_layer = {'f_fum': n_nodes_hl2,
                  'weight': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                  'bias': tf.Variable(tf.random_normal([n_nodes_hl2]))}
hidden_3_layer = {'f_fum': n_nodes_hl3,
                  'weight': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                  'bias': tf.Variable(tf.random_normal([n_nodes_hl3]))}
output_layer = {'f_fum': None,
                'weight': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                'bias': tf.Variable(tf.random_normal([n_classes]))}

# Nothing changes
def neural_network_model(data):
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weight']), hidden_1_layer['bias'])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weight']), hidden_2_layer['bias'])
    l2 = tf.nn.relu(l2)
    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weight']), hidden_3_layer['bias'])
    l3 = tf.nn.relu(l3)
    output = tf.matmul(l3, output_layer['weight']) + output_layer['bias']
    return output

saver = tf.train.Saver()
tf_log = 'tf.log'

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = np.array(train_x[start:end])
                batch_y = np.array(train_y[start:end])
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                              y: batch_y})
                epoch_loss += c
                i += batch_size
            print('Epoch', epoch + 1, 'completed out of', hm_epochs, 'loss:', epoch_loss)
        saver.save(sess, 'C:\\Users\\HP\\Documents\\Python Deep Learning Learning\\model_2.ckpt')
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: test_x, y: test_y}))
        # a=num2bin(98)
        # a=np.reshape(a,(1,64))
        for k in [1, 2, 3, 0, 15]:
            a = num2bin(k)
            a = np.concatenate((a, num2bin(10)), axis=0)
            a = np.reshape(a, (1, 64))
            a = sess.run(prediction, {x: a})
            prediction_tensor = tf.nn.softmax(a)
            print(sess.run(prediction_tensor))

train_neural_network(x)
But when I run this, I get:
WARNING:tensorflow:From <ipython-input-43-bef948531bdc>:58: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
Epoch 1 completed out of 100 loss: 1656465.87708
Epoch 2 completed out of 100 loss: 965547.536972
Epoch 3 completed out of 100 loss: 523595.430435
Epoch 4 completed out of 100 loss: 977268.598053
Epoch 5 completed out of 100 loss: 668892.447758
Epoch 6 completed out of 100 loss: 533576.244034
Epoch 7 completed out of 100 loss: 447366.692417
Epoch 8 completed out of 100 loss: 201797.375396
Epoch 9 completed out of 100 loss: 123624.894434
Epoch 10 completed out of 100 loss: 74161.0360594
Epoch 11 completed out of 100 loss: 62320.2841816
Epoch 12 completed out of 100 loss: 82650.6948113
Epoch 13 completed out of 100 loss: 97856.969979
Epoch 14 completed out of 100 loss: 86694.9150848
Epoch 15 completed out of 100 loss: 104940.329952
Epoch 16 completed out of 100 loss: 55279.0731068
Epoch 17 completed out of 100 loss: 42239.2455358
Epoch 18 completed out of 100 loss: 26236.6220574
Epoch 19 completed out of 100 loss: 15605.1042328
Epoch 20 completed out of 100 loss: 7502.51530361
Epoch 21 completed out of 100 loss: 4281.0073503
Epoch 22 completed out of 100 loss: 2421.28072715
Epoch 23 completed out of 100 loss: 1674.25452423
Epoch 24 completed out of 100 loss: 1491.07880789
Epoch 25 completed out of 100 loss: 1245.5299934
Epoch 26 completed out of 100 loss: 932.400794029
Epoch 27 completed out of 100 loss: 182.700586021
Epoch 28 completed out of 100 loss: 246.078451872
Epoch 29 completed out of 100 loss: 295.789887428
Epoch 30 completed out of 100 loss: 144.528374732
Epoch 31 completed out of 100 loss: 95.6009635925
Epoch 32 completed out of 100 loss: 88.4947395325
Epoch 33 completed out of 100 loss: 201.378837585
Epoch 34 completed out of 100 loss: 0.0
... (every epoch from 35 through 99 also reports loss: 0.0) ...
Epoch 100 completed out of 100 loss: 0.0
Accuracy: 0.881944
[[ 1. 0.]]
[[ 1. 0.]]
[[ 1. 0.]]
[[ 1. 0.]]
[[ 1. 0.]]
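The [[ 1. 0.]] rows above are what softmax produces once the gap between the two logits is large: it saturates to a (numerically) exact one-hot vector, so these prints reveal nothing about the confidence margin. A small numpy illustration, independent of the TF graph above:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([40.0, 0.0]))  # a logit gap of 40
print(p)  # second entry is ~4e-18, which prints as 0 at low precision
```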
At first I tried fewer layers and less training data, and the results were very inaccurate. So what are some tips for improving this?
Why does the loss drop to 0 after epoch 33? And how can I further improve the accuracy?