Here is a summary of the deep autoencoder:
class Deep_Autoencoder:
    def __init__(self, input_dim, n_nodes_hl=(32, 16, 1), epochs=400, batch_size=128, learning_rate=0.02, n_examples=10):
        # Hyperparameters
        self.input_dim = input_dim
        self.epochs = epochs
        self.batch_size = batch_size
        self.learning_rate = learning_rate
        self.n_examples = n_examples
        # Input and target placeholders
        X = tf.placeholder('float', [None, self.input_dim])
        Y = tf.placeholder('float', [None, self.input_dim])
        ...
        self.X = X
        print("self.X : ", self.X)
        self.Y = Y
        print("self.Y : ", self.Y)
        ...

    def train_neural_network(self, data, targets):
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for epoch in range(self.epochs):
                epoch_loss = 0
                i = 0
                # Let's train it in batch-mode
                while i < len(data):
                    start = i
                    end = i + self.batch_size
                    batch_x = np.array(data[start:end])
                    print("type batch_x :", type(batch_x))
                    print("len batch_x :", len(batch_x))
                    batch_y = np.array(targets[start:end])
                    print("type batch_y :", type(batch_y))
                    print("len batch_y :", len(batch_y))
                    hidden, _, c = sess.run([self.encoded, self.optimizer, self.cost],
                                            feed_dict={self.X: batch_x, self.Y: batch_y})
                    epoch_loss += c
                    i += self.batch_size
            self.saver.save(sess, 'selfautoencoder.ckpt')
            print('Accuracy', self.accuracy.eval({self.X: data, self.Y: targets}))
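The parts elided with "..." build the encoder/decoder layers plus the cost, optimizer, saver and accuracy ops. For context, here is a minimal sketch of what that graph construction could look like in TF 1.x; the layer sizes come from n_nodes_hl, but the activations, loss and optimizer shown here are illustrative assumptions rather than the exact original code:

import tensorflow as tf

def build_autoencoder_graph(input_dim, n_nodes_hl=(32, 16, 1), learning_rate=0.02):
    # Illustrative reconstruction of the elided graph-building code (an assumption)
    X = tf.placeholder('float', [None, input_dim])
    Y = tf.placeholder('float', [None, input_dim])
    n1, n2, n3 = n_nodes_hl
    # Encoder: input_dim -> 32 -> 16 -> 1 (bottleneck)
    enc = tf.layers.dense(X, n1, activation=tf.nn.relu)
    enc = tf.layers.dense(enc, n2, activation=tf.nn.relu)
    encoded = tf.layers.dense(enc, n3)
    # Decoder mirrors the encoder back to input_dim
    dec = tf.layers.dense(encoded, n2, activation=tf.nn.relu)
    dec = tf.layers.dense(dec, n1, activation=tf.nn.relu)
    decoded = tf.layers.dense(dec, input_dim)
    # Reconstruction loss and optimizer
    cost = tf.reduce_mean(tf.square(decoded - Y))
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
    return X, Y, encoded, decoded, cost, optimizer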
Here I create the input data; below you can see that I print out their main properties for reference (note that I am actually only interested in column 3):
features_DeepAE = create_feature_sets(filename)
Train_x = np.array(features_DeepAE[0])
Train_y = np.array(features_DeepAE[1])
print("type Train_x : ", type(Train_x))
print("type Train_x.T[3] : ", type(Train_x.T[3]))
print("len Train_x : ", len(Train_x))
print("len Train_x.T[3] : ", len(Train_x.T[3]))
print("shape Train_x : ", Train_x.shape)
print("type Train_y : ", type(Train_y))
print("type Train_y.T[3] : ", type(Train_y.T[3]))
print("len Train_y : ", len(Train_y))
print("len Train_y.T[3] : ", len(Train_y.T[3]))
print("shape Train_y : ", Train_y.shape)
Here I run the code:
DAE = Deep_Autoencoder(input_dim = len(Train_x))
DAE.train_neural_network(Train_x.T[3], Train_y.T[3])
These are the printouts, FYI:
type Train_x : <class 'numpy.ndarray'>
type Train_x.T[3] : <class 'numpy.ndarray'>
len Train_x : 3433
len Train_x.T[3] : 3433
shape Train_x : (3433, 5)
type Train_y : <class 'numpy.ndarray'>
type Train_y.T[3] : <class 'numpy.ndarray'>
len Train_y : 3433
len Train_y.T[3] : 3433
shape Train_y : (3433, 5)
self.X : Tensor("Placeholder_142:0", shape=(?, 3433), dtype=float32)
self.Y : Tensor("Placeholder_143:0", shape=(?, 3433), dtype=float32)
type batch_x : <class 'numpy.ndarray'>
len batch_x : 128
type batch_y : <class 'numpy.ndarray'>
len batch_y : 128
And finally, the error:
ValueError: Cannot feed value of shape (128,) for Tensor 'Placeholder_142:0', which has shape '(?, 3433)'
And yes... I am at placeholder #143... which gives you a measure of how many failed attempts there have been (reshaping the batches and/or the tensors, transposing one and/or the other, searching the internet for workarounds...)! Feel free to ask for more information if needed.
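For what it's worth, the mismatch can be reproduced in isolation: feeding a 1-D array of length 128 into a placeholder declared as (?, 3433) raises the same kind of error (only the index in the placeholder name differs):

import numpy as np
import tensorflow as tf

X = tf.placeholder('float', [None, 3433])   # same shape as self.X above
batch = np.zeros(128)                        # a 1-D batch of shape (128,)

with tf.Session() as sess:
    # ValueError: Cannot feed value of shape (128,) for Tensor 'Placeholder:0', which has shape '(?, 3433)'
    sess.run(tf.reduce_sum(X), feed_dict={X: batch})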
Answer 0 (score: 0)
Solved, thanks to Avishkar Bhoopchand and amirbar:
Set input_dim to 1 and add a "dummy" dimension to batch_x and batch_y so that they fit the placeholders of shape [?, 1], like this: batch_x = np.array(data[start:end])[:, None] and batch_y = np.array(targets[start:end])[:, None]. Indexing with None adds an empty dimension to a NumPy array.
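A minimal, runnable illustration of that fix, with a stand-in 1-D array playing the role of Train_x.T[3]; with input_dim = 1 the placeholders become shape (?, 1) and the reshaped batches feed cleanly:

import numpy as np

data = np.arange(3433, dtype=np.float32)       # stand-in for Train_x.T[3]
start, end = 0, 128

batch_x = np.array(data[start:end])            # before: shape (128,), rejected by a (?, 1) placeholder
print(batch_x.shape)                           # (128,)

batch_x = np.array(data[start:end])[:, None]   # after: trailing axis added
print(batch_x.shape)                           # (128, 1) -> matches a (?, 1) placeholder

np.newaxis is an equivalent and slightly more explicit spelling of None in that index.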