Stacking arrays of multi-dimensional arrays in Python

Date: 2018-04-16 21:39:28

Tags: python numpy tensorflow keras

I can't quite get my head around this... and I'm not sure whether stacking is even the right tool to use here.

A.shape = (28,28,1)
B.shape = (28,28,1)

If I want to merge/append/stack these arrays into:

C.shape = (2,28,28,1)

How do I do this? And is there a += style version of it, where I can add a new (28,28,1) array to an existing stack to get (3,28,28,1)?

Edit

I have an array of 100 grayscale images: (100, 784). I think I can reshape it to (100,28,28,1) with tf.reshape.

I want to standardize the pixel values of all 100 images with tf.image.per_image_standardization (doc), but that function only accepts a single image of shape (h, w, ch), i.e. (28,28,1).

Any suggestions on how to do this efficiently?

CODE

for i in range(epochs):
    for j in range(samples // batch_size):  # integer division so range() gets an int

        batch_xs, batch_ys = mnist.train.next_batch(batch_size) #(100,784)
        batch_xsr = tf.reshape(batch_xs, [-1, 28, 28, 1]) # (100,28,28,1)

        ... 

        #somehow use tf.image.per_image_standardization (input shape = 
        #(28,28,1)) on each of the 100 images, and end up with 
        #shape (100,28,28,1) again.

        ...

        _, loss = sess.run([train, loss_op], feed_dict={x: batch_xs, y: batch_ys})

Note to self: TensorFlow expects an np.array in the feed dict.

2 answers:

Answer 0 (score: 3)

You could do it like this...

import numpy as np

A = np.zeros(shape=(28, 28, 1))
B = np.zeros(shape=(28, 28, 1))
A.shape  # (28, 28, 1)
B.shape  # (28, 28, 1)

C = np.array([A, B])

C.shape  # (2, 28, 28, 1)

Then, to add more, you can use something like this (assuming 'new' here has the same shape as A or B):

def add_another(C, new):
    return np.array(list(C) + [new])
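For example, appending one more image grows the stack along the first axis. Note that list(C) + [new] rebuilds the whole array, so every call copies all of the existing data:

```python
import numpy as np

def add_another(C, new):
    # Unpack C into a list of (28, 28, 1) arrays, append the new one,
    # and re-stack into a single (n+1, 28, 28, 1) array
    return np.array(list(C) + [new])

A = np.zeros((28, 28, 1))
B = np.zeros((28, 28, 1))
C = np.array([A, B])           # (2, 28, 28, 1)

new = np.ones((28, 28, 1))
C = add_another(C, new)
print(C.shape)                 # (3, 28, 28, 1)
```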

Answer 1 (score: 2)

You can use numpy's stack or concatenate functions:

import numpy as np

A = np.zeros((28, 28, 1))
B = np.zeros((28, 28, 1))

C = np.stack((A, B), axis=0)

print (C.shape)

>>> (2L, 28L, 28L, 1L)

To append more arrays of shape (28, 28, 1) to an array of shape (x, 28, 28, 1), concatenate along axis=0:

D = np.ones((28,28,1))
C = np.concatenate([C, [D]], axis=0)
#C = np.append(C, [D], axis=0)  # equivalent using np.append which is wrapper around np.concatenate

print (C.shape)

>>> (3L, 28L, 28L, 1L)
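As a minor variation, instead of wrapping D in a list you can add the leading axis explicitly with np.newaxis (or the shorthand D[None]); the result is the same:

```python
import numpy as np

C = np.zeros((2, 28, 28, 1))   # existing stack of two images
D = np.ones((28, 28, 1))       # a new image to append

# D[np.newaxis] has shape (1, 28, 28, 1), so it concatenates cleanly
C = np.concatenate([C, D[np.newaxis]], axis=0)
print(C.shape)                 # (3, 28, 28, 1)
```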

Edit

I'm not familiar with tensorflow, but try something like this to standardize your images:

for i in range(epochs):
    for j in range(samples // batch_size):  # integer division so range() gets an int

        batch_xs, batch_ys = mnist.train.next_batch(batch_size) #(100,784)
        batch_xsr = tf.reshape(batch_xs, [-1, 28, 28, 1]) # (100,28,28,1)

        # Tensors don't support item assignment, so standardize each image
        # separately and stack the results back into a (100,28,28,1) tensor
        standardized = [tf.image.per_image_standardization(batch_xsr[k])
                        for k in range(batch_size)]
        batch_xsr = tf.stack(standardized, axis=0)

        _, loss = sess.run([train, loss_op], feed_dict={x: batch_xs, y: batch_ys})
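A per-image loop like the one above is slow, and since per_image_standardization just computes (x - mean) / adjusted_stddev per image, where adjusted_stddev = max(stddev, 1/sqrt(num_elements)), the same math can be applied to the whole batch at once in NumPy before building the feed dict. A minimal sketch, assuming the batch is a NumPy array of shape (100, 28, 28, 1):

```python
import numpy as np

def standardize_batch(batch):
    """Per-image standardization over a (N, H, W, C) batch,
    mirroring tf.image.per_image_standardization's formula."""
    n_elems = np.prod(batch.shape[1:])                    # H * W * C per image
    mean = batch.mean(axis=(1, 2, 3), keepdims=True)      # one mean per image
    stddev = batch.std(axis=(1, 2, 3), keepdims=True)     # one stddev per image
    adjusted_stddev = np.maximum(stddev, 1.0 / np.sqrt(n_elems))
    return (batch - mean) / adjusted_stddev

batch = np.random.rand(100, 28, 28, 1)
out = standardize_batch(batch)
print(out.shape)        # (100, 28, 28, 1)
```

Each image in the result has zero mean, so the batch can be fed straight into the feed dict without any per-image graph ops.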