ValueError: all the input arrays must have same number of dimensions - manual MNIST in TensorFlow

Date: 2017-02-23 04:49:39

Tags: python numpy tensorflow

I am trying to take the stochastic gradient descent code I wrote in TensorFlow and train it on 25112 images that are similar to the MNIST dataset (the files look exactly like it). I apologize if this is a simple question, but I am not sure how to proceed. Thanks!

I am running into this error:

" ValueError:所有输入数组必须具有相同数量的维度"

on this line of code:

x = np.c_[np.ones(n), image_tensor2]  # line 75

I cannot figure out why this is not working. I think it has to do with how I am reading in the image files, but I cannot pin it down. Here is my code:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import argparse 

#load the images in order
vector = [] #initialize the vector

filenames = tf.train.match_filenames_once("train_data/*.jpg")
filename_queue = tf.train.string_input_producer(filenames)

image_reader = tf.WholeFileReader()
_, image_file = image_reader.read(filename_queue)
image_orig = tf.image.decode_jpeg(image_file)
image = tf.image.resize_images(image_orig, [28, 28])
image.set_shape((28, 28, 3))
images = tf.image.decode_jpeg(image_file)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    image_tensor = sess.run([images]) 
    #print(image_tensor)
    #coord.request_stop()
    #coord.join(threads)

image_tensor2 = np.array(image_tensor)
n_samples = image_tensor2.shape[0]
lossHistory=[]
ap = argparse.ArgumentParser()
ap.add_argument("-b", "--batch-size", type = int, default =32, help = "size of SGD mini-batches")
args = vars(ap.parse_args())

  # Create the model
x = tf.placeholder(tf.float32, [None, 784]) #784=28*28
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b

y_ = tf.placeholder(tf.float32, [25112, 10])


sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
  # Train

def next_batch(x, batchSize):
    for i in np.arange(0, x.shape[0], batchSize):
        yield (x[i:i + batchSize])

def gradient_descent_2(alpha, x, y, numIterations):
    m,n = (784, 25112) # number of samples
    theta = np.ones(n)
    theta.fill(0.01)
    x_transpose = x.transpose()
    losshistory=[]
    count = 0
    batchX = 50
    for (batchX) in next_batch(x, args["batch_size"]):
        for iter in range(0, numIterations):
            hypothesis = np.dot(x, theta)
            loss = hypothesis - y
            J = np.sum(loss ** 2) / (2 * m)  # cost
            lossHistory.append(J)
            print( "iter %s | J: %.3f" % (iter, J))      
            gradient = np.dot(x_transpose, loss) / m         
            theta = theta - alpha * gradient  
    return theta

if __name__ == '__main__':


    m, n = (784, 25112)
    x = np.c_[ np.ones(n), image_tensor2] # insert column
    alpha = 0.001 # learning rate
    theta = gradient_descent_2(alpha, image_tensor2, y_, 50)
    fig = plt.figure()
    print(theta)

1 Answer:

Answer 0 (score: 0):

My intuition is that image_tensor is a 3-D array, while np.ones(n) creates a 1-D array.
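
A minimal sketch of that mismatch (the shapes here are made up for illustration, not taken from your data): np.c_ refuses to combine arrays whose number of dimensions differ, and flattening each image into a single row is what makes them agree:

import numpy as np

ones_col = np.ones(3)                      # 1-D, shape (3,)
imgs = np.zeros((3, 28, 28, 3))            # 4-D, a hypothetical batch of three 28x28 RGB images
try:
    np.c_[ones_col, imgs]                  # a 1-D array next to a 4-D array
except ValueError as err:
    print(err)                             # same ValueError as in the question

flat = imgs.reshape(imgs.shape[0], -1)     # one flattened image per row, shape (3, 2352)
x = np.c_[np.ones(flat.shape[0]), flat]    # now both inputs are 2-D, result shape (3, 2353)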

But since your goal is to insert a bias column (my guess), here is another quick way to do it:

b = np.ones((image_tensor2.shape[0], image_tensor2.shape[1] + 1))  # one extra column for the bias
b[:, 1:] = image_tensor2
x = b

Example:

c = 
array([[7, 5, 6, 8, 7],
       [5, 8, 7, 9, 7],
       [9, 5, 6, 5, 8]])

b = np.ones((3, 6))
b[:, 1:] = c

b =
array([[ 1.,  7.,  5.,  6.,  8.,  7.],
       [ 1.,  5.,  8.,  7.,  9.,  7.],
       [ 1.,  9.,  5.,  6.,  5.,  8.]])
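
Applied to your script, a rough sketch of the same idea (this assumes image_tensor2 ends up holding one image per leading index, e.g. shape (n_samples, 28, 28, 3); the reshape is the part that makes the dimensions line up):

n_samples = image_tensor2.shape[0]
flat = image_tensor2.reshape(n_samples, -1)    # flatten each image into one row

b = np.ones((n_samples, flat.shape[1] + 1))    # leading column of ones for the bias
b[:, 1:] = flat
x = b                                          # use this in place of the failing np.c_ line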