Input to reshape is a tensor with 89401 values, but the requested shape has 268203

Time: 2018-01-12 03:23:41

Tags: python-2.7 tensorflow tensorflow-slim

While retraining the inception_v3 network, I created a TFRecord file containing images of shape [299, 299] (all JPEG-encoded) as my training data. I only got the TFRecord result, and then I got the following error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 89401 values, but the requested shape has 268203

[[Node: Reshape = Reshape[T=DT_UINT8, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](DecodeRaw, Reshape/shape)]]

89401 = 299x299 and 268203 = 299x299x3. Besides, I have 6 classes with 32 images per class, and batch_size = 32. Before training I print the size of each image matrix and get '268203'; the output log is shown after the code. My code to create and read the TFRecord is:

import os
import tensorflow as tf
from PIL import Image 
import matplotlib.pyplot as plt
import numpy as np

cwd = '/home/xzy/input_data/testnet/images/'
tfrecord_dir = '/home/xzy/input_data/testnet/'
width, height = 299, 299

def create_tfrecord(file_path):
    classes = {'boat', 'junk', 'carrier', 'warship', 'raft', 'speedboat'} 
    writer = tf.python_io.TFRecordWriter(tfrecord_dir + 'train.tfrecords')  

    for index, name in enumerate(classes):
        class_path = file_path + name + '/'
        for img_name in os.listdir(class_path):
            img_path = class_path + img_name  

            img = Image.open(img_path)
            img = img.resize((width, height))
            img_raw = img.tobytes() 
            example = tf.train.Example(features=tf.train.Features(feature={
                "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[index])),
                'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw]))
            }))  
            writer.write(example.SerializeToString()) 

    writer.close()

def read_record(path):
    filename_queue = tf.train.string_input_producer([path])
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(serialized_example,
                                       features={
                                           'label': tf.FixedLenFeature([], tf.int64),
                                           'img_raw': tf.FixedLenFeature([], tf.string),
                                       })
    image = tf.decode_raw(features['img_raw'], tf.uint8)
    image = tf.reshape(image, [299, 299, 3])
    label = tf.cast(features['label'], tf.int32)
    image_batch, label_batch = tf.train.batch([image, label],
                                              batch_size=32, num_threads=4, capacity=300)
    label_batch = tf.one_hot(label_batch, depth=6)
    label_batch = tf.cast(label_batch, dtype=tf.int32)
    label_batch = tf.reshape(label_batch, [32, 6])
    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        try:
            imgs, labs = sess.run([image_batch, label_batch])
            imgs = tf.to_float(imgs)
            init_op = tf.global_variables_initializer()
            sess.run(init_op)
        except tf.errors.OutOfRangeError:
            print('Done training -- epoch limit reached')
        coord.request_stop()
        coord.join(threads)
        array = imgs.eval()
        print("##########################################")
        for i in range(32):
            ar = array[i].flatten()
            print(len(ar))
        print('#######################################')
        sess.close()
    return imgs, labs
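
For reference, this is where the two sizes in the error come from (a quick check; the Image.new calls below only stand in for a normal RGB training image and a black/grayscale one, they are not part of my pipeline):

from PIL import Image

rgb = Image.new('RGB', (299, 299))    # stand-in for a 3-channel training image
gray = Image.new('L', (299, 299))     # stand-in for a single-channel (black/grayscale) image

print(len(rgb.tobytes()))     # 268203 = 299 x 299 x 3
print(len(gray.tobytes()))    # 89401  = 299 x 299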

The output log before training is:

##########################################
i=0, len(ar)=268203
i=1, len(ar)=268203
i=2, len(ar)=268203
i=3, len(ar)=268203
i=4, len(ar)=268203
i=5, len(ar)=268203
i=6, len(ar)=268203
i=7, len(ar)=268203
i=8, len(ar)=268203
i=9, len(ar)=268203
i=10, len(ar)=268203
i=11, len(ar)=268203
i=12, len(ar)=268203
i=13, len(ar)=268203
i=14, len(ar)=268203
i=15, len(ar)=268203
i=16, len(ar)=268203
i=17, len(ar)=268203
i=18, len(ar)=268203
i=19, len(ar)=268203
i=20, len(ar)=268203
i=21, len(ar)=268203
i=22, len(ar)=268203
i=23, len(ar)=268203
i=24, len(ar)=268203
i=25, len(ar)=268203
i=26, len(ar)=268203
i=27, len(ar)=268203
i=28, len(ar)=268203
i=29, len(ar)=268203
i=30, len(ar)=268203
i=31, len(ar)=268203
#######################################

Why do I only get this result? Then I got the error. The stack trace is:

step=0

Why is that? Can you give me some ideas? Thanks.

1 Answer:

Answer 0 (score: 0)

This is my fault. When I pasted the trace log, I pasted a different error log by mistake. Fortunately, I have fixed the error:

Input to reshape is a tensor with 89401 values, but the requested shape has 268203

What I did:

  1. Removed the black (grayscale) images from my training data (see the sketch at the end of this answer).
  2. I had misused a tensor as a numpy array, or a numpy array as a tensor; find it and correct it.
  3. The error Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?

    If reuse=False, you get the step=0 result, but the weights & biases then already exist at the next step, so you get the error Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?

    If reuse=True, you get an uninitialized weight & bias error, because at the first step the weights and biases do not exist yet.
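
    A minimal sketch of this reuse behaviour with a plain variable_scope (illustrative only; the scope name 'net' and the variable 'w' are made up, not from my network):

    import tensorflow as tf

    def layer(x, reuse):
        # reuse=False: the second call fails with "Variable net/w already exists ...
        #              Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?"
        # reuse=True:  the first call fails because net/w does not exist yet
        # reuse=tf.AUTO_REUSE: create on the first call, reuse on later calls
        with tf.variable_scope('net', reuse=reuse):
            w = tf.get_variable('w', shape=[3, 3])
            return tf.matmul(x, w)

    x = tf.placeholder(tf.float32, [None, 3])
    y1 = layer(x, reuse=tf.AUTO_REUSE)   # step 0: creates net/w
    y2 = layer(x, reuse=tf.AUTO_REUSE)   # next step: reuses net/w instead of raising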

    When I set reuse=tf.AUTO_REUSE, it succeeded:

    logits, end_points = v3.inception_v3(image, num_classes, 
                                             is_training=True, reuse=tf.AUTO_REUSE)
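
A minimal sketch of point 1 when writing the TFRecord: check the raw byte length and drop (or convert) anything that is not 299x299x3. The write_example helper below is only illustrative, not my original create_tfrecord; it would replace the body of the inner loop there.

import tensorflow as tf
from PIL import Image

width, height = 299, 299
expected_len = width * height * 3        # 268203

def write_example(writer, img_path, label):
    img = Image.open(img_path)
    img = img.resize((width, height))
    img_raw = img.tobytes()
    if len(img_raw) != expected_len:     # e.g. 89401 for a black/grayscale image
        # point 1: drop it (or keep it by using img.convert('RGB') before tobytes())
        print('skip %s: %d bytes' % (img_path, len(img_raw)))
        return
    example = tf.train.Example(features=tf.train.Features(feature={
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw]))
    }))
    writer.write(example.SerializeToString())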