Running the SSD test code

Date: 2017-10-16 10:27:45

Tags: python-3.x deep-learning object-detection

I am trying to run the tensorflow/ssd_tests code with Spyder on Windows 10 (Python 3.5, TensorFlow 1.1).

The error I get is the following:

    File "D:\software\anaconda\envs\tensorflow\lib\site-packages\zmq\utils\jsonapi.py", line 43, in dumps
    s = s.encode('utf8') 
    UnicodeEncodeError: 'utf-8' codec can't encode character '\udcd5' in position 2896: surrogates not allowed

I ran the code step by step and found that the error occurs at provider = slim.dataset_data_provider.DatasetDataProvider(). Inside the dataset_data_provider file, the function DatasetDataProvider() executes key, data = parallel_reader.parallel_read(), where parallel_reader comes from "from tensorflow.contrib.slim.python.slim.data import parallel_reader", and it is this call that raises the error.
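
For reference, the SSD-Tensorflow scripts build the provider roughly as follows. This is a minimal sketch only; the dataset construction, the field names and the queue sizes are my assumptions, not code taken from this project:

    import tensorflow as tf

    slim = tf.contrib.slim

    # Assumed: `dataset` is a slim.dataset.Dataset built from the Pascal VOC
    # TFRecords (e.g. via the dataset factory module in SSD-Tensorflow).
    provider = slim.dataset_data_provider.DatasetDataProvider(
        dataset,
        num_readers=4,
        common_queue_capacity=256,
        common_queue_min=128,
        shuffle=True)

    # DatasetDataProvider internally calls parallel_reader.parallel_read(),
    # which is where the UnicodeEncodeError above was raised.
    [image, shape, glabels, gbboxes] = provider.get(
        ['image', 'shape', 'object/label', 'object/bbox'])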

The parallel_read() function is:

    def parallel_read(data_sources,
                      reader_class,
                      num_epochs=None,
                      num_readers=4,
                      reader_kwargs=None,
                      shuffle=True,
                      dtypes=None,
                      capacity=256,
                      min_after_dequeue=128,
                      seed=None,
                      scope=None):
      data_files = get_data_files(data_sources)
      with ops.name_scope(scope, 'parallel_read'):
        filename_queue = tf_input.string_input_producer(
            data_files, num_epochs=num_epochs, shuffle=shuffle, seed=seed,
            name='filenames')
        dtypes = dtypes or [tf_dtypes.string, tf_dtypes.string]
        if shuffle:
          common_queue = data_flow_ops.RandomShuffleQueue(
              capacity=capacity,
              min_after_dequeue=min_after_dequeue,
              dtypes=dtypes,
              seed=seed,
              name='common_queue')
        else:
          common_queue = data_flow_ops.FIFOQueue(
              capacity=capacity, dtypes=dtypes, name='common_queue')
        summary.scalar('fraction_of_%d_full' % capacity,
                       math_ops.to_float(common_queue.size()) * (1. / capacity))
        return ParallelReader(
            reader_class,
            common_queue,
            num_readers=num_readers,
            reader_kwargs=reader_kwargs).read(filename_queue)
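
For context, parallel_read() can also be called directly. The sketch below (the TFRecord pattern is a placeholder) shows that when dtypes is left as None it simply falls back to two string tensors, one for the record key and one for the serialized example:

    import tensorflow as tf
    from tensorflow.contrib.slim.python.slim.data import parallel_reader

    # Placeholder pattern; point this at the actual TFRecord files.
    data_sources = './tfrecords/voc_2007_test_*.tfrecord'

    # dtypes is not passed here, so it defaults to [tf.string, tf.string],
    # exactly as in the function body above.
    key, value = parallel_reader.parallel_read(
        data_sources,
        reader_class=tf.TFRecordReader,
        num_readers=4,
        shuffle=True)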

Is dtype the cause of this error? How can I get rid of it? Thanks!

I found the cause: once the model is loaded correctly, the error goes away.
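
If anyone hits the same traceback: a plausible check (my own assumption, not part of the original code) is to confirm that the dataset/model path pattern actually resolves to files and that every resulting path is clean UTF-8, since the \udcd5 surrogate in the traceback usually comes from a path string decoded with the wrong encoding:

    import glob

    # Hypothetical pattern; replace with the dataset/checkpoint path the SSD
    # test code actually uses.
    data_pattern = './tfrecords/voc_2007_test_*.tfrecord'

    files = glob.glob(data_pattern)
    if not files:
        raise IOError('no files match %s' % data_pattern)

    for path in files:
        # Raises UnicodeEncodeError if the path contains surrogate characters,
        # the same failure mode seen in the zmq traceback above.
        path.encode('utf-8')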

0 Answers:

No answers yet.