tf.convert_to_tensor(pred_labels) - ValueError: Argument must be a dense tensor: shape [2, 436, 1024, 2], but wanted [2]

Asked: 2019-06-21 14:59:00

Tags: python tensorflow

I am trying to convert the output of an existing optical flow network from a NumPy array back into a tensor so that I can run it through a differentiable interpolation network.

The PWC-Net code takes two input images of the same size and computes the flow correspondence between them. For a single image pair, I believe the flow is the per-pixel displacement in x and y, with shape [1, h, w, 2]. However, a batch can contain a varying number of image pairs, so if we call the batch size b, the 4D volume becomes [b, h, w, 2].
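For reference, here is a minimal sketch (random placeholder data, not the PWC-Net API) of how a list of per-pair flow fields of shape [h, w, 2] stacks into one dense [b, h, w, 2] volume:

import numpy as np

# Two made-up flow fields, one per image pair, each of shape [h, w, 2]
h, w = 436, 1024
flows = [np.random.rand(h, w, 2).astype(np.float32) for _ in range(2)]

# Stacking along a new leading axis gives the dense [b, h, w, 2] volume
batch = np.stack(flows, axis=0)
print(batch.shape)  # (2, 436, 1024, 2)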

https://github.com/philferriere/tfoptflow/blob/master/tfoptflow/pwcnet_predict_from_img_pairs.py

I am converting the output back to a tensor using:

pred_labels_tensor = tf.convert_to_tensor(pred_labels)
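For what it's worth, on toy data tf.convert_to_tensor does accept a list of same-shaped NumPy arrays; here is a minimal sketch with made-up values:

import numpy as np
import tensorflow as tf

# A list of two arrays with identical shape and dtype converts cleanly
toy = [np.zeros((4, 4, 2), np.float32), np.ones((4, 4, 2), np.float32)]
toy_tensor = tf.convert_to_tensor(toy)
print(toy_tensor.shape)  # (2, 4, 4, 2)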

I have already read these:

Convert Python sequence to NumPy array, filling missing values

How to input a list of lists with different sizes in tf.data.Dataset

but I still don't understand what I need to do to make this work.

I have also looked at the code this file runs, and it does use np.asarray.

Those two links make me think the problem involves a list of lists, or that some zero-padding may be needed. How can I find out what is actually wrong, and how do I fix it?
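As a first diagnostic step (my own sketch, not code from the repository), printing the structure of pred_labels should show whether its elements really are dense, same-shaped arrays:

# Inspect what predict_from_img_pairs actually returned
print(type(pred_labels), len(pred_labels))
for i, flow in enumerate(pred_labels):
    print(i, type(flow), getattr(flow, 'dtype', None), getattr(flow, 'shape', None))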

From this Python file: https://github.com/philferriere/tfoptflow/blob/master/tfoptflow/pwcnet_predict_from_img_pairs.py

To reproduce the problem, you can take the existing sample provided in the GitHub download and use this code in place of the for loop:

image_path1 = f'./samples/mpisintel_test_clean_ambush_1_frame_0001.png'
image_path2 = f'./samples/mpisintel_test_clean_ambush_1_frame_0002.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

image_path1 = f'./samples/mpisintel_test_clean_ambush_1_frame_0003.png'
image_path2 = f'./samples/mpisintel_test_clean_ambush_1_frame_0004.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

pred_labels = nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)

pred_labels_tensor = tf.convert_to_tensor(pred_labels)

I expected a tensor as output; instead, I get this error in the VS Code terminal:

ValueError: Argument must be a dense tensor: [array([[[ 0.32990038, -0.11566047],
        [ 0.35661912, -0.09227534],
        [ 0.38333783, -0.06889021],
        ...,
        [-0.1237613 ,  0.07946336],
        [-0.1237613 ,  0.07946336],
        [-0.1237613 ,  0.07946336]],

       [[ 0.34405386, -0.09286585],
        [ 0.36766803, -0.07679807],
        [ 0.39128217, -0.06073029],
        ...,
        [-0.10938472,  0.08551764],
        [-0.10938472,  0.08551764],
        [-0.10938472,  0.08551764]],

       [[ 0.35820735, -0.07007124],
        [ 0.37871695, -0.0613208 ],
        [ 0.39922655, -0.05257037],
        ...,
        [-0.09500815,  0.09157193],
        [-0.09500815,  0.09157193],
        [-0.09500815,  0.09157193]],

       ...,

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]],

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]],

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]]], dtype=float32), array([[[ 0.49922907,  0.08599953],
        [ 0.5034714 ,  0.1123561 ],
        [ 0.5077137 ,  0.13871266],
        ...,
        [-0.3719127 ,  0.1080336 ],
        [-0.3719127 ,  0.1080336 ],
        [-0.3719127 ,  0.1080336 ]],

       [[ 0.49763823,  0.11536887],
        [ 0.4972613 ,  0.13717887],
        [ 0.49688435,  0.15898886],
        ...,
        [-0.36932352,  0.11556612],
        [-0.36932352,  0.11556612],
        [-0.36932352,  0.11556612]],

       [[ 0.49604735,  0.14473821],
        [ 0.4910512 ,  0.16200164],
        [ 0.48605505,  0.17926508],
        ...,
        [-0.36673436,  0.12309864],
        [-0.36673436,  0.12309864],
        [-0.36673436,  0.12309864]],

       ...,

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]],

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]],

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]]], dtype=float32)] - [2, 436, 1024, 2], but wanted [2]

If I reduce the batch size to one, I get this error instead:

ValueError: Argument must be a dense tensor: [...] - [1, 436, 1024, 2], but wanted [1]

For a minimal reproducible example, you need the following:

Python 3.7.3 and TensorFlow 1.13.1 (the latest stable release). You also need to copy the script below and paste it over the existing pwcnet_predict_from_img_pairs.py.

Finally, download the pwcnet-lg-6-2-multisteps-chairsthingsmix model from https://drive.google.com/drive/folders/1iRJ8SFF6fyoICShRVWuzr192m_DgzSYp

"""
pwcnet_predict_from_img_pairs.py
Run inference on a list of images pairs.
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
"""
from __future__ import absolute_import, division, print_function

from voxel_flow_geo_layer_utils import bilinear_interp
from voxel_flow_geo_layer_utils import meshgrid

from copy import deepcopy
from skimage.io import imread
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TEST_OPTIONS
#from visualize import display_img_pairs_w_flows
import visualize
import numpy as np
import tensorflow as tf

# TODO: Set device to use for inference
# Here, we're using a GPU (use '/device:CPU:0' to run inference on the CPU)
gpu_devices = ['/device:GPU:0']  
controller = '/device:GPU:0'

# TODO: Set the path to the trained model (make sure you've downloaded it first https://drive.google.com/drive/folders/1iRJ8SFF6fyoICShRVWuzr192m_DgzSYp)
ckpt_path = './models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000'

# Build a list of image pairs to process (in this case it's just one image pair)
img_pairs = []
image_path1 = f'./samples/mpisintel_test_clean_ambush_1_frame_0001.png'
image_path2 = f'./samples/mpisintel_test_clean_ambush_1_frame_0002.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

# Configure the model for inference, starting with the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_TEST_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_path'] = ckpt_path
nn_opts['batch_size'] = 1
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller

# We're running the PWC-Net-large model in quarter-resolution mode
# That is, with a 6 level pyramid, and upsampling of level 2 by 4 in each dimension as the final flow prediction
nn_opts['use_dense_cx'] = True
nn_opts['use_res_cx'] = True
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2

# The size of the images in this dataset are not multiples of 64, while the model generates flows padded to multiples
# of 64. Hence, we need to crop the predicted flows to their original size
nn_opts['adapt_info'] = (1, 8, 8, 2)

# Instantiate the model in inference mode and display the model configuration
nn = ModelPWCNet(mode='test', options=nn_opts)
nn.print_config()

# Generate the predictions and convert them to a tensor
pred_labels = nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)
# pred_labels is a Python list with one flow array per image pair;
# each flow has shape [436, 1024, 2], and the list has length 1 here
pred_labels_tensor = tf.convert_to_tensor(pred_labels)
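For completeness, one workaround I am considering (untested; stacking the list with np.stack is my own idea, not something the repository suggests) is to collapse the list into a single dense ndarray before handing it to TensorFlow:

# Hypothetical workaround: build one [b, h, w, 2] ndarray first,
# then convert that single dense array to a tensor
pred_labels_array = np.stack(pred_labels, axis=0)
pred_labels_tensor = tf.convert_to_tensor(pred_labels_array)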

0 Answers:

There are no answers.