How do I fix this error: 'numpy.ndarray' object has no attribute 'append'

Time: 2019-06-29 04:44:09

Tags: python python-3.x numpy

I hope you are doing well. I am trying to run the following code, and I get the error "'numpy.ndarray' object has no attribute 'append'". I tried the solutions recommended in other questions (e.g. numpy.append() and numpy.concatenate()), but I could not fix the problem.

from keras.applications import VGG16
from keras.applications import imagenet_utils
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
from sklearn.preprocessing import LabelEncoder
from hdf5datasetwriter import HDF5DatasetWriter
from imutils import paths
import progressbar
import argparse
import random
import numpy as np
import os


# construct the argument parser and parse the arguments

ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required= True,
                help=" path to the input dataset ")
ap.add_argument("-o", "--output", required= True,
                help=" path to output HDF5 file ")
ap.add_argument("-b","--batch_size", type= int, default=32,
                help =" batch size of images to be passed through network ")
ap.add_argument("-s","--buffer_size", type =int, default=1000,
                help=" size of feature extraction buffer")

args= vars(ap.parse_args())

# store the batch size in a convenience variable
bs = args["batch_size"]

# grab the list of images that we will be describing then randomly shuffle them to
# allow for easy training and testing splits via array slicing during training time

print ("[INFO] loading images ...")
imagePaths= list(paths.list_images(args["dataset"]))
random.shuffle(imagePaths)

# extract the class labels from the image paths, then encode the labels

labels = [p.split(os.path.sep)[-2] for p in imagePaths]
le= LabelEncoder()
labels= le.fit_transform(labels)

# load the VGG16 network

print("[INFO] loading network ...")

model= VGG16(weights="imagenet", include_top=False)

# initialize the HDF5 dataset writer then store the class label names in the
# dataset
dataset = HDF5DatasetWriter((len(imagePaths), 512*7*7), args["output"], dataKey="features",
                            bufSize= args["buffer_size"])
dataset.storeClassLabels(le.classes_)

# initialize the progress bar
widgets = [" extracting features:", progressbar.Percentage(), " " , progressbar.Bar(),
           " " , progressbar.ETA()]
pbar= progressbar.ProgressBar(maxval=len(imagePaths), widgets= widgets ).start()

# loop over the images in batches

for i in np.arange(0, len(imagePaths),bs):
    # extract the batch of images and labels, then initialize the
    # list of actual images that will be passed through the network for feature
    # extraction

    batchPaths= imagePaths[i:i + bs]
    batchLabels = labels[i:i+bs]
    batchImages = []

    for (j, imagePath) in enumerate(batchPaths):
        # load the input image using the keras helper utility
        # while ensuring the image is resized to 224x224 pixels

        image = load_img(imagePath, target_size = (224,224))
        image = img_to_array(image)

        # preprocess the image by (1) expanding the dimensions and
        # (2) subtracting the mean RGB pixel intensity from the imagenet dataset

        image = np.expand_dims(image, axis =0)
        #image = imagenet_utils.preprocess_input(image)

        # add the image to the batch
        batchImages.append(image)

        # pass the images through the network and use the outputs as our
        # actual features

        batchImages = np.vstack(batchImages)
        features = model.predict(batchImages, batch_size = bs)

        # reshape the features so that each image is represented by a flattened feature vector of the maxPooling2D outputs
        features = features.reshape((features.shape[0], 512*7*7))
        # add the features and the labels to HDF5 dataset
        dataset.add(features, batchLabels)
        pbar.update(i)


dataset.close()
pbar.finish()

When I run it, I get the AttributeError quoted above (the traceback was posted as a screenshot).

I hope you can help me solve this problem. Thanks in advance to everyone.

3 Answers:

Answer 0 (score: 0)

Numpy array instances do not have an append method. Call

np.append(your_array, value_to_append)

instead; append is a function of the numpy module, not a method of the array.
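
As a minimal sketch of the difference (the variable names are just for illustration): np.append returns a new array rather than modifying the one you pass in, so you have to assign the result back.

import numpy as np

arr = np.array([1, 2, 3])
# arr.append(4)           # AttributeError: 'numpy.ndarray' object has no attribute 'append'
arr = np.append(arr, 4)   # module-level function; returns a new array
print(arr)                # [1 2 3 4]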

Answer 1 (score: 0)

From the documentation:

You should do something like numpy.append(your_arr, value_to_append), because append is not actually a defined method on numpy arrays, which is what the error message is telling you. In your code, batchImages = np.append(batchImages, image) would also work.
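
A minimal sketch of what this answer suggests, applied to the question's inner loop (assuming 224x224 RGB inputs as in the question); note the explicit axis=0, since np.append without an axis flattens its inputs:

batchImages = np.empty((0, 224, 224, 3))   # an empty array instead of a list

for (j, imagePath) in enumerate(batchPaths):
    image = load_img(imagePath, target_size=(224, 224))
    image = np.expand_dims(img_to_array(image), axis=0)    # shape (1, 224, 224, 3)
    batchImages = np.append(batchImages, image, axis=0)    # returns a new array each time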

Answer 2 (score: 0)

You start with
batchImages = []

and then successfully append to that list:

batchImages.append(image)

But then, in the same iteration, you create an array and assign it to the same variable:

batchImages = np.vstack(batchImages)

On the next iteration, batchImages is no longer a list, so append no longer works!

I suspect the vstack has the wrong indentation. Is it supposed to happen inside the j loop, or once per i iteration?

Ignore the suggestions to use np.append. It should not be used iteratively, and it is hard to use correctly. It is just a crude wrapper around concatenate. vstack is better.
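
To make that fix concrete, here is a sketch of the question's batch loop with vstack, predict, and dataset.add de-indented so they run once per i iteration, after the inner j loop has finished filling the list (same variable names as in the question):

for i in np.arange(0, len(imagePaths), bs):
    batchPaths = imagePaths[i:i + bs]
    batchLabels = labels[i:i + bs]
    batchImages = []

    for (j, imagePath) in enumerate(batchPaths):
        image = load_img(imagePath, target_size=(224, 224))
        image = img_to_array(image)
        image = np.expand_dims(image, axis=0)
        batchImages.append(image)            # batchImages is still a plain Python list here

    # stack the list into one array and run the network once per batch, not once per image
    batchImages = np.vstack(batchImages)
    features = model.predict(batchImages, batch_size=bs)
    features = features.reshape((features.shape[0], 512 * 7 * 7))
    dataset.add(features, batchLabels)
    pbar.update(i)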