I have a 4-dimensional tensor of image pixel data (red (height, width), green (height, width), blue (height, width), 14000 examples) and a CSV file containing the bounding-box coordinates for each image (image name, X1, Y1, X2, Y2); it has 14000 rows, one per example.
How do I feed this data into my neural network? Currently, if I feed in the tensor, it passes the entire array of 14000 examples against a single row of (X1, Y1, X2, Y2), when it should pass one image array per row of x1, y1, x2, y2.
Is there any workaround?
Here is the code and the associated error:
import numpy as np
import pandas as pd
from os import listdir
from PIL import Image

train_csv = pd.read_csv('datasets/training.csv').values
test_csv = pd.read_csv('datasets/test.csv').values

y_train = train_csv[:,[1,2,3,4]]   # bounding-box targets (X1, Y1, X2, Y2)
x_train_names = train_csv[:,0]     # names of the images

#### load images into an array ####
X_train = []
path = "datasets/images/images/"
imagelist = listdir(path)
for i in range(len(x_train_names)):
    img_name = x_train_names[i]
    img = Image.open(path + str(img_name))
    arr = np.array(img)
    X_train.append(arr)
#### building a very basic classifier, just to get some result ####
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense, Dropout

classifier = Sequential()
classifier.add(Convolution2D(64, (3,3), input_shape=(64,64,3), activation='relu'))
classifier.add(Dropout(0.2))
classifier.add(MaxPooling2D((4,4)))
classifier.add(Convolution2D(32, (2,2), activation='relu'))
classifier.add(MaxPooling2D((2,2)))
classifier.add(Flatten())
classifier.add(Dense(16, activation='relu'))
classifier.add(Dropout(0.5))
classifier.add(Dense(4))
classifier.compile('adam', 'binary_crossentropy', ['accuracy'])

classifier.fit(x=X_train, y=y_train, steps_per_epoch=80, batch_size=32, epochs=25)
Error:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 14000 arrays:
[array([[[141, 154, 144],
[141, 154, 144],
[141, 154, 144],
...,
[149, 159, 150],
[150, 160, 151],
[150, 160, 151]],
[[140, 153, 143],
[…
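For context, the error says the model received a Python list of 14000 separate arrays where it expected a single array. A minimal sketch (assuming every image has the same height and width) of collapsing the list built above into one (14000, height, width, 3) array before calling fit:

import numpy as np

# Stack the per-image arrays along a new leading "examples" axis;
# this only works if all images share identical dimensions.
X_train = np.stack(X_train, axis=0)
print(X_train.shape)   # e.g. (14000, 480, 640, 3)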
EDIT: I converted all the images to grayscale so that I don't get a memory error. This means my X_train should have 1 dimension along the number of channels (it was 3 earlier, for RGB). Here is my edited code:
y_train = train_csv[:,[1,2,3,4]]   # bounding-box targets (X1, Y1, X2, Y2)
x_train_names = train_csv[:,0]     # names of the images

# load images into an array
path = "datasets/images/images/"
imagelist = listdir(path)
img_name = x_train_names[0]
img = Image.open(path + str(img_name))   # open one image to read off its dimensions
X_train = np.ndarray((14000, img.height, img.width, 1))
for i in range(len(x_train_names)):
    img_name = x_train_names[i]
    # converting the image to grayscale because I get a memory error otherwise
    img = Image.open(path + str(img_name)).convert('L')
    X_train[i,:,:,:] = np.asarray(img)
ValueError: could not broadcast input array from shape (480,640) into shape (480,640,1)
(on the line X_train[i,:,:,:] = np.asarray(img))
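For reference, this broadcast error happens because np.asarray(img) on a grayscale image is 2-D, shape (480, 640), while the target slice is 3-D, shape (480, 640, 1). A minimal sketch of two equivalent ways to make the shapes agree:

# Option 1: write into the single channel explicitly
X_train[i,:,:,0] = np.asarray(img)

# Option 2: append a length-1 channel axis to the image array
X_train[i,:,:,:] = np.asarray(img)[:, :, np.newaxis]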
Answer 0 (score: 0)
The first step is always to figure out what input shape your first convolution layer expects. The Conv2D documentation states that the expected shape of the 4D input tensor is (batch, height, width, channels) when using channels_last.
To load the data, we can use a numpy ndarray. For this, we should know the number of images we want to load, as well as the image dimensions:
path = "datasets/images/images/"
imagelist = listdir(path)
img_name = x_train_names[0]
img = Image.open(path + str(img_name))   # open one image to read off height and width
X_train = np.ndarray((len(imagelist), img.height, img.width, 3))
for i in range(len(x_train_names)):
    img_name = x_train_names[i]
    img = Image.open(path + str(img_name))
    X_train[i,:,:,:] = np.asarray(img)
The shape attribute of your X_train tensor will then give you:
print(X_train.shape)
> (len(x_train_names), img.height, img.width, 3)
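With X_train built as a single 4-D array like this, the fit call from the question can take the array directly; the example below is hypothetical and assumes the images are resized to match the (64, 64, 3) input_shape declared in the model (or that input_shape is changed to the real image size):

# hypothetical call once X_train is one array and its shape matches the model input
classifier.fit(x=X_train, y=y_train, batch_size=32, epochs=25)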
EDIT: To load the images in batches, you could do something like the following:
#### Build and compile your classifier up here ####

num_batches = 5
len_batch = np.floor(len(x_train_names)/num_batches).astype(int)
X_train = np.ndarray((len_batch, img.height, img.width, 3))

for batch_idx in range(num_batches):
    idx_start = batch_idx*len_batch
    idx_end = (batch_idx+1)*len_batch
    x_train_names_batch = x_train_names[idx_start:idx_end]   # image names for this batch
    y_train_batch = y_train[idx_start:idx_end]               # matching box targets
    for i in range(len(x_train_names_batch)):
        img_name = x_train_names_batch[i]
        img = Image.open(path + str(img_name))
        X_train[i,:,:,:] = np.asarray(img)
    classifier.fit(x=X_train, y=y_train_batch, batch_size=len(x_train_names_batch), epochs=2)
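As a side note (not part of the original answer), a keras.utils.Sequence generator is another common way to stream image batches from disk instead of calling fit repeatedly; a minimal sketch, assuming the same path, image names, and y_train layout as above:

import numpy as np
from PIL import Image
from keras.utils import Sequence

class BoxSequence(Sequence):
    """Yields (images, boxes) batches loaded lazily from disk."""
    def __init__(self, names, boxes, path, batch_size=32):
        self.names, self.boxes = names, boxes
        self.path, self.batch_size = path, batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.names) / self.batch_size))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        batch_names = self.names[lo:hi]
        # stack this batch's images into one (batch, height, width, channels) array
        imgs = np.stack([np.asarray(Image.open(self.path + str(n)))
                         for n in batch_names])
        return imgs, self.boxes[lo:hi]

# classifier.fit_generator(BoxSequence(x_train_names, y_train, path), epochs=25)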