I need to know whether there is a faster way to get the LBP and the corresponding histograms of the MNIST dataset. It will be used for handwritten text recognition, with a model I haven't decided on yet.
I have loaded the MNIST dataset and, following the tensorflow tutorial, split it into x, y training and x, y test sets.
Then I used cv2 to invert the images.
From there, I defined a function that uses skimage to get the LBP and the corresponding histogram of an input image.
Finally, I use a classic for loop to iterate over the images, get the histogram of each image, store it in a separate list, and return the new lists along with the unchanged label lists of the training and test sets.
Here is the function that loads the MNIST dataset:
import cv2
import tensorflow as tf

def loadDataset():
    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    # should I invert it or not?
    x_train = cv2.bitwise_not(x_train)
    x_test = cv2.bitwise_not(x_test)
    return (x_train, y_train), (x_test, y_test)
Here is the function that gets the LBP and the corresponding histogram:
import numpy as np
from skimage import feature

def getLocalBinaryPattern(img, points, radius):
    lbp = feature.local_binary_pattern(img, points, radius, method="uniform")
    hist, _ = np.histogram(lbp.ravel(),
                           bins=np.arange(0, points + 3),
                           range=(0, points + 2))
    return lbp, hist
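For reference, this is what the function returns for a single 28x28 digit (a quick illustrative check, not part of my pipeline, assuming the two functions above are defined); with points=8 and the "uniform" method the histogram has points + 2 = 10 bins:
# Illustrative check on one digit (assumes loadDataset and getLocalBinaryPattern above):
(x_train, y_train), (x_test, y_test) = loadDataset()

lbp, hist = getLocalBinaryPattern(x_train[0], 8, 1)
print(lbp.shape)   # (28, 28): one LBP code per pixel
print(hist.shape)  # (10,): points + 2 bins for the "uniform" method
print(hist.sum())  # 784: every pixel falls into exactly one bin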
And finally, the function that iterates over the images:
def formatDataset(dataset):
    (x_train, y_train), (x_test, y_test) = dataset

    x_train_hst = []
    for i in range(len(x_train)):
        _, hst = getLocalBinaryPattern(x_train[i], 8, 1)
        print("Computing LBP for training set: {}/{}".format(i, len(x_train)))
        x_train_hst.append(hst)
    print("Done computing LBP for training set!")

    x_test_hst = []
    for i in range(len(x_test)):
        _, hst = getLocalBinaryPattern(x_test[i], 8, 1)
        print("Computing LBP for test set: {}/{}".format(i, len(x_test)))
        x_test_hst.append(hst)
    print("Done computing LBP for test set!")

    print("Done!")
    return (x_train_hst, y_train), (x_test_hst, y_test)
I knew it would be slow, and indeed it is slow. So I'm looking for ways to speed it up, or to find out whether there is already a version of the dataset that includes this information.
Answer (score: 1)
I don't think there is a straightforward way to speed up the iteration over the images. One might expect that using NumPy's vectorize or apply_along_axis would improve performance, but these solutions are actually slower than a for loop (or a list comprehension).
Different alternatives for iterating over the images:
def compr(imgs):
    hists = [getLocalBinaryPattern(img, 8, 1)[1] for img in imgs]
    return hists

def vect(imgs):
    lbp81riu2 = lambda img: getLocalBinaryPattern(img, 8, 1)[1]
    vec_lbp81riu2 = np.vectorize(lbp81riu2, signature='(m,n)->(k)')
    hists = vec_lbp81riu2(imgs)
    return hists

def app(imgs):
    lbp81riu2 = lambda img: getLocalBinaryPattern(img.reshape(28, 28), 8, 1)[1]
    pixels = np.reshape(imgs, (len(imgs), -1))
    hists = np.apply_along_axis(lbp81riu2, 1, pixels)
    return hists
Results:
In [112]: (x_train, y_train), (x_test, y_test) = loadDataset()
In [113]: %timeit -r 3 compr(x_train)
1 loop, best of 3: 14.2 s per loop
In [114]: %timeit -r 3 vect(x_train)
1 loop, best of 3: 17.1 s per loop
In [115]: %timeit -r 3 app(x_train)
1 loop, best of 3: 14.3 s per loop
In [116]: np.array_equal(compr(x_train), vect(x_train))
Out[116]: True
In [117]: np.array_equal(compr(x_train), app(x_train))
Out[117]: True