I'm using SciPy's KDTree implementation to read a large 300 MB file. Is there a way to save the data structure to disk and load it again, or do I have to read the raw points from the file and rebuild the data structure every time I start the program? I'm building the KDTree like this:
def buildKDTree(self):
    self.kdpoints = numpy.fromfile("All", sep=' ')
    self.kdpoints.shape = self.kdpoints.size / self.NDIM, self.NDIM
    self.kdtree = KDTree(self.kdpoints, leafsize=self.kdpoints.shape[0] + 1)
    print "Preparing KDTree... Ready!"
Any suggestions?
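(Side note: even without persisting the tree itself, the expensive text parsing can be cached in NumPy's binary format, so later startups load the points almost instantly. A minimal sketch; the `.npy` cache path is a hypothetical choice, not from the original code:)

```python
import os
import numpy as np

def load_points(text_path, ndim):
    """Parse the whitespace-separated text file once; reuse a binary cache afterwards."""
    cache = text_path + ".npy"
    if os.path.exists(cache):
        return np.load(cache)                      # fast binary load on later runs
    pts = np.fromfile(text_path, sep=' ').reshape(-1, ndim)
    np.save(cache, pts)                            # write cache for next startup
    return pts
```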
Answer 0 (score: 10)
KDTree uses nested classes to define its node types (innernode, leafnode). Pickle only works with module-level class definitions, so the nested classes trip it up:
import cPickle

class Foo(object):
    class Bar(object):
        pass

obj = Foo.Bar()
print obj.__class__
cPickle.dumps(obj)

Output:

<class '__main__.Bar'>
cPickle.PicklingError: Can't pickle <class '__main__.Bar'>: attribute lookup __main__.Bar failed
However, there is a (hacky) workaround: patch the class definitions into module scope of scipy.spatial.kdtree so the pickler can find them. As long as every piece of code that reads or writes pickled KDTree objects applies these patches, the hack should work fine:
import cPickle
import numpy
from scipy.spatial import kdtree
# patch module-level attribute to enable pickle to work
kdtree.node = kdtree.KDTree.node
kdtree.leafnode = kdtree.KDTree.leafnode
kdtree.innernode = kdtree.KDTree.innernode
x, y = numpy.mgrid[0:5, 2:8]
t1 = kdtree.KDTree(zip(x.ravel(), y.ravel()))
r1 = t1.query([3.4, 4.1])
raw = cPickle.dumps(t1)
# read in the pickled tree
t2 = cPickle.loads(raw)
r2 = t2.query([3.4, 4.1])
print t1.tree.__class__
print repr(raw)[:70]
print t1.data[r1[1]], t2.data[r2[1]]
Output:
<class 'scipy.spatial.kdtree.innernode'>
"ccopy_reg\n_reconstructor\np1\n(cscipy.spatial.kdtree\nKDTree\np2\nc_
[3 4] [3 4]
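A follow-up worth knowing: the C implementation, cKDTree, supports pickling out of the box in reasonably recent SciPy releases, so no module patching is needed there. A minimal sketch in Python 3 syntax (where cPickle was folded into pickle); the random test points are illustrative only:

```python
import pickle
import numpy as np
from scipy.spatial import cKDTree

# Build a tree from some sample points and round-trip it through pickle.
rng = np.random.default_rng(0)
pts = rng.random((100, 2))
t1 = cKDTree(pts)
raw = pickle.dumps(t1)      # works directly, no patching required
t2 = pickle.loads(raw)

# The restored tree answers queries identically to the original.
d1, i1 = t1.query([0.5, 0.5])
d2, i2 = t2.query([0.5, 0.5])
```

Writing `raw` to a file (or using `pickle.dump` with an open file object) then gives exactly the save-to-disk behaviour the question asks about.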