My memory is too small for my data, so I am trying to pack it in RAM.
The following code does work, but I have to remember the dtype of each array, which is inconvenient because there are many of them (many different data types).
Are there any better suggestions? A shorter runtime would also be appreciated.
import numpy as np
import zlib

A = np.arange(10000)
dtype = A.dtype
B = zlib.compress(A, 1)  # numpy arrays expose the buffer protocol, so zlib can read them directly
C = np.frombuffer(zlib.decompress(B), dtype)  # np.fromstring is deprecated; use np.frombuffer
np.testing.assert_allclose(A, C)
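One way to avoid remembering each dtype by hand is to store the dtype (and shape) alongside the compressed bytes. A minimal sketch; the `pack`/`unpack` helper names are mine, not from any library:

```python
import zlib
import numpy as np

def pack(arr, level=1):
    """Compress an array and record its dtype/shape so nothing has to be remembered."""
    return {
        'data': zlib.compress(np.ascontiguousarray(arr), level),
        'dtype': arr.dtype,
        'shape': arr.shape,
    }

def unpack(packed):
    """Inverse of pack(): decompress, then restore dtype and shape."""
    flat = np.frombuffer(zlib.decompress(packed['data']), packed['dtype'])
    return flat.reshape(packed['shape'])

A = np.arange(10000).reshape(100, 100)
restored = unpack(pack(A))
np.testing.assert_array_equal(A, restored)
```

Since the metadata travels with the payload, the same pair of helpers works for any dtype without extra bookkeeping.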
Answer 0 (score: 8)
You can try numpy's built-in array compression, np.savez_compressed(). It saves you the trouble of tracking dtypes, though it will probably give performance similar to your approach. Here is an example:
import io
import numpy as np
A = np.arange(10000)
compressed_array = io.BytesIO() # np.savez_compressed() requires a file-like object to write to
np.savez_compressed(compressed_array, A)
# load it back
compressed_array.seek(0) # seek back to the beginning of the file-like object
decompressed_array = np.load(compressed_array)['arr_0']
>>> print(len(compressed_array.getvalue())) # compressed array size
15364
>>> assert A.dtype == decompressed_array.dtype
>>> assert all(A == decompressed_array)
Note that any size reduction depends on the distribution of your data. Random data is inherently incompressible, so you may not see much benefit from trying to compress it.
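The point above is easy to demonstrate: a regular sequence like np.arange shrinks a lot under np.savez_compressed, while high-entropy random integers barely shrink at all. A small sketch (the `compressed_size` helper is mine):

```python
import io
import numpy as np

def compressed_size(arr):
    """Size in bytes of arr after np.savez_compressed into an in-memory buffer."""
    buf = io.BytesIO()
    np.savez_compressed(buf, arr)
    return len(buf.getvalue())

rng = np.random.default_rng(0)
structured = np.arange(10000)                 # highly regular, compresses well
random_ints = rng.integers(0, 2**62, 10000)   # high-entropy, barely compresses

print(compressed_size(structured), 'vs raw', structured.nbytes)
print(compressed_size(random_ints), 'vs raw', random_ints.nbytes)
```

On typical runs the regular array ends up a fraction of its raw 80000 bytes, while the random array stays close to (or even slightly above, due to container overhead) its raw size.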
Answer 1 (score: 2)
I want to post my final code, in case it helps anyone. It can compress in RAM using different packing algorithms or, if there is not enough RAM, store the data in an HDF5 file on disk. Any speedups or suggestions for better code are appreciated.
import zlib, bz2
import numpy as np
import h5py
import os

class packdataclass():
    def __init__(self, packalg='nocompress', Filename=None):
        self.packalg = packalg
        if self.packalg == 'hdf5_on_drive':
            self.Filename = Filename
            self.Running_Number = 0
            if os.path.isfile(Filename):
                os.remove(Filename)
            with h5py.File(self.Filename, 'w') as hdf5_file:
                hdf5_file.create_dataset("TMP_File", data="0")

    def clean_up(self):
        if self.packalg == 'hdf5_on_drive':
            if os.path.isfile(self.Filename):
                os.remove(self.Filename)

    def compress(self, array):
        Returndict = {'compression': self.packalg, 'type': array.dtype}
        if array.dtype == np.bool_:  # np.bool is deprecated; compare against np.bool_
            Returndict['len_bool_array'] = len(array)
            array = np.packbits(array.astype(np.uint8))  # packs 8 bools into one uint8
            Returndict['type'] = 'bitfield'
        if self.packalg == 'nocompress':
            Returndict['data'] = array
        elif self.packalg == 'zlib':
            Returndict['data'] = zlib.compress(array, 1)
        elif self.packalg == 'bz2':
            Returndict['data'] = bz2.compress(array, 1)
        elif self.packalg == 'hdf5_on_drive':
            with h5py.File(self.Filename, 'r+') as hdf5_file:
                datatype = array.dtype
                Returndict['data'] = str(self.Running_Number)
                hdf5_file.create_dataset(Returndict['data'], data=array, dtype=datatype,
                                         compression='gzip', compression_opts=4)
            self.Running_Number += 1
        else:
            raise ValueError("Algorithm for packing {} is unknown".format(self.packalg))
        return Returndict

    def decompress(self, data):
        if data['compression'] == 'nocompress':
            data_decompressed = data['data']
        else:
            if data['compression'] == 'zlib':
                data_decompressed = zlib.decompress(data['data'])
            elif data['compression'] == 'bz2':
                data_decompressed = bz2.decompress(data['data'])
            elif data['compression'] == 'hdf5_on_drive':
                with h5py.File(self.Filename, "r") as Readfile:
                    data_decompressed = np.array(Readfile[data['data']])
            else:
                raise ValueError("Unknown compression {}".format(data['compression']))
            if type(data['type']) != np.dtype and data['type'] == 'bitfield':
                data_decompressed = np.frombuffer(data_decompressed, np.uint8)
            else:
                data_decompressed = np.frombuffer(data_decompressed, data['type'])
        if type(data['type']) != np.dtype and data['type'] == 'bitfield':
            return np.unpackbits(data_decompressed).astype(bool)[:data['len_bool_array']]
        else:
            return data_decompressed
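The bitfield trick used for boolean arrays above can be illustrated in isolation: np.packbits stores eight booleans per byte, and np.unpackbits restores them, with the saved original length used to trim the zero-padding in the last byte:

```python
import numpy as np

mask = np.array([True, False, True, True, False, False, True, False, True])

packed = np.packbits(mask.astype(np.uint8))           # 9 bools -> 2 bytes (last byte zero-padded)
restored = np.unpackbits(packed).astype(bool)[:len(mask)]  # trim the pad bits

assert (mask == restored).all()
print(mask.nbytes, '->', packed.nbytes)  # 9 -> 2
```

This is why the class records len_bool_array before packing: without it, the trailing pad bits could not be distinguished from real False values.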
Answer 2 (score: -1)
You could try bcolz; I only found it while googling for answers to a similar question: https://bcolz.readthedocs.io/en/latest/intro.html
It is another layer on top of numpy arrays that can organize the compression for you.