I am looking for a way to speed up loading a file:
The data contains about 1 million lines, tab-separated with "\t" and utf-8 encoded, and parsing the whole file with the code below takes about 9 seconds. However, I would like it to take barely one second!
import codecs
import sys

def load(filename):
    features = []
    with codecs.open(filename, 'rb', 'utf-8') as f:
        previous = ""
        for n, s in enumerate(f):
            splitted = tuple(s.rstrip().split("\t"))
            if len(splitted) != 2:
                sys.exit("wrong format!")
            if previous >= splitted:
                sys.exit("unordered feature")
            previous = splitted
            features.append(splitted)
    return features
I wonder whether some binary data format could speed this up, or whether I could benefit from NumPy or some other library for faster loading.
Maybe you can also advise me on another speed bottleneck?
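For the binary-format idea, a minimal sketch of a parse-once-then-cache approach using cPickle (the cache filename here is purely illustrative, and this assumes the full list of tuples fits in memory):

import cPickle as pickle

def save_cache(features, cache_filename):
    # Dump the already-parsed list of (str, str) tuples with a binary protocol.
    with open(cache_filename, 'wb') as f:
        pickle.dump(features, f, pickle.HIGHEST_PROTOCOL)

def load_cache(cache_filename):
    # Reloading the pickle skips the per-line rstrip/split work entirely.
    with open(cache_filename, 'rb') as f:
        return pickle.load(f)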
EDIT: So I tried some of your ideas, thanks! BTW, I really do need the tuples (string, string) in one huge list... Here are the results; I gained 50% of the time :) Now I am going to look into NumPy binary data, since I noticed that another huge file loads really fast...
import codecs
import datetime

def load0(filename):
    with codecs.open(filename, 'rb', 'utf-8') as f:
        return f.readlines()

def load1(filename):
    with codecs.open(filename, 'rb', 'utf-8') as f:
        return [tuple(x.rstrip().split("\t")) for x in f.readlines()]

def load3(filename):
    features = []
    with codecs.open(filename, 'rb', 'utf-8') as f:
        for n, s in enumerate(f):
            splitted = tuple(s.rstrip().split("\t"))
            features.append(splitted)
    return features

def load4(filename):
    with codecs.open(filename, 'rb', 'utf-8') as f:
        for s in f:
            yield tuple(s.rstrip().split("\t"))
a = datetime.datetime.now()
r0 = load0(myfile)
b = datetime.datetime.now()
print "f.readlines(): %s" % (b-a)
a = datetime.datetime.now()
r1 = load1(myfile)
b = datetime.datetime.now()
print """[tuple(x.rstrip().split("\\t")) for x in f.readlines()]: %s""" % (b-a)
a = datetime.datetime.now()
r3 = load3(myfile)
b = datetime.datetime.now()
print """load3: %s""" % (b-a)
if r1 == r3: print "OK: speeded and similars!"
a = datetime.datetime.now()
r4 = [x for x in load4(myfile)]
b = datetime.datetime.now()
print """load4: %s""" % (b-a)
if r4 == r3: print "OK: speeded and similars!"
Results:
f.readlines(): 0:00:00.208000
[tuple(x.rstrip().split("\t")) for x in f.readlines()]: 0:00:02.310000
load3: 0:00:07.883000
OK: speeded and similars!
load4: 0:00:07.943000
OK: speeded and similars!
One thing I find quite strange is that the time can almost double between two consecutive runs (but not every time):
>>> ================================ RESTART ================================
>>>
f.readlines(): 0:00:00.220000
[tuple(x.rstrip().split("\t")) for x in f.readlines()]: 0:00:02.479000
load3: 0:00:08.288000
OK: speeded and similars!
>>> ================================ RESTART ================================
>>>
f.readlines(): 0:00:00.279000
[tuple(x.rstrip().split("\t")) for x in f.readlines()]: 0:00:04.983000
load3: 0:00:10.404000
OK: speeded and similars!
LATEST EDIT: I tried modifying this to use numpy.save and numpy.load... and this is strange to me... Starting from the "normal" file with my 1022860 strings and 10 KB, after doing numpy.save(numpy.array(load1(myfile))) I ended up at 895 MB! Then reloading it with numpy.load() I get these kinds of times on consecutive runs:
>>> ================================ RESTART ================================
loading: 0:00:11.422000 done.
>>> ================================ RESTART ================================
loading: 0:00:00.759000 done.
Could numpy be doing something with memory to avoid reloading it in the future?
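One hedged guess: the second-run speed-up is probably the operating system's file cache rather than anything numpy itself keeps around. numpy.load can also memory-map the file explicitly instead of reading it all up front; a sketch (the .npy filename is a placeholder for whatever numpy.save produced):

import numpy

# mmap_mode='r' maps the file read-only; pages are pulled in on demand
# rather than loading the whole array into memory at once.
arr = numpy.load('features.npy', mmap_mode='r')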
Answer 0 (score: 2)
Try this version; since you mentioned that the checks are not important, I have removed them.
import codecs

def load(filename):
    with codecs.open(filename, 'rb', 'utf-8') as f:
        for s in f:
            yield tuple(s.rstrip().split("\t"))

results = [x for x in load('somebigfile.txt')]
Answer 1 (score: 1)
Check how many seconds it takes to simply read the lines of the file, e.g.

import codecs

def load(filename):
    with codecs.open(filename, 'rb', 'utf-8') as f:
        return f.readlines()

If that is significantly less than 9 seconds, then the parsing rather than the I/O is your bottleneck; add the processing back piece by piece and see if there is any speedup to be had.
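A throwaway harness for that check might look like this (the filename is a placeholder):

import time
import codecs

t0 = time.time()
with codecs.open('somebigfile.txt', 'rb', 'utf-8') as f:  # placeholder filename
    lines = f.readlines()  # raw read only, no parsing
print "raw read: %.2f s" % (time.time() - t0)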
Answer 2 (score: 1)
Once you have checked how long simply iterating over the file takes, as bpgergo suggests, you could look at the following (a sketch combining them follows the list):

- Initialize your list up front with features = [None] * (10 ** 6) instead of growing it with append().
- Converting the result of split() to a tuple seems unnecessary.
- You do not benefit from enumerate here; just use for line in f: instead of for n, s in enumerate(f):
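A minimal sketch combining these suggestions (untested; the 10 ** 6 size is taken from the question's "about 1 million lines"):

import codecs

def load(filename):
    # Pre-allocate instead of growing the list with append();
    # assumes at most 10 ** 6 lines in the file.
    features = [None] * (10 ** 6)
    i = 0
    with codecs.open(filename, 'rb', 'utf-8') as f:
        for line in f:                               # plain iteration, no enumerate
            features[i] = line.rstrip().split("\t")  # keep the list; skip tuple()
            i += 1
    return features[:i]                              # trim unused slots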