Using memoization to speed up JSON conversion?

Date: 2015-10-07 00:17:01

Tags: python json performance optimization

I was asked to convert a list of objects to JSON in a specific category order. I started with something very slow, then optimized it using a map (dict). However, the interviewer told me the job could be done even faster using memoization, and I don't see how.

Here is the code snippet I wrote:

import json
import uuid
import random
import time

categories = ['infantry', 'machine-gun', 'rocket-man'] # this could be a really long list

class Soilder(object):
    def __init__(self, id, name, category):
        self.id = id
        self.name = name
        self.category = category


number_of_soilder = 10000 # this could be really large

soilders = []
for i in range(0, number_of_soilder):
    name = uuid.uuid4().hex.upper()[0:6]  # .get_hex() is Python 2 only; the .hex property is portable
    soilders.append(Soilder(i, name, random.choice(categories)))

start = time.time()
result = []
for c in categories:
    s_list = []
    for s in soilders:
        if s.category == c:
            s_list.append(s)

    result.extend(s_list)

end = time.time()
print(end - start)

two_loop = json.dumps([r.__dict__ for r in result])

start = time.time()
map_of_soilders = {}
for c in categories:
    map_of_soilders[c] = []

for s in soilders:
    cat = s.category
    map_of_soilders[cat].append(s)
end = time.time()

result = []
for c in categories:
    result.extend(map_of_soilders[c])

print(end - start)

map_approach = json.dumps([r.__dict__ for r in result])
print(two_loop == map_approach)

The map approach turns out to be about twice as fast:

0.00654196739197
0.00372314453125
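As an aside, the dict-based grouping above can be written as a single O(n) pass with `collections.defaultdict`, which also removes the loop that pre-seeds empty lists. A self-contained sketch (same setup as the question, not timed against the original):

```python
import collections
import json
import random
import uuid

categories = ['infantry', 'machine-gun', 'rocket-man']

class Soilder(object):
    def __init__(self, id, name, category):
        self.id = id
        self.name = name
        self.category = category

soilders = [Soilder(i, uuid.uuid4().hex.upper()[:6], random.choice(categories))
            for i in range(10000)]

# Single O(n) pass; defaultdict creates each per-category list on first
# use, so there is no need to initialize map_of_soilders up front.
grouped = collections.defaultdict(list)
for s in soilders:
    grouped[s.category].append(s)

# Flatten in the required category order.
result = []
for c in categories:
    result.extend(grouped[c])

output = json.dumps([r.__dict__ for r in result])
```

This does the same work as the map approach but with one fewer loop and no `.get()` lookups.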

How can I make this faster than the map approach using memoization?
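For what it's worth, one common reading of the "memoization" hint is caching a repeated lookup. The sketch below is only my interpretation, not a known answer from the interviewer: it memoizes the category → position lookup with `functools.lru_cache` (so the O(len(categories)) scan in `categories.index` runs once per distinct category, not once per soldier) and then does a single stable sort. `category_rank` is a hypothetical helper name:

```python
import functools
import json
import random
import uuid

categories = ['infantry', 'machine-gun', 'rocket-man']

class Soilder(object):
    def __init__(self, id, name, category):
        self.id = id
        self.name = name
        self.category = category

soilders = [Soilder(i, uuid.uuid4().hex.upper()[:6], random.choice(categories))
            for i in range(10000)]

# Memoized lookup: categories.index() is computed at most once per
# distinct category; every later call is a cache hit.
@functools.lru_cache(maxsize=None)
def category_rank(category):
    return categories.index(category)

# Python's sort is stable, so soldiers keep their original order
# within each category, matching the map approach's output.
result = sorted(soilders, key=lambda s: category_rank(s.category))
memoized_output = json.dumps([r.__dict__ for r in result])
```

Whether this actually beats the dict grouping depends on the data: the sort is O(n log n) versus the map's O(n), so the caching mainly pays off when `categories` is long.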

0 Answers:

No answers yet