How can I pass/process 100 lines or fewer at a time inside the try:?
receipt_dict = {}
with open("data.txt", "r") as plain_text:  # ** 10000+ lines **
    for line in plain_text:
        hash_value = line.strip()
        receipt_dict[hash_value] = 1
try:
    bitcoind.sendmany("", receipt_dict)  # ** here must loop 100 at a time **
Answer 0 (score: 1)
Process it as a list of dictionaries, keeping track of the size of each one:
receipt_dicts = []
current_dict = {}
with open("data.txt", "r") as plain_text:  # ** 10000+ lines **
    for line in plain_text:
        if len(current_dict) == 100:
            # current chunk is full: store it and start a new one
            receipt_dicts.append(current_dict)
            current_dict = {}
        current_dict[line.strip()] = 1
# store the final, possibly partial chunk
receipt_dicts.append(current_dict)
You can then iterate over this list and process one dictionary at a time.
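A minimal usage sketch, assuming bitcoind is the same RPC client object as in the question; the sendmany call and the broad exception handling are illustrative only:

for receipt_dict in receipt_dicts:
    try:
        # send this batch of at most 100 address -> amount entries
        bitcoind.sendmany("", receipt_dict)
    except Exception as err:  # hypothetical: replace with the RPC client's real exception type
        print("batch failed:", err)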
Answer 1 (score: 1)
Use a generator. Here, load_data_chunks accumulates data in receipt_dict until its size reaches chunk_size, then yields it back to the main loop below.
def load_data_chunks(fname, chunk_size):
    receipt_dict = {}
    with open(fname, "r") as plain_text:
        for line in plain_text:
            hash_value = line.strip()
            receipt_dict[hash_value] = 1
            if len(receipt_dict) >= chunk_size:
                # chunk is full: hand it to the caller, then start a new one
                yield receipt_dict
                receipt_dict = {}
    if receipt_dict:
        # yield any remaining partial chunk
        yield receipt_dict
for chunk in load_data_chunks("data.txt", 100):
    try:
        ...
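A minimal sketch of how that main loop might be completed, again assuming the bitcoind RPC client from the question; the exception handling is illustrative, not part of the original answer:

for chunk in load_data_chunks("data.txt", 100):
    try:
        # each chunk is a dict with at most 100 entries
        bitcoind.sendmany("", chunk)
    except Exception as err:  # hypothetical: substitute the RPC client's real exception type
        print("sendmany failed for this chunk:", err)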