I have a speed/efficiency related question about Python:

I need to extract multiple fields from a nested JSON file (after being written to .txt files they have ~64k lines, and the current snippet finishes in ~9 minutes); each line can contain floats and strings.

Normally I would just put all the data into numpy and save it with np.savetxt(). Here I simply assemble the lines into strings instead, but that is rather slow. So far I am doing the following:
I have a couple of concerns about this:

- It results in a larger number of separate file.write() calls, which are also very slow (roughly 64k * 8 calls, for the 8 files).
So my question is: What is the most efficient way to write this data to disk, balancing speed vs memory-consumption? Should I increase my DEFAULT_BUFFER_SIZE (currently 8192)? I have checked File I/O in Every Programming Language and python org: IO, but apart from learning that (as far as I understand) file I/O should already be buffered in Python 3.6.x, they did not help much. My default DEFAULT_BUFFER_SIZE is 8192.
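To make it concrete, this is roughly what I have in mind - just a sketch with a made-up output file name, not my actual code: either hand open() a bigger buffer than io.DEFAULT_BUFFER_SIZE, or collect the lines in memory and write them in a single call.

import io

# Option 1: ask open() for a larger user-space buffer than the default 8192 bytes
with open('some_feature.txt', 'w', encoding='utf-8',
          buffering=128 * io.DEFAULT_BUFFER_SIZE) as f:   # ~1 MiB
    for value in ('1.0', 'foo', 'bar'):
        f.write(value + '\n')

# Option 2: collect everything in memory first and hand it over in one call
lines = [value + '\n' for value in ('1.0', 'foo', 'bar')]
with open('some_feature.txt', 'w', encoding='utf-8') as f:
    f.writelines(lines)            # or: f.write(''.join(lines))

Would either of these actually make a noticeable difference here?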
Thanks in advance for your help!

Here is a part of my snippet -
def read_json_line(line=None):
    result = None
    try:
        result = json.loads(line)
    except Exception as e:
        # Find the offending character index:
        idx_to_replace = int(str(e).split(' ')[-1].replace(')', ''))
        # Remove the offending character:
        new_line = list(line)
        new_line[idx_to_replace] = ' '
        new_line = ''.join(new_line)
        return read_json_line(line=new_line)
    return result
def extract_features_and_write(path_to_data, inp_filename, is_train=True):
    # It's currently having 8 lines of file.write(), which is probably making it slow as writing to disk is involving a lot of overheads as well
    features = ['meta_tags__twitter-data1', 'url', 'meta_tags__article-author', 'domain', 'title', 'published__$date',
                'content', 'meta_tags__twitter-description']
    prefix = 'train' if is_train else 'test'
    feature_files = [open(os.path.join(path_to_data, '{}_{}.txt'.format(prefix, feat)), 'w', encoding='utf-8')
                     for feat in features]
    with open(os.path.join(PATH_TO_RAW_DATA, inp_filename),
              encoding='utf-8') as inp_json_file:
        for line in tqdm_notebook(inp_json_file):
            for idx, features in enumerate(features):
                json_data = read_json_line(line)
                content = json_data['meta_tags']["twitter:data1"].replace('\n', ' ').replace('\r', ' ').split()[0]
                feature_files[0].write(content + '\n')
                content = json_data['url'].split('/')[-1].lower()
                feature_files[1].write(content + '\n')
                content = json_data['meta_tags']['article:author'].split('/')[-1].replace('@', '').lower()
                feature_files[2].write(content + '\n')
                content = json_data['domain']
                feature_files[3].write(content + '\n')
                content = json_data['title'].replace('\n', ' ').replace('\r', ' ').lower()
                feature_files[4].write(content + '\n')
                content = json_data['published']['$date']
                feature_files[5].write(content + '\n')
                content = json_data['content'].replace('\n', ' ').replace('\r', ' ')
                content = strip_tags(content).lower()
                content = re.sub(r"[^a-zA-Z0-9]", " ", content)
                feature_files[6].write(content + '\n')
                content = json_data['meta_tags']["twitter:description"].replace('\n', ' ').replace('\r', ' ').lower()
                feature_files[7].write(content + '\n')
Answer 0 (score: 1)

From the comments:
Why do you assume that 8 writes result in 8 physical writes to your hard drive? The file object itself buffers what is being written; if it decides to hand the data to the operating system, the OS may still wait a while before physically writing, and even your hard drives have buffers that can hold the file contents for a bit before they really get written. See How often does python flush to a file?
You should not use exceptions as control flow, and you should not recurse where it is not needed. Each recursion sets up a new call stack for the function call, which costs resources and time, and all of it has to be unwound again afterwards.
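To picture the layering that the first comment describes, here is a minimal illustrative sketch (not taken from the comments; the file name is made up). Normally you rely on the built-in buffering and do none of this per write - closing the file flushes it anyway:

import os

with open('train_title.txt', 'w', encoding='utf-8') as f:
    f.write('some line\n')      # lands in Python's own file-object buffer first
    f.flush()                   # pushes the Python-level buffer down to the OS
    os.fsync(f.fileno())        # asks the OS to push its own buffers to the physical disk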
The best thing to do would be to clean up your data before you feed it to json.load() (a rough sketch of that option follows after the code below) ... the next best thing is to avoid the recursion ... try something along these lines:
def read_json_line(line=None):
    result = None
    while result is None and line:  # empty line is falsy, avoid endless loop
        try:
            result = json.loads(line)
        except Exception as e:
            result = None
            # Find the offending character index:
            idx_to_replace = int(str(e).split(' ')[-1].replace(')', ''))
            # slice away the offending character:
            line = line[:idx_to_replace] + line[idx_to_replace + 1:]
    return result
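The "clean up the data before parsing" option mentioned above could look something like the sketch below. This is not part of the original answer; it assumes the parse failures come from stray unescaped control characters in the raw lines (the kind of thing json's "Invalid control character at ..." error points at), and the helper name is made up:

import json
import re

# Hypothetical pre-cleaning: replace ASCII control characters (other than the
# line's own newline) with spaces before parsing, instead of reacting to parse
# errors one character at a time.
_CONTROL_CHARS = re.compile(r'[\x00-\x09\x0b-\x1f]')

def read_json_line_precleaned(line):
    cleaned = _CONTROL_CHARS.sub(' ', line)
    return json.loads(cleaned)

If the failures are caused by something else (for example broken escape sequences), the regular expression would have to be adapted accordingly.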