Two files. One holds broken data, the other holds the fixes. Broken:
ID 0
T5 rat cake
~EOR~
ID 1
T1 wrong segg
T2 wrong nacob
T4 rat tart
~EOR~
ID 3
T5 rat pudding
~EOR~
ID 4
T1 wrong sausag
T2 wrong mspa
T3 strawberry tart
~EOR~
ID 6
T5 with some rat in it
~EOR~
Fixed:
ID 1
T1 eggs
T2 bacon
~EOR~
ID 4
T1 sausage
T2 spam
T4 bereft of loif
~EOR~
EOR means End Of Record. Note that the broken file has more records than the fix file, and that the fix file carries tags to be repaired (T1, T2, and so on) as well as tags to be added. This code does exactly what it is supposed to do:
# foobar.py
import codecs

source = 'foo.dat'
target = 'bar.dat'
result = 'result.dat'

with codecs.open(source, 'r', 'utf-8_sig') as s, \
     codecs.open(target, 'r', 'utf-8_sig') as t, \
     codecs.open(result, 'w', 'utf-8_sig') as u:
    sID = sT1 = sT2 = sT4 = ''
    RecordFound = False
    # get source data, record by record
    for sline in s:
        if sline.startswith('ID '):
            sID = sline
        if sline.startswith('T1 '):
            sT1 = sline
        if sline.startswith('T2 '):
            sT2 = sline
        if sline.startswith('T4 '):
            sT4 = sline
        if sline.startswith('~EOR~'):
            for tline in t:
                # copy target file lines, replacing when necessary
                if tline == sID:
                    RecordFound = True
                if tline.startswith('T1 ') and RecordFound:
                    tline = sT1
                if tline.startswith('T2 ') and RecordFound:
                    tline = sT2
                if tline.startswith('~EOR~') and RecordFound:
                    if sT4:
                        tline = sT4 + tline
                    RecordFound = False
                    u.write(tline)
                    break
                u.write(tline)
    for tline in t:
        u.write(tline)
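For reference, with the sample files above the repaired result.dat works out as follows (written out by hand as a sanity check, not actual program output):
ID 0
T5 rat cake
~EOR~
ID 1
T1 eggs
T2 bacon
T4 rat tart
~EOR~
ID 3
T5 rat pudding
~EOR~
ID 4
T1 sausage
T2 spam
T3 strawberry tart
T4 bereft of loif
~EOR~
ID 6
T5 with some rat in it
~EOR~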
I am writing to a new file because I don't want to mess up the other two. The first (outer) for loop finishes on the last record of the fix file. At that point there are still records from the target file waiting to be written out; that is what the final for clause is for.
What nags me is that this last for clause implicitly picks up where the inner for loop last broke off. It feels as if it ought to say something like 'for the remaining tlines in t'. On the other hand, I don't see how to do this with fewer (or not many more) lines of code, using dicts or what have you. Should I be worried?
Please comment.
Answer 0 (score: 2)
I wouldn't worry. In your example, t is a file handle and you are iterating over it. File handles in Python are their own iterators; they keep state about how far into the file they have read, and they hold on to that position as you iterate over them. Have a look at the Python documentation for file.next() for more information.
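To see what that means in practice, here is a minimal sketch (the file name example.dat is made up) of two loops over the same handle; the second loop resumes right after the line on which the first one broke off:
import sys

with open('example.dat') as fh:
    for line in fh:
        if line.startswith('~EOR~'):
            break                      # stop partway through the file
    # fh has not been rewound, so this loop continues with the next line
    for line in fh:
        sys.stdout.write(line)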
See also this other SO answer, which also talks about iterators: What does the "yield" keyword do in Python?. There is a lot of useful information in there!
Edit: here is another way to combine them, using dictionaries. You might prefer this approach if you need to make other modifications to the records before writing them out:
import sys

def get_records(source_lines):
    records = {}
    current_id = None
    for line in source_lines:
        if line.startswith('~EOR~'):
            continue
        # Split the line up on the first space
        tag, val = [l.rstrip() for l in line.split(' ', 1)]
        if tag == 'ID':
            current_id = val
            records[current_id] = {}
        else:
            records[current_id][tag] = val
    return records

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        broken = get_records(f)
    with open(sys.argv[2]) as f:
        fixed = get_records(f)

    # Merge the broken and fixed records
    repaired = broken
    for id in fixed.keys():
        repaired[id] = dict(broken[id].items() + fixed[id].items())

    with open(sys.argv[3], 'w') as f:
        for id, tags in sorted(repaired.items()):
            f.write('ID {}\n'.format(id))
            for tag, val in sorted(tags.items()):
                f.write('{} {}\n'.format(tag, val))
            f.write('~EOR~\n')
The dict(broken[id].items() + fixed[id].items()) part makes use of this: How to merge two Python dictionaries in a single expression?
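As a small illustration of that idiom, using record ID 1 from the sample files (this relies on Python 2's items() returning lists; values from the second dict win on duplicate keys):
broken_rec = {'T1': 'wrong segg', 'T2': 'wrong nacob', 'T4': 'rat tart'}
fixed_rec = {'T1': 'eggs', 'T2': 'bacon'}
merged = dict(broken_rec.items() + fixed_rec.items())
# merged == {'T1': 'eggs', 'T2': 'bacon', 'T4': 'rat tart'}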
Answer 1 (score: 1)
# building initial storage
content = {}
record = {}
order = []
current = None

with open('broken.file', 'r') as f:
    for line in f:
        items = line.rstrip('\n').split(' ', 1)
        try:
            key, value = items
        except:
            key, = items
            value = None
        if key == 'ID':
            current = value
            order.append(current)
            content[current] = record = {}
        elif key == '~EOR~':
            current = None
            record = {}
        else:
            record[key] = value

# patching
with open('patches.file', 'r') as f:
    for line in f:
        items = line.rstrip('\n').split(' ', 1)
        try:
            key, value = items
        except:
            key, = items
            value = None
        if key == 'ID':
            current = value
            record = content[current]  # updates existing records only!
                                       # if there is no such id -> raises
            # alternatively you may check and add them to the end of list
            # if current in content:
            #     record = content[current]
            # else:
            #     order.append(current)
            #     content[current] = record = {}
        elif key == '~EOR~':
            current = None
            record = {}
        else:
            record[key] = value
# patched!

# write-out
with open('output.file', 'w') as f:
    for current in order:
        f.write('ID ' + current + '\n')
        record = content[current]
        for key in sorted(record.keys()):
            f.write(key + ' ' + (record[key] or '') + '\n')
        f.write('~EOR~\n')
# job's done
Questions?
Answer 2 (score: 0)
For completeness, and just to share my enthusiasm and what I learned, below is the code I am using now. It answers my OP and then some.
It is partly based on akaRem's approach above. A single function fills a dict; it is called twice, once for the file to be repaired and once for the file containing the fixes.
import codecs, collections
from GetInfiles import *

sourcefile, targetfile = GetInfiles('dat')
# GetInfiles reads two input parameters from the command line,
# verifies they exist as files with the right extension,
# and then returns their names. Code not included here.
resultfile = targetfile[:-4] + '_result.dat'

def recordlist(infile):
    record = collections.OrderedDict()
    reclist = []
    with codecs.open(infile, 'r', 'utf-8_sig') as f:
        for line in f:
            try:
                key, value = line.split(' ', 1)
            except:
                key = line
                # so this line must be '~EOR~\n'.
                # All other lines must have the shape 'tag: content\n'
                # so if this errors, there's something wrong with an input file
            if not key.startswith('~EOR~'):
                try:
                    record[key].append(value)
                except KeyError:
                    record[key] = [value]
            else:
                reclist.append(record)
                record = collections.OrderedDict()
    return reclist

# put files into ordered dicts
source = recordlist(sourcefile)
target = recordlist(targetfile)

# patching
for fix in source:
    for record in target:
        if fix['ID'] == record['ID']:
            record.update(fix)

# write-out
with codecs.open(resultfile, 'w', 'utf-8_sig') as f:
    for record in target:
        for tag, field in record.iteritems():
            for occ in field:
                line = u'{} {}'.format(tag, occ)
                f.write(line)
        f.write('~EOR~\n')
The records are now ordered dicts. This wasn't in my OP, but the files need to be cross-checked by people, so preserving the order of the tags makes that easier. (Using OrderedDict is really easy. My first attempt to find this feature had put me off a bit, because its documentation worried me: no examples, intimidating jargon...)
Also, it now supports multiple occurrences of any given tag within a record. That wasn't in my OP either, but I need it. (This format is called 'Adlib tagged'; it comes from cataloguing software.)
What differs from akaRem's approach is the patching, which uses update on the target dict. I find this, as so often with Python, very elegant. The same goes for startswith. Those are two more reasons I couldn't resist sharing it.
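As a small sketch of what that update does here, again using record ID 1 from the sample files (the values are lists because a tag may occur more than once):
import collections

broken_rec = collections.OrderedDict(
    [('ID', [u'1\n']), ('T1', [u'wrong segg\n']),
     ('T2', [u'wrong nacob\n']), ('T4', [u'rat tart\n'])])
fix_rec = collections.OrderedDict(
    [('ID', [u'1\n']), ('T1', [u'eggs\n']), ('T2', [u'bacon\n'])])

broken_rec.update(fix_rec)
# T1 and T2 now hold the fixed values, T4 keeps 'rat tart',
# and the tag order of the broken record is preserved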
I hope it is useful.