I am new to Python and I am currently using Python 2. I have several source files, each of which contains a huge amount of data (approximately 19 million lines). They look like the following:
apple \t N \t apple
n&apos
garden \t N \t garden
b\ta\md
great \t Adj \t great
nice \t Adj \t (unknown)
etc
My task is to search column 3 of each file for certain target words, and every time a target word is found in the corpus, the 10 words before and after it have to be added to a multi-dimensional dictionary.
EDIT: lines containing '&', '\' or the string '(unknown)' should be excluded.
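To make the intended result concrete, here is a small illustration of how the nested dictionary ends up being keyed (the structure is inferred from the code further down; the words and counts are made up): target lemma, then target part of speech, then context lemma, then context part of speech, then a co-occurrence count.

targets = {
    'apple': {                    # target lemma (column 3)
        'N': {                    # target part of speech (column 2)
            'garden': {'N': 3},   # context lemma -> context POS -> count
            'great': {'Adj': 1},
        }
    }
}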
I tried to solve this with readlines() and enumerate(), as shown in the code below. The code does what it is supposed to do, but it is obviously not efficient enough for the amount of data in the source files.
I know that readlines() or read() should not be used for huge data sets, since they load the whole file into memory. Nevertheless, when reading the file line by line, I did not manage to use the enumerate approach to get the 10 words before and after a target word. I also cannot use mmap, since I do not have permission to use it on those files.
So I am thinking that readlines() with some size limit would be the most efficient solution. However, wouldn't that introduce errors, since each time the end of the size limit is reached, the 10 words following a target word near the boundary would not be captured because the code just breaks off there? (A sketch of a sliding-window alternative is shown after the code below.)
import os
import gzip
import re
import csv

def get_target_to_dict(file):
    targets_dict = {}
    with open(file) as f:
        for line in f:
            targets_dict[line.strip()] = {}
    return targets_dict

targets_dict = get_target_to_dict('targets_uniq.txt')

# browse directory and process each file
# find the target words to include the 10 words before and after to the dictionary
# exclude lines starting with <,-,; to just have raw text
def get_co_occurence(path_file_dir, targets, results):
    lines = []
    for file in os.listdir(path_file_dir):
        if file.startswith('corpus'):
            path_file = os.path.join(path_file_dir, file)
            with gzip.open(path_file) as corpusfile:
                # PROBLEMATIC CODE HERE
                # lines = corpusfile.readlines()
                for line in corpusfile:
                    if re.match('[A-Z]|[a-z]', line):
                        if '(unknown)' in line:
                            continue
                        elif '\\' in line:
                            continue
                        elif '&' in line:
                            continue
                        lines.append(line)
                for i, line in enumerate(lines):
                    line = line.strip()
                    if re.match('[A-Z][a-z]', line):
                        parts = line.split('\t')
                        lemma = parts[2]
                        if lemma in targets:
                            pos = parts[1]
                            if pos not in targets[lemma]:
                                targets[lemma][pos] = {}
                            counts = targets[lemma][pos]
                            context = []
                            # look at 10 previous lines
                            for j in range(max(0, i - 10), i):
                                context.append(lines[j])
                            # look at the next 10 lines
                            for j in range(i + 1, min(i + 11, len(lines))):
                                context.append(lines[j])
                            # END OF PROBLEMATIC CODE
                            for context_line in context:
                                context_line = context_line.strip()
                                parts_context = context_line.split('\t')
                                context_lemma = parts_context[2]
                                if context_lemma not in counts:
                                    counts[context_lemma] = {}
                                context_pos = parts_context[1]
                                if context_pos not in counts[context_lemma]:
                                    counts[context_lemma][context_pos] = 0
                                counts[context_lemma][context_pos] += 1

    csvwriter = csv.writer(results, delimiter='\t')
    for k, v in targets.iteritems():
        for k2, v2 in v.iteritems():
            for k3, v3 in v2.iteritems():
                for k4, v4 in v3.iteritems():
                    csvwriter.writerow([str(k), str(k2), str(k3), str(k4), str(v4)])
                    # print(str(k) + "\t" + str(k2) + "\t" + str(k3) + "\t" + str(k4) + "\t" + str(v4))

results = open('results_corpus.csv', 'wb')
word_occurrence = get_co_occurence(path_file_dir, targets_dict, results)
For completeness, I copied the whole code section, since it is all part of one function that creates a multi-dimensional dictionary out of all the extracted information and then writes it to a CSV file.
I would really appreciate any hints or suggestions on how to make this code more efficient.
EDIT: I corrected the code so that it takes into account exactly 10 words before and after the target word.
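As an illustration of the sliding-window idea mentioned above, here is a minimal sketch that keeps a fixed-size window of lines with collections.deque instead of readlines(), so that at most 21 lines are ever held in memory. The file name, the window size and the column handling are placeholders, not code from the original post.

from collections import deque

window = deque(maxlen=21)                # 10 lines before + current line + 10 lines after
with open('corpus_sample.txt') as f:     # hypothetical file name
    for raw in f:
        window.append(raw.rstrip('\n'))
        if len(window) == 21:
            current = window[10]         # this line now has 10 lines of context on each side
            before = list(window)[:10]
            after = list(window)[11:]
            # ... check whether column 3 of `current` is a target word and count the context ...

Because the window slides one line at a time, there is no chunk boundary at which the 10 following words could be lost.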
Answer 0 (score: 3)
My idea was to create a buffer to store the 10 lines before and another buffer to store the 10 lines after. As the file is being read, each line is pushed into the before-buffer, and the buffer is popped if its size exceeds 10.
For the after-buffer, I first clone another iterator from the file iterator. Then I run both iterators in parallel inside the loop, with the cloned iterator running 10 iterations ahead to obtain the 10 lines after.
This avoids using readlines() and loading the whole file into memory. Hope it works for you in the actual case.
EDIT: only fill the before/after buffers if column 3 does not contain any of '&', '\', '(unknown)'. I also replaced split('\t') with a plain split(), so it takes care of any whitespace or tabs.
import os
import re
import itertools

def get_co_occurence(path_file_dir, targets, results):
    excluded_words = ['&', '\\', '(unknown)']  # modify excluded words here
    for file in os.listdir(path_file_dir):
        if file.startswith('testset'):
            path_file = os.path.join(path_file_dir, file)
            with open(path_file) as corpusfile:
                # CHANGED CODE HERE
                before_buf = []  # buffer to store before 10 lines
                after_buf = []   # buffer to store after 10 lines
                corpusfile, corpusfile_clone = itertools.tee(corpusfile)  # clone file iterator to access next 10 lines
                for line in corpusfile:
                    line = line.strip()
                    if re.match('[A-Z]|[a-z]', line):
                        parts = line.split()
                        lemma = parts[2]

                        # before-buffer handling: only append the line if it contains none of the excluded words
                        if not any(w in line for w in excluded_words):
                            before_buf.append(line)  # append to before buffer
                        if len(before_buf) > 11:
                            before_buf.pop(0)  # keep the buffer at size 10
                        # after-buffer handling
                        while len(after_buf) <= 10:
                            try:
                                after = next(corpusfile_clone)  # advance the cloned iterator by one line
                                after_lemma = ''
                                after_tmp = after.split()
                                if re.match('[A-Z]|[a-z]', after) and len(after_tmp) > 2:
                                    after_lemma = after_tmp[2]
                            except StopIteration:
                                break  # the cloned iterator exhausts first because it runs 10 iterations ahead
                            if after_lemma and not any(w in after for w in excluded_words):
                                after_buf.append(after)  # append to buffer
                                # print 'after', after, ' - ', after_lemma
                        if after_buf and line in after_buf[0]:
                            after_buf.pop(0)  # pop one off, ready for the next line

                        if lemma in targets:
                            pos = parts[1]
                            if pos not in targets[lemma]:
                                targets[lemma][pos] = {}
                            counts = targets[lemma][pos]
                            # context = []
                            # look at the 10 previous lines
                            context = before_buf[:-1]  # leave out the current line
                            # look at the next 10 lines
                            context.extend(after_buf)

                            # END OF CHANGED CODE
                            # CONTINUE YOUR STUFF HERE WITH CONTEXT
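As a side note on the itertools.tee call above: tee returns two independent iterators over the same underlying data, so one copy can be advanced ahead of the other without affecting it. A tiny standalone illustration with made-up data:

import itertools

lines = iter(['l1', 'l2', 'l3', 'l4', 'l5'])
main_it, lookahead_it = itertools.tee(lines)

print(next(lookahead_it))   # l1, the clone runs ahead
print(next(lookahead_it))   # l2
print(next(main_it))        # l1, the main iterator is unaffected
print(next(lookahead_it))   # l3

Keep in mind that tee buffers the items the faster iterator has consumed until the slower one catches up, so the look-ahead should stay small (here it is at most 10 lines).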
Answer 1 (score: 1)
A functional alternative, written in Python 3.5. I simplified your example and take only 5 words on each side. There are other simplifications regarding the filtering of junk values, but they only require minor modifications. I will use the package fn from PyPI (it provides the F object used for composition below) to make this functional code read more naturally.

from typing import List, Tuple
from itertools import groupby, filterfalse
from fn import F

First we need to extract the column:

def getcol3(line: str) -> str:
    # take the third tab-separated field (the lemma)
    return line.split('\t')[2]

Then we need to split the lines into blocks separated by a predicate:

TARGET_WORDS = {"target1", "target2"}

# this is our predicate
def istarget(word: str) -> bool:
    return word in TARGET_WORDS

Let's filter the junk and write a function to take the last and the first 5 words:

def isjunk(word: str) -> bool:
    return word == "(unknown)"

def first_and_last(words: List[str]) -> (List[str], List[str]):
    first = words[:5]
    last = words[-5:]
    return first, last

Now, let's take the groups:

words = (F() >> (map, str.strip) >> (filter, bool) >> (map, getcol3) >> (filterfalse, isjunk))(lines)
groups = groupby(words, istarget)

Now, process the groups:

def is_target_group(group: Tuple[str, List[str]]) -> bool:
    return istarget(group[0])

def unpack_word_group(group: Tuple[str, List[str]]) -> List[str]:
    return [*group[1]]

def unpack_target_group(group: Tuple[str, List[str]]) -> List[str]:
    return [group[0]]

def process_group(group: Tuple[str, List[str]]):
    return (unpack_target_group(group) if is_target_group(group)
            else first_and_last(unpack_word_group(group)))

And the final steps are:

words = list(map(process_group, groups))

P.S.

This is my test case:

from io import StringIO

buffer = """
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\ttarget1
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\ttarget2
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\ttarget1
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
"""

# this simulates an opened file
lines = StringIO(buffer)

Given this file, you will get this output:

[(['word', 'word', 'word', 'word', 'word'],
  ['word', 'word', 'word', 'word', 'word']),
 (['target1'], ['target1']),
 (['word', 'word', 'word', 'word'], ['word', 'word', 'word', 'word']),
 (['target2'], ['target2']),
 (['word', 'word', 'word', 'word', 'word'],
  ['word', 'word', 'word', 'word', 'word']),
 (['target1'], ['target1']),
 (['word', 'word', 'word', 'word'], ['word', 'word', 'word', 'word'])]

From here you can drop the first 5 words and the last 5 words.
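A note on the F object used above, in case it is unfamiliar: in the fn package, F() >> g >> h builds a function that applies g first and then h, and a tuple such as (map, str.strip) is partially applied, so the one-line pipeline is simply a left-to-right chain of map/filter steps. A rough plain-Python equivalent, written as my own sketch under those assumptions and reusing getcol3, isjunk, filterfalse and groupby from above, would be:

def extract_words(lines):
    stripped = map(str.strip, lines)      # strip whitespace from every line
    nonempty = filter(bool, stripped)     # drop empty lines
    lemmas = map(getcol3, nonempty)       # keep only the third column
    return filterfalse(isjunk, lemmas)    # drop '(unknown)' entries

words = extract_words(lines)
groups = groupby(words, istarget)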