A word-count reducer for spam/ham classification

Asked: 2019-02-03 04:11:03

Tags: python hadoop-streaming word-count reducers

I am writing a reducer (Python 3) for Hadoop Streaming and it does not work correctly. For example, given the following input:

data = 'dog\t1\t1\ndog\t1\t1\ndog\t0\t1\ndog\t0\t1\ncat\t0\t1\ncat\t0\t1\ncat\t1\t1\n'

import re
import sys

# initialize trackers
current_word = None

spam_count, ham_count = 0, 0

# read from standard input
# (the data string above stands in for stdin / a file when testing locally)


for line in data.splitlines():
#for line in sys.stdin:
# parse input
    word, is_spam, count = line.split('\t')
    count = int(count)

    if word == current_word:

        if is_spam == '1':
            spam_count += count
        else:
            ham_count += count
    else:
        if current_word:
        # word to emit...
            if spam_count:
                print("%s\t%s\t%s" % (current_word, '1', spam_count))
            print("%s\t%s\t%s" % (current_word, '0', ham_count))

        if is_spam == '1':
            current_word, spam_count = word, count
        else:
            current_word, ham_count = word, count



if current_word == word:
    if is_spam == '1':
        print(f'{current_word}\t{is_spam}\t{spam_count}')
    else:
        print(f'{current_word}\t{is_spam}\t{spam_count}')

I get:

#dog    1   2
#dog    0   2
#cat    1   3

The two "spam" dog counts and the two "ham" dog counts are fine. The cat counts are wrong; the output should be:

#dog    1   2
#dog    0   2
#cat    0   2
#cat    1   1
I cannot find the error here.

1 answer:

Answer 0 (score: 0):

The reason: when a new word starts you should also reset ham_count to zero, not only update spam_count, and vice versa.

Rewrite this:

if is_spam == '1':
    current_word, spam_count = word, count
else:
    current_word, ham_count = word, count

as:

if is_spam == '1':
    current_word, spam_count = word, count
    ham_count = 0
else:
    current_word, ham_count = word, count
    spam_count = 0

Even so, the output will still not match yours exactly:
1) you always print spam_count first, but in your expected output "cat ham" is emitted before "cat spam";
2) the final output block emits only the spam count or only the ham count, depending on the current value of is_spam, but I assume you are planning to emit both, right?

The output: 
dog 1   2
dog 0   2
cat 1   1

The "cat spam" count is correct, but there is no "cat ham" count. I think you should at least print both, so rewrite this code:

if current_word == word:
    if is_spam == '1':
        print(f'{current_word}\t{is_spam}\t{spam_count}')
    else:
        print(f'{current_word}\t{is_spam}\t{spam_count}')

as:

print(f'{current_word}\t{1}\t{spam_count}')
print(f'{current_word}\t{0}\t{ham_count}')

and the complete output will be:

dog 1   2
dog 0   2
cat 1   1
cat 0   2
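Putting both fixes together, here is a minimal self-contained sketch of the corrected reducer, run against the sample data from the question (under Hadoop the loop would read sys.stdin instead):

```python
# Corrected reducer: resets BOTH counters on a word change and
# flushes both totals for the last word. Sample data from the question.
data = ('dog\t1\t1\ndog\t1\t1\ndog\t0\t1\ndog\t0\t1\n'
        'cat\t0\t1\ncat\t0\t1\ncat\t1\t1\n')

current_word = None
spam_count, ham_count = 0, 0
results = []

for line in data.splitlines():
    word, is_spam, count = line.split('\t')
    count = int(count)
    if word == current_word:
        if is_spam == '1':
            spam_count += count
        else:
            ham_count += count
    else:
        if current_word:
            # emit both totals for the previous word
            results.append((current_word, '1', spam_count))
            results.append((current_word, '0', ham_count))
        # reset both counters when a new word starts
        current_word = word
        spam_count = count if is_spam == '1' else 0
        ham_count = count if is_spam == '0' else 0

# flush the final word
if current_word:
    results.append((current_word, '1', spam_count))
    results.append((current_word, '0', ham_count))

for word, tag, total in results:
    print(f'{word}\t{tag}\t{total}')
```

This produces the full output shown above (spam before ham for each word).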

Itertools
Also, the itertools module is a great fit for this kind of task:

import itertools    

splitted_lines = map(lambda x: x.split('\t'), data.splitlines())
grouped = itertools.groupby(splitted_lines, lambda x: x[0])

grouped is an itertools.groupby object, which is a generator. Note that it is lazy and yields its values only once, which is why I only show the output here: building the list consumes the generator.

[(gr_name, list(gr)) for gr_name, gr in grouped] 
Out:
[('dog',
  [['dog', '1', '1'],
   ['dog', '1', '1'],
   ['dog', '0', '1'],
   ['dog', '0', '1']]),
 ('cat', [['cat', '0', '1'], ['cat', '0', '1'], ['cat', '1', '1']])]
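The laziness matters in practice: once the groupby generator has been consumed, a second iteration yields nothing. A small sketch with a made-up three-line sample:

```python
import itertools

# Demo: a groupby generator can only be consumed once.
data = 'dog\t1\t1\ndog\t0\t1\ncat\t0\t1\n'
splitted_lines = map(lambda x: x.split('\t'), data.splitlines())
grouped = itertools.groupby(splitted_lines, lambda x: x[0])

first_pass = [(name, list(group)) for name, group in grouped]
second_pass = [(name, list(group)) for name, group in grouped]

print(len(first_pass))   # 2 groups: dog and cat
print(len(second_pass))  # 0 -- the generator is exhausted
```

So materialize the groups (or process them immediately) the first time through.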

Now each group can be regrouped again, this time by the is_spam field:

import itertools    

def sum_group(group):
    """
    >>> sum_group([['dog', '1', '1'], ['dog', '1', '1']])
    2
    """
    return sum(int(i[-1]) for i in group)

splitted_lines = map(lambda x: x.split('\t'), data.splitlines())
grouped = itertools.groupby(splitted_lines, lambda x: x[0])

[(name, [(tag_name, sum_group(sub_group))
         for tag_name, sub_group 
         in itertools.groupby(group, lambda x: x[1])])
 for name, group in grouped]
Out:
[('dog', [('1', 2), ('0', 2)]), ('cat', [('0', 2), ('1', 1)])]

A complete example with itertools:

import itertools 


def emit_group(name, tag_name, group):
    tag_sum = sum([int(i[-1]) for i in group])
    print(f"{name}\t{tag_name}\t{tag_sum}")  # emit here
    return (name, tag_name, tag_sum)  # return the same data


splitted_lines = map(lambda x: x.split('\t'), data.splitlines())
grouped = itertools.groupby(splitted_lines, lambda x: x[0])


emitted = [[emit_group(name, tag_name, sub_group) 
            for tag_name, sub_group 
            in itertools.groupby(group, lambda x: x[1])]
            for name, group in grouped]
Out:
dog 1   2
dog 0   2
cat 0   2
cat 1   1

emitted contains a list of tuples with the same data. Because the approach is lazy, it works perfectly with streams; if you are interested, here is a good itertools tutorial.
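In the actual Hadoop Streaming job, the same itertools pipeline would read sorted tab-separated word / is_spam / count lines from sys.stdin rather than a string. A sketch under that assumption (the sample list stands in for stdin here):

```python
import itertools

def reducer(lines):
    # lines are assumed sorted by word, as Hadoop Streaming guarantees
    rows = (line.rstrip('\n').split('\t') for line in lines)
    for word, group in itertools.groupby(rows, key=lambda r: r[0]):
        for tag, sub_group in itertools.groupby(group, key=lambda r: r[1]):
            yield word, tag, sum(int(r[-1]) for r in sub_group)

# in the real job this would be: for word, tag, total in reducer(sys.stdin): ...
sample = ['dog\t1\t1\n', 'dog\t1\t1\n', 'dog\t0\t1\n', 'dog\t0\t1\n',
          'cat\t0\t1\n', 'cat\t0\t1\n', 'cat\t1\t1\n']
emitted = list(reducer(sample))
for word, tag, total in emitted:
    print(f'{word}\t{tag}\t{total}')
```

Because it is a generator pipeline, memory use stays constant no matter how many lines stream through.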