I'm building a movie recommender with Hadoop/MapReduce. For now I'm implementing the MapReduce steps in plain Python only.
So what I'm basically doing is running each mapper and reducer separately, and piping the mapper's console output into the reducer.
The problem I'm running into is that Python writes values to the terminal as strings, so any numbers get printed as strings, which makes the rest of the process harder, since converting them back adds more load on the server.
How can I solve this? I'd like to do it in pure Python, without third-party libraries.
import sys

def mapper():
    '''
    From Mapper1 we need only UserID, (MovieID, rating)
    as output.
    '''
    # First mapper: read input lines from stdin
    for line in sys.stdin:
        # Strip whitespace and split on the delimiter ','
        data = line.strip().split(',')
        if len(data) == 4:
            # Format the output string explicitly: printing a
            # tuple directly makes Python render it with
            # parentheses, quotes and spaces
            userid, movieid, rating, timestamp = data
            print "{0},({1},{2})".format(userid, movieid, rating)
And here is the reducer with its print statements:
def reducer():
    oldKey = None
    rating_arr = []
    for line in sys.stdin:
        # Each line arriving here is: user,(movie,rating)
        # We need to group the tuples for unique users,
        # so we append the tuples to an array per user.
        # Split only at the first occurrence of ',' so the
        # (movie,rating) part stays in one piece
        data = line.strip().split(',', 1)
        # Check for exactly 2 data values: key and value
        if len(data) != 2:
            continue
        x, y = data
        if oldKey and oldKey != x:
            # Key changed: emit the previous user's ratings
            print "{0},{1}".format(oldKey, rating_arr)
            rating_arr = []
        oldKey = x
        rating_arr.append(y)
    if oldKey is not None:
        # Emit the final user's ratings
        print "{0},{1}".format(oldKey, rating_arr)
The input is:
671,(4973,4.5)
671,(4993,5.0)
670,(4995,4.0)
The output is:
671,['(4973,4.5)', '(4993,5.0)']
670,['(4995,4.0)']
I need the tuples as they are, as actual tuples, not strings.
Answer 0 (score: 1)
data is a string, so after you split it and assign a piece to y, y is still a string.
If you want the tuple's original values back as numbers, you need to parse them.
ast.literal_eval can help.
For example,
In [1]: line = """671,(4973,4.5)"""
In [2]: data = line.strip().split(',',1)
In [3]: data
Out[3]: ['671', '(4973,4.5)']
In [4]: x , y = data
In [5]: type(y)
Out[5]: str
In [6]: import ast
In [7]: y = ast.literal_eval(y)
In [8]: y
Out[8]: (4973, 4.5)
In [9]: type(y)
Out[9]: tuple
In [10]: type(y[0])
Out[10]: int
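Applied to your reducer, a minimal sketch might look like this (only the parsing line differs from your original; everything else is unchanged):

    import ast
    import sys

    def reducer():
        oldKey = None
        rating_arr = []
        for line in sys.stdin:
            data = line.strip().split(',', 1)
            if len(data) != 2:
                continue
            x, y = data
            # Parse "(4973,4.5)" back into a real (int, float) tuple
            y = ast.literal_eval(y)
            if oldKey and oldKey != x:
                print "{0},{1}".format(oldKey, rating_arr)
                rating_arr = []
            oldKey = x
            rating_arr.append(y)
        if oldKey is not None:
            print "{0},{1}".format(oldKey, rating_arr)

With that change the output becomes 671,[(4973, 4.5), (4993, 5.0)]: real tuples rather than quoted strings.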
Now, if you're willing to switch to PySpark, you get much better control over variable/object types, instead of the all-strings situation you have with Hadoop Streaming.
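For illustration, a rough PySpark equivalent of your mapper plus the grouping step might look like this (a sketch only; the app name and file path are placeholders):

    from pyspark import SparkContext

    sc = SparkContext(appName="MovieRatings")

    # Each record keeps its types: userid stays a string key,
    # (movieid, rating) becomes a real (int, float) tuple.
    pairs = (sc.textFile("ratings.csv")          # placeholder path
               .map(lambda line: line.split(','))
               .filter(lambda fields: len(fields) == 4)
               .map(lambda f: (f[0], (int(f[1]), float(f[2])))))

    # groupByKey collects all (movieid, rating) tuples per user
    for user, ratings in pairs.groupByKey().collect():
        print user, list(ratings)

Since the values never pass through a text pipe, they stay as numbers the whole way through.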