I want to find the top websites by page visits for users aged 18 to 25.
I have two files: one contains user names and ages, and the other contains user names and website names.
Example -
users.txt contains -> john,22    pages.txt contains -> john,google.com
I wrote the following in Python, and it works as expected outside of Hadoop.
import os
os.chdir("/home/pythonlab")

# Top sites visited by users aged 18 to 25

# Read the users file
lines = open("users.txt")
users = [line.split(",") for line in lines]    # user name, age (e.g. john,22)
userlist = [(u[0], int(u[1])) for u in users]  # split into (name, age) pairs

# Read the page-visit file
pages = open("pages.txt")
page = [p.split(",") for p in pages]           # user name, website visited (e.g. john,google.com)
pagelist = [(p[0], p[1]) for p in page]

# Join users with page visits and filter for the 18-25 age group
usrpage = [[p[1], u[0]] for u in userlist for p in pagelist
           if u[0] == p[0] and 18 <= u[1] <= 25]

for z in usrpage:
    print(z[0].strip('\r\n') + ",1")  # print website name, 1
Example output:
yahoo.com,1
google.com,1
Now I want to solve this with Hadoop Streaming.
My question is: how do I handle these two named files (users.txt, pages.txt) in my mapper, since we normally pass only an input directory to Hadoop Streaming?
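For reference, one common pattern for this situation is a map-side join: ship the small users.txt to every task with Hadoop Streaming's -files generic option (which copies it into each task's working directory), stream pages.txt as the job input, and do the filtering in the mapper. Below is a minimal sketch of such a mapper; the file name mapper.py and the overall layout are assumptions for illustration, not part of the original job.

#!/usr/bin/env python
# mapper.py - a minimal sketch of a map-side join for Hadoop Streaming.
# Assumes users.txt was shipped with the -files option, so it sits in
# the task's working directory, while pages.txt arrives on stdin.
import sys

# Build a name -> age lookup from the side file.
ages = {}
with open("users.txt") as f:
    for line in f:
        name, age = line.strip().split(",")
        ages[name] = int(age)

# Stream the page visits and emit "site\t1" for the 18-25 age group;
# a standard word-count-style reducer can then sum the 1s per site.
for line in sys.stdin:
    name, site = line.strip().split(",")
    if 18 <= ages.get(name, -1) <= 25:
        print(site + "\t1")

The job could then be launched along the lines of hadoop jar hadoop-streaming*.jar -files users.txt,mapper.py -input pages -output out -mapper mapper.py -reducer reducer.py, where reducer.py is an ordinary summing reducer; the exact jar path and options depend on your Hadoop version.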