Question:
I'm starting to learn Hadoop, and I need to save a large number of files into it using Python. I can't seem to figure out what I'm doing wrong. Can anyone help me with this?
Below is my code. I think HDFS_PATH is correct, since I didn't change it in the settings when installing. pythonfile.txt is on my desktop (and so is the Python code, which I run from the command line).
Code:
import hadoopy
import os

hdfs_path = 'hdfs://localhost:9000/python'

def main():
    hadoopy.writetb(hdfs_path, [('pythonfile.txt', open('pythonfile.txt').read())])

main()
Output: when I run the code above, this is all I get for /python itself:
iMac-van-Brian:desktop Brian$ $HADOOP_HOME/bin/hadoop dfs -ls /python
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/10/28 11:30:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-rw-r--r-- 1 Brian supergroup 236 2014-10-28 11:30 /python
Answer 0 (score: 2):
This is a very typical task for the subprocess module. The solution looks like this:

from subprocess import PIPE, Popen

put = Popen(["hadoop", "fs", "-put", "<path/to/file>", "<path/to/hdfs/file>"], stdin=PIPE, bufsize=-1)
put.communicate()
Full example:
Suppose you are on a server with an authenticated connection to hdfs (e.g. you have already run kinit with your .keytab). You have just created a csv from a pandas.DataFrame and want to put it into hdfs. You can then upload the file to hdfs as follows:
import os
import pandas as pd
from subprocess import PIPE, Popen
# define path to saved file
file_name = "saved_file.csv"
# create a pandas.DataFrame
sales = {'account': ['Jones LLC', 'Alpha Co', 'Blue Inc'], 'Jan': [150, 200, 50]}
df = pd.DataFrame.from_dict(sales)
# save your pandas.DataFrame to csv (this could be anything, not necessarily a pandas.DataFrame)
df.to_csv(file_name)
# create path to your username on hdfs
hdfs_path = os.path.join(os.sep, 'user', '<your-user-name>', file_name)
# put csv into hdfs
put = Popen(["hadoop", "fs", "-put", file_name, hdfs_path], stdin=PIPE, bufsize=-1)
put.communicate()
The csv file will then live at /user/<your-user-name>/saved_file.csv.
Note: if you create this file from a Python script invoked inside Hadoop, the intermediate csv file may be stored on some arbitrary node. Since the file is (presumably) no longer needed, best practice is to remove it so as not to pollute the nodes every time the script is called. You can do this simply by adding os.remove(file_name) as the last line of the script above.
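The same upload can be wrapped so that a failed put raises instead of passing silently. A minimal sketch using subprocess.run (the hadoop binary is assumed to be on PATH; the hadoop_cmd parameter exists only so the wrapper can be exercised without a cluster):

```python
import subprocess

def hdfs_put(local_path, hdfs_path, hadoop_cmd="hadoop"):
    """Upload local_path to hdfs_path via `hadoop fs -put`, raising on failure.

    hadoop_cmd is assumed to be on PATH; it is a parameter only so the
    wrapper can be tested without a running cluster.
    """
    result = subprocess.run(
        [hadoop_cmd, "fs", "-put", local_path, hdfs_path],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    if result.returncode != 0:
        raise RuntimeError(
            "hadoop fs -put failed: " + result.stderr.decode(errors="replace")
        )
    return result.returncode
```

Calling hdfs_put(file_name, hdfs_path) would then replace the bare Popen/communicate pair in the script above.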
Answer 1 (score: 0):
I have a feeling you're writing to a file called '/python', whereas you intended it to be the directory the file is stored in.
What does
hdfs dfs -cat /python
show you?
If it shows the file contents, all you need to do is edit your hdfs_path to include the file name (you should first delete /python with -rm). Otherwise, use pydoop (pip install pydoop) and do the following:
import pydoop.hdfs as hdfs
from_path = '/tmp/infile.txt'
to_path = 'hdfs://localhost:9000/python/outfile.txt'
hdfs.put(from_path, to_path)
Answer 2 (score: 0):
I found this answer:
import subprocess

def run_cmd(args_list):
    """
    Run Linux commands.
    """
    print('Running system command: {0}'.format(' '.join(args_list)))
    proc = subprocess.Popen(args_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    s_output, s_err = proc.communicate()
    s_return = proc.returncode
    return s_return, s_output, s_err
# Run Hadoop ls command in Python
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-ls', 'hdfs_file_path'])
lines = out.decode().split('\n')  # out is bytes under Python 3
# Run Hadoop get command in Python
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-get', 'hdfs_file_path', 'local_path'])

# Run Hadoop put command in Python
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-put', 'local_file', 'hdfs_file_path'])

# Run Hadoop copyFromLocal command in Python
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-copyFromLocal', 'local_file', 'hdfs_file_path'])

# Run Hadoop copyToLocal command in Python
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-copyToLocal', 'hdfs_file_path', 'local_file'])
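Since run_cmd hands back the raw exit code and bytes output, callers should check ret before parsing out. A small, cluster-free sketch of that pattern, with echo standing in for an hdfs invocation (run_cmd is repeated here so the snippet runs on its own):

```python
import subprocess

def run_cmd(args_list):
    """Run a command, returning (returncode, stdout_bytes, stderr_bytes)."""
    proc = subprocess.Popen(args_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    s_output, s_err = proc.communicate()
    return proc.returncode, s_output, s_err

# 'echo' stands in for something like ['hdfs', 'dfs', '-ls', '/python']
ret, out, err = run_cmd(['echo', 'Found 1 items'])
if ret == 0:
    # decode the bytes output before splitting into lines
    lines = out.decode().strip().split('\n')
```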
# Run Hadoop remove file command in Python
# (shell equivalent: hdfs dfs -rm -skipTrash /path/to/file/you/want/to/remove/permanently)
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-rm', 'hdfs_file_path'])
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-rm', '-skipTrash', 'hdfs_file_path'])
# rm -r: HDFS command to remove a directory and all of its content from HDFS.
# Usage: hdfs dfs -rm -r <path>
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-rm', '-r', 'hdfs_file_path'])
(ret, out, err) = run_cmd(['hdfs', 'dfs', '-rm', '-r', '-skipTrash', 'hdfs_file_path'])
# Check whether a file exists in HDFS
# Usage: hadoop fs -test -[defsz] URI
# Options:
# -d: if the path is a directory, return 0.
# -e: if the path exists, return 0.
# -f: if the path is a file, return 0.
# -s: if the path is not empty, return 0.
# -z: if the file is zero length, return 0.
# Example:
# hadoop fs -test -e filename
hdfs_file_path = '/tmpo'
cmd = ['hdfs', 'dfs', '-test', '-e', hdfs_file_path]
ret, out, err = run_cmd(cmd)
print(ret, out, err)
if ret:
    print('file does not exist')
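The -test -e pattern above folds naturally into a boolean helper. A sketch assuming the hdfs CLI is on PATH (hdfs_cmd is a parameter only so the logic can be checked without a cluster):

```python
import subprocess

def hdfs_exists(path, hdfs_cmd='hdfs'):
    """Return True if `path` exists in HDFS (`hdfs dfs -test -e` exits with 0)."""
    proc = subprocess.run([hdfs_cmd, 'dfs', '-test', '-e', path],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return proc.returncode == 0
```

This avoids sprinkling exit-code checks through the calling code: `if hdfs_exists('/tmpo'): ...`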