Executing HDFS commands from a Scala script

Date: 2019-06-03 03:50:38

Tags: scala apache-spark hadoop

I am trying to execute an HDFS-specific command from inside a Scala script run by Spark in cluster mode. The command is below:

import scala.sys.process._

val cmd = Seq("hdfs","dfs","-copyToLocal","/tmp/file.dat","/path/to/local")
val result = cmd.!!

The job fails at this stage with the following error:

java.io.FileNotFoundException: /var/run/cloudera-scm-agent/process/2087791-yarn-NODEMANAGER/log4j.properties (Permission denied)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at java.io.FileInputStream.<init>(FileInputStream.java:93)
        at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
        at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
        at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
        at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
        at org.apache.log4j.Logger.getLogger(Logger.java:104)
        at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
        at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)

However, when I run the same command on its own in the Spark shell, it executes fine and the file is copied:

scala> val cmd = Seq("hdfs","dfs","-copyToLocal","/tmp/file_landing_area/file.dat","/tmp/local_file_area")
cmd: Seq[String] = List(hdfs, dfs, -copyToLocal, /tmp/file_landing_area/file.dat, /tmp/local_file_area)

scala> val result = cmd.!!
result: String = ""

I don't understand the "Permission denied" error, especially since it surfaces as a FileNotFoundException. Thoroughly confusing.

Any ideas?

1 Answer:

Answer 0: (score: -1)

Judging from the error, the process is trying to read a log4j.properties file under /var, so I suspect either a configuration problem or that it is not pointing at the correct folder. Shelling out to HDFS commands with a Seq is not good practice: it is handy in the spark-shell, but it is not recommended inside application code. Instead, use the Hadoop FileSystem API to move data to or from HDFS. The sample code below may serve as a reference.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()

val hdfspath = new Path("hdfs:///user/nikhil/test.csv")
val localpath = new Path("file:///home/cloudera/test/")

// Resolve the FileSystem from the HDFS path, then copy the file down
val fs = hdfspath.getFileSystem(conf)
fs.copyToLocalFile(hdfspath, localpath)
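
Keep in mind that copyToLocalFile writes to the local filesystem of whichever machine the code runs on; in cluster mode that is the node hosting the driver or executor, not necessarily the machine you submitted the job from.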

See the link below for more on the Hadoop FileSystem API:

https://hadoop.apache.org/docs/r2.9.0/api/org/apache/hadoop/fs/FileSystem.html#copyFromLocalFile(boolean,%20boolean,%20org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.Path)
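
For the opposite direction (local to HDFS), the same API provides copyFromLocalFile. Here is a minimal sketch using hypothetical paths; the boolean overload shown matches the linked Javadoc (delSrc, overwrite):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()

// Hypothetical example paths -- adjust to your environment
val localsrc = new Path("file:///home/cloudera/test/test.csv")
val hdfsdst = new Path("hdfs:///user/nikhil/")

val fs = hdfsdst.getFileSystem(conf)
// delSrc = false keeps the local file; overwrite = true replaces any existing target
fs.copyFromLocalFile(false, true, localsrc, hdfsdst)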