I am trying to run an example from the book Mastering Apache Spark 2.x.
scala> val df = sc.parallelize(Array(1,2,3)).toDF
df: org.apache.spark.sql.DataFrame = [value: int]
I am new to the Spark world, but I suppose I should save the DataFrame to HDFS:
scala> df.write.json("hdfs://localhost:9000/tmp/account.json")
java.net.ConnectException: Call From miki/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I checked with dfsadmin:
hadoop dfsadmin -safemode enter
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.
safemode: FileSystem file:/// is not an HDFS file system
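The `safemode: FileSystem file:/// is not an HDFS file system` message suggests the Hadoop client is falling back to the local filesystem, which usually means `fs.defaultFS` is not set. As a hedged sketch (the host and port are taken from the URL in the question; adjust to your setup), `core-site.xml` in the Hadoop configuration directory would normally contain:

```xml
<configuration>
  <!-- Default filesystem URI; must match the NameNode address
       used by clients such as Spark (hdfs://localhost:9000 here). -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

With this in place, `hdfs dfsadmin` commands operate on HDFS instead of `file:///`.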
jps output:
miki@miki:~$ jps
13798 Jps
10906 SparkSubmit
How can I fix this?
Answer (score: 1)
Based on the jps output, you are not running the Hadoop daemons (namenode, datanode, resourcemanager) required to read from and write to HDFS. Make sure start-dfs and start-yarn have been run on the machine so that HDFS is up and functional.
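The steps above can be sketched as the following commands, assuming a local single-node Hadoop installation with its `sbin` scripts on the `PATH` (script names are the standard Hadoop ones; adjust paths to your install):

```shell
# Start the HDFS daemons (NameNode, DataNode, SecondaryNameNode):
start-dfs.sh

# Start YARN (ResourceManager, NodeManager):
start-yarn.sh

# Verify the daemons are running; jps should now list NameNode,
# DataNode, SecondaryNameNode, ResourceManager, and NodeManager
# in addition to SparkSubmit:
jps

# Optionally confirm HDFS is reachable on the configured port:
hdfs dfsadmin -report
```

Once `jps` shows the NameNode and DataNode processes, the `df.write.json("hdfs://localhost:9000/tmp/account.json")` call from the question should no longer fail with Connection refused.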