I am running a simple MapReduce job in Azure HDInsight. This is the command we are running:
java -jar WordCount201.jar wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa/CustData.csv wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa
It fails with the following error:
java.io.IOException: No FileSystem for scheme: wasb
Answer 0 (score: 0)
For Java, use JDK 1.8 and the POM dependencies below: org.apache.hadoop:hadoop-mapreduce-examples:2.7.3 (scope provided), org.apache.hadoop:hadoop-mapreduce-client-common:2.7.3 (provided), jdk.tools:jdk.tools, and org.apache.hadoop:hadoop-common:2.7.3 (provided).
Answer 1 (score: 0)
WASB is a wrapper over the HDFS file system. I am not sure whether you can use it from a plain Java program. Do you have a reference or link?
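As an illustration of that wrapper behaviour (a command added here for clarity, not part of the original answer, reusing the storage account and container from the question), the same data can be listed through the Hadoop filesystem layer from an HDInsight head node:

# list the input folder through the WASB filesystem driver configured on the cluster
hdfs dfs -ls wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa/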
You could try using the https equivalent of the custData.csv file instead. Below is an example of a Spark job that I was able to submit on an HDInsight cluster using WASB paths:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/learning-spark-1.0.jar \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/ml-latest/ratings.csv \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/ml-latest/movies.csv
And here is an example of passing the same files using the equivalent https URIs:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/learning-spark-1.0.jar \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/ratings.csv \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/movies.csv
Answer 2 (score: 0)
For a Hadoop job, run the jar as the root user. After logging in to HDInsight, run the command sudo su -. Then create a folder, copy the jar into it, and run the jar from there, as sketched below.
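A minimal sketch of those steps; the working folder and the jar's upload location are hypothetical, while the jar name and WASB paths are taken from the question:

sudo su -                                  # switch to the root user on the head node
mkdir -p /root/wordcount && cd /root/wordcount   # hypothetical working folder
cp /home/sshuser/WordCount201.jar .        # assumes the jar was uploaded to the ssh user's home directory
java -jar WordCount201.jar \
  wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa/CustData.csv \
  wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa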