I'm trying to point my Spark Streaming job at my Azure Blob Storage, but with the code below I get this error:
Code:
SparkConf sparkConf = new SparkConf().setAppName("JavaNetworkWordCount");
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(1));
ssc.textFileStream("wasb[s]://mycontainer@rtest.blob.core.windows.net/");
ssc.start();
ssc.awaitTermination();
I'm not sure what the path in the WASB URL should be.
The documentation says I should supply a path, but my container doesn't have any subdirectories; the images are stored directly in the container.
Error:
java.lang.IllegalArgumentException: requirement failed: No output operations registered, so nothing to execute
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.streaming.DStreamGraph.validate(DStreamGraph.scala:163)
at org.apache.spark.streaming.StreamingContext.validate(StreamingContext.scala:513)
at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:573)
at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:572)
at org.apache.spark.streaming.api.java.JavaStreamingContext.start(JavaStreamingContext.scala:554)
at org.bnr.process_panos.JavaNetworkWordCount.main(JavaNetworkWordCount.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Answer 0 (score: 0)
You can use either a relative or an absolute path. For example, the hadoop-mapreduce-examples.jar file that ships with an HDInsight cluster can be referenced in any of the following ways:
Example 1: wasb://mycontainer@myaccount.blob.core.windows.net/example/jars/hadoop-mapreduce-examples.jar
Example 2: wasb:///example/jars/hadoop-mapreduce-examples.jar
Example 3: /example/jars/hadoop-mapreduce-examples.jar
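To make the URL structure concrete, here is a small sketch that assembles the absolute and relative WASB forms from their parts. The container and account names are the ones from the question; the jar path is purely illustrative:

```java
public class WasbPaths {
    public static void main(String[] args) {
        String container = "mycontainer";  // container name from the question
        String account = "rtest";          // storage account name from the question
        String blobPath = "example/jars/hadoop-mapreduce-examples.jar";

        // Absolute form: scheme + container + storage-account FQDN + blob path.
        String absolute = String.format("wasb://%s@%s.blob.core.windows.net/%s",
                container, account, blobPath);

        // Relative form: resolves against the cluster's default container.
        String relative = "wasb:///" + blobPath;

        System.out.println(absolute);
        System.out.println(relative);
    }
}
```

If your files sit at the root of the container (no subdirectories), the path component is simply empty, i.e. `wasb://mycontainer@rtest.blob.core.windows.net/`.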
The error message you are seeing occurs when no output operator is registered on any DStream: without one, no computation is ever triggered. You need to call one of the following methods on the stream:
print()
foreachRDD(func)
saveAsObjectFiles(prefix, [suffix])
saveAsTextFiles(prefix, [suffix])
saveAsHadoopFiles(prefix, [suffix])
For more information, see http://spark.apache.org/docs/latest/streaming-programming-guide.html#output-operations.
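Applying this to the code in the question, a minimal sketch of the fix (using the container and account names from the question, and `print()` as the output operation) could look like this. Note the URI scheme is written as `wasb://` (or `wasbs://` over TLS); the `[s]` in the documentation just marks the "s" as optional and is not meant to be typed literally:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class JavaNetworkWordCount {
    public static void main(String[] args) throws InterruptedException {
        SparkConf sparkConf = new SparkConf().setAppName("JavaNetworkWordCount");
        JavaStreamingContext ssc =
                new JavaStreamingContext(sparkConf, Durations.seconds(1));

        // Keep a reference to the DStream so an output operation can be attached.
        JavaDStream<String> lines =
                ssc.textFileStream("wasb://mycontainer@rtest.blob.core.windows.net/");

        // Registering an output operation is what satisfies the
        // "No output operations registered" validation at ssc.start().
        lines.print();

        ssc.start();
        ssc.awaitTermination();
    }
}
```

In the original code the return value of textFileStream was discarded and no output operation was ever attached, which is exactly the condition DStreamGraph.validate rejects.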