Apache Beam: No Runner was specified and the DirectRunner was not found on the classpath

Date: 2018-05-21 20:58:06

Tags: java apache-beam

I am building a Gradle Java project with the Apache Beam code shown below and running it in Eclipse Oxygen.

package com.xxxx.beam;

import java.io.IOException;

import org.apache.beam.runners.spark.SparkContextOptions;
import org.apache.beam.runners.spark.SparkPipelineResult;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineRunner;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.KV;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.FileIO.ReadableFile;

public class ApacheBeamTestProject {

    public void modelExecution() {
        SparkContextOptions options = (SparkContextOptions) PipelineOptionsFactory.create();
        options.setSparkMaster("xxxxxxxxx");

        JavaSparkContext sc = options.getProvidedSparkContext();

        JavaLinearRegressionWithSGDExample.runJavaLinearRegressionWithSGDExample(sc);

        Pipeline p = Pipeline.create(options);

        p.apply(FileIO.match().filepattern("hdfs://path/to/*.gz"))
                // withCompression can be omitted - by default compression is detected from the filename.
                .apply(FileIO.readMatches())
                .apply(MapElements
                        // uses imports from TypeDescriptors
                        .via(new SimpleFunction<ReadableFile, KV<String, String>>() {

                            private static final long serialVersionUID = -5715607038612883677L;

                            @SuppressWarnings("unused")
                            public KV<String, String> createKV(ReadableFile f) {
                                String temp = null;
                                try {
                                    temp = f.readFullyAsUTF8String();
                                } catch (IOException e) {

                                }
                                return KV.of(f.getMetadata().resourceId().toString(), temp);
                            }
                        }))
                .apply(FileIO.write());

        SparkPipelineResult result = (SparkPipelineResult) p.run();

        result.getState();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Test log");

        PipelineOptions options = PipelineOptionsFactory.create();

        Pipeline p = Pipeline.create(options);

        p.apply(FileIO.match().filepattern("hdfs://path/to/*.gz"))
                // withCompression can be omitted - by default compression is detected from the filename.
                .apply(FileIO.readMatches())
                .apply(MapElements
                        // uses imports from TypeDescriptors
                        .via(new SimpleFunction<ReadableFile, KV<String, String>>() {

                            private static final long serialVersionUID = -5715607038612883677L;

                            @SuppressWarnings("unused")
                            public KV<String, String> createKV(ReadableFile f) {
                                String temp = null;
                                try {
                                    temp = f.readFullyAsUTF8String();
                                } catch (IOException e) {

                                }
                                return KV.of(f.getMetadata().resourceId().toString(), temp);
                            }
                        }))
                .apply(FileIO.write());

        p.run();
    }
}

When I run this project in Eclipse, I see the following error.

Test log
Exception in thread "main" java.lang.IllegalArgumentException: No Runner was specified and the DirectRunner was not found on the classpath.
Specify a runner by either:
    Explicitly specifying a runner by providing the 'runner' property
    Adding the DirectRunner to the classpath
    Calling 'PipelineOptions.setRunner(PipelineRunner)' directly
    at org.apache.beam.sdk.options.PipelineOptions$DirectRunner.create(PipelineOptions.java:291)
    at org.apache.beam.sdk.options.PipelineOptions$DirectRunner.create(PipelineOptions.java:281)
    at org.apache.beam.sdk.options.ProxyInvocationHandler.returnDefaultHelper(ProxyInvocationHandler.java:591)
    at org.apache.beam.sdk.options.ProxyInvocationHandler.getDefault(ProxyInvocationHandler.java:532)
    at org.apache.beam.sdk.options.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:155)
    at org.apache.beam.sdk.options.PipelineOptionsValidator.validate(PipelineOptionsValidator.java:95)
    at org.apache.beam.sdk.options.PipelineOptionsValidator.validate(PipelineOptionsValidator.java:49)
    at org.apache.beam.sdk.PipelineRunner.fromOptions(PipelineRunner.java:44)
    at org.apache.beam.sdk.Pipeline.create(Pipeline.java:150)

This project does not contain a pom.xml file; all dependencies are set up through Gradle. I don't know how to resolve this error. Can anyone advise?

1 Answer:

Answer 0 (score: 2):

It looks like you are trying to use the DirectRunner while it is not on your application's classpath. You can provide it by adding the beam-runners-direct-java dependency to your application:

https://mvnrepository.com/artifact/org.apache.beam/beam-runners-direct-java
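
For illustration, the error message itself also lists the option of calling PipelineOptions.setRunner directly. A minimal sketch of that approach, assuming beam-runners-direct-java (at the same version as your Beam SDK) is already on the classpath; the skeleton class and main method here are hypothetical, not the poster's code:

import org.apache.beam.runners.direct.DirectRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class DirectRunnerExample {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        // Pick the DirectRunner explicitly instead of relying on classpath auto-detection.
        options.setRunner(DirectRunner.class);

        Pipeline p = Pipeline.create(options);
        // ... attach the FileIO transforms from the question here ...
        p.run().waitUntilFinish();
    }
}

If you only add the dependency and change nothing else, Beam's default runner resolution should also find the DirectRunner without any code change, as the error message suggests.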

Edit (answered in the comments): You are trying to run this code on Spark, but you have not specified that in the PipelineOptions. By default Beam tries to run the pipeline on the DirectRunner, which I believe is why you get this error. Calling options.setRunner(SparkRunner.class); before creating the pipeline sets the correct runner and resolves the problem.
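
To make that concrete, here is a minimal sketch (not the original poster's code) of selecting the SparkRunner before building the pipeline. It assumes the beam-runners-spark dependency and a compatible Spark installation are on the classpath, and the master URL below is only a placeholder:

import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class SparkRunnerExample {
    public static void main(String[] args) {
        // Use .as(...) so the options proxy actually implements the Spark-specific interface;
        // a plain cast of PipelineOptionsFactory.create() to a Spark options interface, as in the
        // question's modelExecution method, would fail at runtime with a ClassCastException.
        SparkPipelineOptions options =
                PipelineOptionsFactory.fromArgs(args).as(SparkPipelineOptions.class);
        options.setRunner(SparkRunner.class); // the fix suggested in this answer
        options.setSparkMaster("local[*]");   // placeholder - point this at your real Spark master

        Pipeline p = Pipeline.create(options);
        // ... attach the FileIO transforms from the question here ...
        p.run().waitUntilFinish();
    }
}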