Spark 1.4.0: java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.elapsedMillis()J

Asked: 2015-06-24 17:59:53

Tags: java scala apache-spark guava

I'm running Spark 1.4.0 / Hadoop 2.6.0 (used only for HDFS) and the Scala SparkPageRank example (examples/src/main/scala/org/apache/spark/examples/SparkPageRank.scala), and I hit the following error:

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.elapsedMillis()J
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:245)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.RDD$$anonfun$distinct$2.apply(RDD.scala:329)
    at org.apache.spark.rdd.RDD$$anonfun$distinct$2.apply(RDD.scala:329)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.distinct(RDD.scala:328)
    at org.apache.spark.examples.SparkPageRank$.main(SparkPageRank.scala:60)
    at org.apache.spark.examples.SparkPageRank.main(SparkPageRank.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:621)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
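
For reference, the example can be launched roughly like this (a sketch: the input path and iteration count are placeholders, not taken from the original question):

$ ./bin/run-example SparkPageRank hdfs:///user/me/pagerank_data.txt 10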

I'm not very familiar with Java, but this looks like a Guava version conflict.

The following information may be helpful:

$ find ./spark -name "*.jar" | grep guava
./lib_managed/bundles/guava-16.0.1.jar
./lib_managed/bundles/guava-14.0.1.jar
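
With both jars present, a quick way to check which copy of Guava the JVM actually loads is something like the following (a minimal diagnostic sketch, not part of the original question):

import com.google.common.base.Stopwatch;

public class GuavaCheck {
    public static void main(String[] args) {
        // Prints the jar that Stopwatch was loaded from, revealing whether
        // guava-14.0.1.jar or guava-16.0.1.jar wins on the classpath.
        System.out.println(
            Stopwatch.class.getProtectionDomain().getCodeSource().getLocation());
    }
}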

Part of the examples/pom.xml file:

...
 <dependency>
      <groupId>org.apache.cassandra</groupId>
      <artifactId>cassandra-all</artifactId>
      <version>1.2.6</version>
      <exclusions>
        <exclusion>
          <groupId>com.google.guava</groupId>
          <artifactId>guava</artifactId>
        </exclusion>
...

Indeed, the class in question does not appear to contain the offending method:

$ javap -p /mnt/spark/examples/target/streams/\$global/assemblyOption/\$global/streams/assembly/7850cb6d36b2a6589a4d27ce027a65a2da72c9df_5fa98cd1a63c99a44dd8d3b77e4762b066a5d0c5/com/google/common/base/Stopwatch.class

Compiled from "Stopwatch.java"
public final class com.google.common.base.Stopwatch {
  private final com.google.common.base.Ticker ticker;
  private boolean isRunning;
  private long elapsedNanos;
  private long startTick;
  public static com.google.common.base.Stopwatch createUnstarted();
  public static com.google.common.base.Stopwatch createUnstarted(com.google.common.base.Ticker);
  public static com.google.common.base.Stopwatch createStarted();
  public static com.google.common.base.Stopwatch createStarted(com.google.common.base.Ticker);
  public com.google.common.base.Stopwatch();
  public com.google.common.base.Stopwatch(com.google.common.base.Ticker);
  public boolean isRunning();
  public com.google.common.base.Stopwatch start();
  public com.google.common.base.Stopwatch stop();
  public com.google.common.base.Stopwatch reset();
  private long elapsedNanos();
  public long elapsed(java.util.concurrent.TimeUnit);
  public java.lang.String toString();
  private static java.util.concurrent.TimeUnit chooseUnit(long);
  private static java.lang.String abbreviate(java.util.concurrent.TimeUnit);
}

I'd like to understand this problem better and, if possible, learn how to fix it :-)

3 Answers:

Answer 0 (score: 4):

The method elapsedMillis() was removed in Guava 16 (or at least scheduled for removal; either way, your javap listing shows no method by that name).

As far as I remember, in Guava 16 you should use something like elapsed(TimeUnit.MILLISECONDS) instead, or you can take the elapsed nanoseconds and convert manually by dividing by 1,000,000.0.
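
A minimal sketch of the migration (the timing code around the Stopwatch is illustrative, not taken from Hadoop):

import java.util.concurrent.TimeUnit;
import com.google.common.base.Stopwatch;

public class StopwatchMigration {
    public static void main(String[] args) throws InterruptedException {
        // createStarted() is the factory visible in the javap listing above
        Stopwatch sw = Stopwatch.createStarted();
        Thread.sleep(50);
        sw.stop();

        // Guava <= 14: long ms = sw.elapsedMillis();   // gone in Guava 16
        long ms = sw.elapsed(TimeUnit.MILLISECONDS);     // Guava 16 replacement
        double msManual = sw.elapsed(TimeUnit.NANOSECONDS) / 1_000_000.0;

        System.out.println(ms + " ms (" + msManual + " ms via nanoseconds)");
    }
}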

Answer 1 (score: 0):

Try upgrading Hadoop from 2.6.0 to 2.6.5. In my case that solved the Stopwatch problem when calling HBaseAdmin.tableExists (other dependencies: HBase 1.2.0, Spark 2.0.1, Scala 2.11.8), although the HBase-side fix for this issue is scheduled for 1.3.0 and is not yet usable in production (link).
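
In a Maven build, pinning the Hadoop version might look like this (a sketch; it assumes Hadoop is pulled in via hadoop-client and that the version is controlled by a property, which may differ in your project):

<properties>
    <hadoop.version>2.6.5</hadoop.version>
</properties>
...
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
</dependency>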

Answer 2 (score: 0):

I'm using Spark 2.4.4 and still hit this problem! In any case, I don't actually need the output of this logging, so I simply changed the logging level for the class involved (the Stopwatch call appears to sit behind a debug-level logging guard, so raising the level keeps it from being reached):

<logger name="org.apache.hadoop.mapred.FileInputFormat" level="INFO" additivity="false">
    <appender-ref ref="CONSOLE"/>
</logger>
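
If the application uses Spark's default log4j configuration rather than logback, the equivalent entry in conf/log4j.properties would be something like this (an untested sketch):

# Keep FileInputFormat above DEBUG so the Stopwatch-based timing path is skipped
log4j.logger.org.apache.hadoop.mapred.FileInputFormat=INFO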