Build failure for the spark-cassandra connector

Posted: 2015-11-04 11:51:21

Tags: apache-spark sbt datastax spark-cassandra-connector

I am trying to build the spark-cassandra connector by following this guide:

http://www.planetcassandra.org/blog/kindling-an-introduction-to-spark-with-cassandra/

The guide also asks you to download the connector from git and build it with sbt. However, when I try to run the command ./sbt/sbt assembly, it fails with the following errors:

Launching sbt from sbt/sbt-launch-0.13.8.jar
[info] Loading project definition from /home/naresh/Desktop/spark-cassandra-connector/project
Using releases: https://oss.sonatype.org/service/local/staging/deploy/maven2 for releases
Using snapshots: https://oss.sonatype.org/content/repositories/snapshots for snapshots

  Scala: 2.10.5 [To build against Scala 2.11 use '-Dscala-2.11=true']
  Scala Binary: 2.10
  Java: target=1.7 user=1.7.0_79

[info] Set current project to root (in build file:/home/naresh/Desktop/spark-cassandra-connector/)
[warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
[warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
[warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
[warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
[warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
[info] Compiling 140 Scala sources and 1 Java source to /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/target/scala-2.10/classes...
[error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraCatalog.scala:48: not found: value processTableIdentifier
[error]     val id = processTableIdentifier(tableIdentifier).reverse.lift
[error]              ^
[error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraCatalog.scala:134: value toSeq is not a member of org.apache.spark.sql.catalyst.TableIdentifier
[error]     cachedDataSourceTables.refresh(tableIdent.toSeq)
[error]                                               ^
[error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraSQLContext.scala:94: not found: value BroadcastNestedLoopJoin
[error]       BroadcastNestedLoopJoin
[error]       ^
[error] three errors found
[info] Compiling 11 Scala sources to /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector-embedded/target/scala-2.10/classes...
[warn] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector-embedded/src/main/scala/com/datastax/spark/connector/embedded/SparkTemplate.scala:69: value actorSystem in class SparkEnv is deprecated: Actor system is no longer supported as of 1.4.0
[warn]   def actorSystem: ActorSystem = SparkEnv.get.actorSystem
[warn]                                               ^
[warn] one warning found
[error] (spark-cassandra-connector/compile:compileIncremental) Compilation failed
[error] Total time: 27 s, completed 4 Nov, 2015 12:34:33 PM
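For context, the build steps that produced the log above amount to the following shell session. This is a sketch: the GitHub URL is assumed (the question only says "download the connector from git"), and the -Dscala-2.11=true flag is taken from the banner printed by sbt in the log.

```shell
# Clone the connector source (repository URL assumed, per the DataStax project)
git clone https://github.com/datastax/spark-cassandra-connector.git
cd spark-cassandra-connector

# Build the assembly jar; this is the command that fails in the log above
./sbt/sbt assembly

# The sbt banner suggests a Scala 2.11 build is also available:
./sbt/sbt -Dscala-2.11=true assembly
```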

1 Answer:

Answer 0 (score: 0)

This worked for me: run mvn -DskipTests clean package

  • You can find the Spark build command in the README.md file at the top of the Spark directory.
  • Before running that command, you need to configure Maven to use more memory than usual by setting MAVEN_OPTS: export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
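Put together, the answer's two steps look like the shell session below. The MAVEN_OPTS values come straight from the answer; note that -XX:MaxPermSize only applies on Java 7 (the log shows Java 1.7.0_79) and is ignored with a warning on Java 8+.

```shell
# Give Maven more memory before building (values from the answer above)
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

# Build, skipping tests; the exact build command for your checkout
# is documented in the README.md at the top of the source directory
mvn -DskipTests clean package
```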