Error loading Logging, invalid LOC header

Date: 2018-05-19 05:21:35

Tags: scala apache-spark apache-spark-sql pom.xml

I have the following pom.xml:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.1.0</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
        <version>2.1.1</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.1.0</version>
    </dependency>

    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.1.0</version>
    </dependency>
</dependencies>
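
Note that the dependencies above mix Spark versions: spark-streaming-kafka-0-8_2.11 is at 2.1.1 while spark-core_2.11, spark-sql_2.11, and spark-streaming_2.11 are at 2.1.0. Also, spark-streaming-kafka-0-8 is the DStream connector; the Structured Streaming source used by readStream.format("kafka") lives in the spark-sql-kafka-0-10_2.11 artifact. As a sanity check, a minimal sketch (not from the original post) that prints the Spark version actually resolved onto the classpath:

    // Sanity-check sketch: print the Spark version that actually ends up
    // on the runtime classpath. With mixed 2.1.0/2.1.1 artifacts in the
    // pom, this shows which spark-core Maven's dependency mediation picked.
    object SparkVersionCheck {
      def main(args: Array[String]): Unit = {
        println(org.apache.spark.SPARK_VERSION)
      }
    }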

I get the following error when I run my code:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/internal/Logging
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)

I know Logging was available in Spark 1.5.2 and earlier. But I want to work on version 2.1, and all of my jars are upgraded accordingly, so why am I getting this Logging error, and how can I get rid of it without downgrading any jars?
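
The "invalid LOC header" in the title is a java.util.zip error and typically means one of the jars in the local Maven repository (~/.m2) is corrupted, e.g. by an interrupted download; the corruption then surfaces as a NoClassDefFoundError when the JVM tries to define a class from that jar. A hedged diagnostic sketch (not from the original post) that locates which jar, if any, supplies the missing class without forcing the JVM to define it:

    // Diagnostic sketch: ask the classloader where the missing class file
    // lives. A null result means no jar on the classpath contains it; a
    // jar URL under ~/.m2 identifies the file to delete so Maven can
    // re-download it if it is corrupted.
    object WhereIsLogging {
      def main(args: Array[String]): Unit = {
        val url = getClass.getClassLoader
          .getResource("org/apache/spark/internal/Logging.class")
        println(url)
      }
    }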

My code is:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.Encoders
import org.apache.spark.sql.functions.from_json

val spark = SparkSession
  .builder
  .appName("Test Data")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "192.168.0.40:9092")
  .option("zookeeper.connect", "192.168.0.40:2181")
  .option("subscribe", "topic")
  .option("startingOffsets", "earliest")
  .option("max.poll.records", 100)
  .option("failOnDataLoss", false)
  .load()

// `event` is a user-defined case class (not shown in the post)
val schema = Encoders.product[event].schema

val ds = df.select(from_json($"value" cast "string", schema)).as[event]

val query = ds.writeStream
  .outputMode("append")
  .queryName("table")
  .format("console")
  .start()
query.awaitTermination()
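
As an aside on the snippet above: from_json returns a single struct column, so the select produces one nested column rather than the top-level fields that .as[event] needs to bind to. A common pattern (a sketch only, reusing the same user-defined event case class and schema) is to alias and flatten the struct first:

    // Sketch: alias the struct produced by from_json, flatten it to
    // top-level columns, and only then map onto the case class.
    val flattened = df
      .select(from_json($"value" cast "string", schema).as("data"))
      .select("data.*")
      .as[event]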

0 Answers:

No answers yet.