How to write an Avro file from a CSV file using Spark?

Asked: 2017-05-09 22:58:50

Tags: java csv apache-spark avro spark-avro

I get a NullPointerException when I try to write an Avro file from a DataFrame created from a CSV file:

 public static void main(String[] args) {
    SparkSession spark = SparkSession
        .builder()
        .appName("SparkCsvToAvro")
        .master("local")
        .getOrCreate();

    SQLContext context = new SQLContext(spark);

    // Read all CSV files under the resources directory into a DataFrame
    String path = "C:\\git\\sparkCsvToAvro\\src\\main\\resources";
    DataFrameReader read = context.read();
    Dataset<Row> csv = read.csv(path);

    // Write the DataFrame back out as Avro via the spark-avro data source
    DataFrameWriter<Row> write = csv.write();
    DataFrameWriter<Row> format = write.format("com.databricks.spark.avro");
    format.save("C:\\git\\sparkCsvToAvro\\src\\main\\resources\\avro");
}

My pom.xml:

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <junit.version>4.12</junit.version>
    <spark-core.version>2.1.0</spark-core.version>
    <maven-compiler-plugin.version>3.5.1</maven-compiler-plugin.version>
    <maven-compiler-plugin.source>1.8</maven-compiler-plugin.source>
    <maven-compiler-plugin.target>1.8</maven-compiler-plugin.target>
    <spark-avro.version>3.2.0</spark-avro.version>
    <spark-csv.version>1.5.0</spark-csv.version>
    <spark-sql.version>2.1.0</spark-sql.version>
</properties>

...
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>${maven-compiler-plugin.version}</version>
            <configuration>
                <source>${maven-compiler-plugin.source}</source>
                <target>${maven-compiler-plugin.target}</target>
            </configuration>
        </plugin>
    </plugins>
</build>

<dependencies>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark-core.version}</version>
    </dependency>

    <dependency>
        <groupId>com.databricks</groupId>
        <artifactId>spark-avro_2.11</artifactId>
        <version>${spark-avro.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>${spark-sql.version}</version>
    </dependency>

</dependencies>

Exception stack trace:

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
...
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:404)

I don't know what I'm doing wrong. Maybe the dependencies are incorrect? Or is this just bad practice?

The NPE is thrown here:

    DataFrameWriter<Row> format = write.format("com.databricks.spark.avro");
    format.save("C:\\git\\sparkCsvToAvro\\src\\main\\resources\\avro");

"format" is null, and I don't know why.

1 Answer:

Answer 0 (score: 1)

The way to parse CSV in Spark 2.0 is as follows.

First, initialize a SparkSession object; by default it is available in the shell as spark:

val spark = org.apache.spark.sql.SparkSession.builder
        .master("local")
        .appName("Spark CSV Reader")
        .getOrCreate;
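
Since the question is in Java, a minimal Java equivalent of this initialization might look like the following sketch (same master and app name as the Scala snippet above):

    import org.apache.spark.sql.SparkSession;

    // Java equivalent of the Scala initialization above;
    // in Spark 2.x one SparkSession is the entry point for reads and writes
    SparkSession spark = SparkSession
            .builder()
            .master("local")
            .appName("Spark CSV Reader")
            .getOrCreate();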

Now load the CSV as a DataFrame/Dataset using the SparkSession object:

val df = spark.read
        .format("com.databricks.spark.csv")
        .option("header", "true") //reading the headers
        .option("mode", "DROPMALFORMED")
        .load("csv/file/path"); //.csv("csv/file/path") //spark 2.0 api

df.show()

Databricks provides the spark-avro library, which helps us read and write Avro data:

df.write.format("com.databricks.spark.avro").save(outputPath)
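
Putting the pieces together for the Java code in the question, an end-to-end CSV-to-Avro sketch might look like this (the paths are taken from the question; whether the header and DROPMALFORMED options fit the data is an assumption):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class SparkCsvToAvro {
        public static void main(String[] args) {
            // Single entry point in Spark 2.x; no separate SQLContext is needed
            SparkSession spark = SparkSession
                .builder()
                .appName("SparkCsvToAvro")
                .master("local")
                .getOrCreate();

            // Load the CSV files into a DataFrame
            Dataset<Row> csv = spark.read()
                .option("header", "true")          // treat the first row as column names
                .option("mode", "DROPMALFORMED")   // drop rows that fail to parse
                .csv("C:\\git\\sparkCsvToAvro\\src\\main\\resources");

            // Write the DataFrame out as Avro via the spark-avro data source
            csv.write()
                .format("com.databricks.spark.avro")
                .save("C:\\git\\sparkCsvToAvro\\src\\main\\resources\\avro");

            spark.stop();
        }
    }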