Best way to read a TSV file with Apache Spark in Java

Asked: 2016-08-03 18:38:46

Tags: java csv apache-spark

I have a TSV file whose first line is a header row. I want to create a JavaPairRDD from this file. Currently I am doing it along the following lines:

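Roughly, I read the raw lines with textFile, grab the header with first(), filter it out, and then build the pairs in a second step (a minimal sketch, assuming tab-separated lines and keying on the first column):

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import scala.Tuple2;

// javaSparkContext is an existing JavaSparkContext
JavaRDD<String> lines = javaSparkContext.textFile("myfile.tsv");   // step 1: read the raw lines
String header = lines.first();                                     // the first line is the header
JavaRDD<String> data = lines.filter(line -> !line.equals(header)); // drop the header row

// step 2: split on tabs and build (key, value) pairs
JavaPairRDD<String, String[]> pairs = data.mapToPair(line -> {
    String[] fields = line.split("\t");
    return new Tuple2<>(fields[0], fields);                        // key on the first column
});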

I would like to know whether there is a way to have javaSparkContext read and process the file directly, instead of splitting the operation into two parts.

Edit: this is not a duplicate of How do I convert csv file to rdd, because I am looking for an answer in Java rather than Scala.

4 Answers:

Answer 0 (score: 3)

Use https://github.com/databricks/spark-csv:

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SQLContext sqlContext = new SQLContext(sc);

// setting "delimiter" to "\t" makes spark-csv parse a tab-separated (TSV) file
DataFrame df = sqlContext.read()
    .format("com.databricks.spark.csv")
    .option("inferSchema", "true")
    .option("header", "true")
    .option("delimiter", "\t")
    .load("cars.csv");

// write a subset of the columns back out, again with a header row
df.select("year", "model").write()
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .save("newcars.csv");

Answer 1 (score: 1)

Try the following code to read the CSV file and create a JavaPairRDD; for a TSV file, split on "\t" instead of ",".

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

public class SparkCSVReader {

    public static void main(String[] args) {

        SparkConf conf = new SparkConf().setAppName("CSV Reader");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> allRows = sc.textFile("c:\\temp\\test.csv"); // read csv file
        String header = allRows.first(); // take out the header
        JavaRDD<String> filteredRows = allRows.filter(row -> !row.equals(header)); // filter out the header row
        JavaPairRDD<String, MyCSVFile> filteredRowsPairRDD = filteredRows.mapToPair(parseCSVFile); // create pairs
        filteredRowsPairRDD.foreach(data -> {
            System.out.println(data._1() + " ### " + data._2().toString()); // print row and object
        });
        sc.stop();
        sc.close();
    }

    // MyCSVFile is your own value class holding the parsed fields
    private static PairFunction<String, String, MyCSVFile> parseCSVFile = (row) -> {
        String[] fields = row.split(","); // use "\t" here for a TSV file
        return new Tuple2<String, MyCSVFile>(row, new MyCSVFile(fields[0], fields[1], fields[2]));
    };

}

You can also use Databricks spark-csv (https://github.com/databricks/spark-csv). The spark-csv functionality is built into Spark 2.0.0 as well.

Answer 2 (score: 0)

I'm the author of uniVocity-parsers, so I can't help much with the Spark side of this, but I believe something like the following will work for you:

TsvParserSettings parserSettings = new TsvParserSettings(); // assumed setup for the TSV parser
parserSettings.setHeaderExtractionEnabled(true); //captures the header row

parserSettings.setProcessor(new AbstractRowProcessor() {
        @Override
        public void rowProcessed(String[] row, ParsingContext context) {
            String[] headers = context.headers(); //not sure if you need them
            // schematic: adapt this to however you actually build your pairs,
            // e.g. key = row[0], value = myObjectFromArray(row)
            JavaPairRDD<String, MyObject> myObjectRDD = javaSparkContext
                    .mapToPair(row -> new Tuple2<>(row[0], myObjectFromArray(row)));
            //process your stuff.
        }
    });

If you want to process each row concurrently, you can wrap this in a ConcurrentRowProcessor:

parserSettings.setProcessor(new ConcurrentRowProcessor(new AbstractRowProcessor() {
        @Override
        public void rowProcessed(String[] row, ParsingContext context) {
            String[] headers = context.headers(); //not sure if you need them
            // schematic: build your (key, value) pair from the parsed row here
            JavaPairRDD<String, MyObject> myObjectRDD = javaSparkContext
                    .mapToPair(row -> new Tuple2<>(row[0], myObjectFromArray(row)));
            //process your stuff.
        }
    }, 1000)); //1000 rows loaded in memory.

Then just invoke the parser:

new TsvParser(parserSettings).parse(myFile);
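
If you prefer to collect everything first and then hand it to Spark, one possible way to end up with the JavaPairRDD the question asks for (a sketch assuming uniVocity-parsers 2.x; the file path, key choice and the existing javaSparkContext are assumptions) is:

import com.univocity.parsers.tsv.TsvParser;
import com.univocity.parsers.tsv.TsvParserSettings;
import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

TsvParserSettings settings = new TsvParserSettings();
settings.setHeaderExtractionEnabled(true);        // strip the header row while parsing

// parse every data row into a String[]
List<String[]> rows = new TsvParser(settings).parseAll(new File("myfile.tsv"));

// build (key, value) pairs locally and hand them to Spark in one go
List<Tuple2<String, String[]>> pairs = new ArrayList<>();
for (String[] r : rows) {
    pairs.add(new Tuple2<>(r[0], r));             // key on the first column
}
JavaPairRDD<String, String[]> pairRDD = javaSparkContext.parallelizePairs(pairs);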

Hope this helps!

Answer 3 (score: 0)

Apache Spark 2.x has a built-in CSV reader, so you don't need to use https://github.com/databricks/spark-csv:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

/**
 *
 * @author cpu11453local
 */
public class Main {
    public static void main(String[] args) {


        SparkSession spark = SparkSession.builder()
                .master("local")
                .appName("meowingful")
                .getOrCreate();

        Dataset<Row> df = spark.read()
                    .option("header", "true")
                    .option("delimiter","\t")
                    .csv("hdfs://127.0.0.1:9000/data/meow_data.csv");

        df.show();
    }
}

And the Maven pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.meow.meowingful</groupId>
    <artifactId>meowingful</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11 -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>


        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
    </dependencies>

</project>