Adding a column to a Spark Dataset and transforming the data

Date: 2017-04-10 12:27:42

Tags: java apache-spark dataset

I am loading a Parquet file as a Spark Dataset. I can query it and create new Datasets from those queries. Now I want to add a new column ("hashkey") to the Dataset and generate its values (e.g. md5sum(nameValue)). How can I do that?

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public static void main(String[] args) {

    SparkSession spark = SparkSession.builder()
            .appName("Java Spark SQL basic example")
            .master("local")
            .config("spark.sql.warehouse.dir", "file:///C:\\spark_warehouse")
            .getOrCreate();

    Dataset<Row> df = spark.read().parquet("meetup.parquet");
    df.show();

    df.createOrReplaceTempView("tmpview");

    Dataset<Row> namesDF = spark.sql("SELECT * FROM tmpview where name like 'Spark-%'");

    namesDF.show();

}

The output looks like this:

+-------------+-----------+-----+---------+--------------------+
|         name|meetup_date|going|organizer|              topics|
+-------------+-----------+-----+---------+--------------------+
|    Spark-H20| 2016-01-01|   50|airisdata|[h2o, repeated sh...|
|   Spark-Avro| 2016-01-02|   60|airisdata|    [avro, usecases]|
|Spark-Parquet| 2016-01-03|   70|airisdata| [parquet, usecases]|
+-------------+-----------+-----+---------+--------------------+

2 answers:

Answer 0 (score: 1)

Use Spark SQL's md5 function in your query:

Dataset<Row> namesDF = spark.sql("SELECT *, md5(name) as modified_name FROM tmpview where name like 'Spark-%'");
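For reference, Spark SQL's md5 returns the 32-character lowercase hex digest of the column's value, the same result java.security.MessageDigest produces. A minimal standalone sketch of what md5(name) computes for each row (the class and method names here are illustrative, not part of the Spark API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Demo {
    // 32-character lowercase hex MD5 digest, matching Spark SQL's md5() output
    static String md5Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // MD5 is guaranteed to exist in every JVM
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // The hashkey value that md5(name) would produce for the first row
        System.out.println(md5Hex("Spark-H20"));
    }
}
```

Because the hash is computed per row from the name column, equal names always produce the same hashkey.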

Answer 1 (score: 0)

Dataset<Row> ds = sqlContext.read()
    .format("com.databricks.spark.csv")
    .option("inferSchema", "true")
    .option("header", "true")
    .option("delimiter","|")
    .load("/home/cloudera/Desktop/data.csv");
ds.printSchema();

This prints:

root
 |-- ReferenceValueSet_Id: integer (nullable = true)
 |-- ReferenceValueSet_Name: string (nullable = true)
 |-- Code_Description: string (nullable = true)
 |-- Code_Type: string (nullable = true)
 |-- Code: string (nullable = true)
 |-- CURR_FLAG: string (nullable = true)
 |-- REC_CREATE_DATE: timestamp (nullable = true)
 |-- REC_UPDATE_DATE: timestamp (nullable = true)

Dataset<Row> df1 = ds.withColumn("Key", functions.lit(1));
df1.printSchema();

After running the code above, a column with a constant value is appended:

root
 |-- ReferenceValueSet_Id: integer (nullable = true)
 |-- ReferenceValueSet_Name: string (nullable = true)
 |-- Code_Description: string (nullable = true)
 |-- Code_Type: string (nullable = true)
 |-- Code: string (nullable = true)
 |-- CURR_FLAG: string (nullable = true)
 |-- REC_CREATE_DATE: timestamp (nullable = true)
 |-- REC_UPDATE_DATE: timestamp (nullable = true)
 |-- Key: integer (nullable = true)

You can see that a column named Key has been added to the Dataset.

If you want the new column to be derived from another column instead of a constant value, you can add it with the code below.

// functions.lit() around a Column is redundant; pass the Column directly
Dataset<Row> df1 = ds.withColumn("Key", ds.col("Code"));
df1.printSchema();
df1.show();

Now whatever value is present in the Code column is copied into the new column named Key. Applied to the original question, the same pattern with Spark's built-in md5 function produces the hashkey column: df.withColumn("hashkey", functions.md5(df.col("name"))).