Scala Spark - create nested JSON output from a simple DataFrame

Posted: 2016-07-04 15:25:48

Tags: json apache-spark apache-spark-sql spark-dataframe

Thanks for the reply. The problem I'm facing is writing these structs out as nested JSON. Somehow 'toJSON' does not work; it simply skips the nested fields and produces a flat structure. How can I write the nested JSON format to HDFS?

1 answer:

Answer 0: (score: 1)

You should create struct fields from the fields that have to be nested together. Below is a working example. Suppose you have employee data in CSV format containing company name, employee name, and department name, and you want to list, in JSON, all employees of every department of every company. Here is the code for that.
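For reference, the input file /tmp/data/emp.csv that the code reads is expected to have a header row and the columns name, department, and company. A hypothetical sample (not part of the original answer) could look like this:

  name,department,company
  Alice,Engineering,Acme
  Bob,Engineering,Acme
  Carol,Sales,Acme
  Dave,Sales,Globex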

  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;
public class JsonExample {
public static void main(String [] args)
 {
    SparkSession sparkSession = SparkSession
              .builder()
              .appName("JsonExample")
              .master("local")
              .getOrCreate();

    //read the csv file
    Dataset<Row> employees = sparkSession.read().option("header", "true").csv("/tmp/data/emp.csv");
    //create the temp view
    employees.createOrReplaceTempView("employees");

    //First, group the employees by company AND department
    sparkSession.sql("select company,department,collect_list(name) as department_employees from employees group by company,department").createOrReplaceTempView("employees");
    /*Now build a struct with the built-in struct() function.
     * The struct will contain the department and its list of employees.
    */
    sparkSession.sql("select company,collect_list(struct(department,department_employees)) as department_info from employees group by company").toJSON().show(false);



 }
}
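
The code above only prints the JSON strings via show(). To actually persist the nested JSON to HDFS, which is what the question asks, the grouped result can be written out with the DataFrame writer instead; unlike show(), write().json() keeps the struct and array columns nested in the output files. A minimal sketch, assuming a hypothetical output path:

    //inside main(), replacing the final toJSON().show(false) call:
    Dataset<Row> nested = sparkSession.sql("select company,collect_list(struct(department,department_employees)) as department_info from employees group by company");
    //each part file will contain one JSON object per line, with department_info as a nested array of structs
    nested.write().json("hdfs:///tmp/output/employees_nested");   //hypothetical HDFS path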

You can find the same example on my blog: http://baahu.in/spark-how-to-generate-nested-json-using-dataset/