What I am trying to achieve
What I really need is to create a Hive table that can read the JSON data from the "value" column, apply a schema, and emit the columns, so that I can create whatever tables I need on top of the raw data.
I have already created Hive tables directly on JSON files and extracted the columns, but extracting columns from Parquet and applying the JSON schema is what is tripping me up.
employee-sample.json
{"name":"Dave", "age" : 30 , "DOB":"1987-01-01"}
{"name":"Steve", "age" : 31 , "DOB":"1986-01-01"}
{"name":"Kumar", "age" : 32 , "DOB":"1985-01-01"}
Simple Spark code that converts the JSON to Parquet
simple-loader.java
import java.util.UUID;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.SparkSession;

public class JsonToParquet {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder()
                .appName(JsonToParquet.class.getName())
                .master("local[*]")
                .getOrCreate();
        // read each JSON line as a plain string, so the whole record lands in one "value" column
        Dataset<String> eventsDataSet = sparkSession.read().textFile("D:\\dev\\employee-sample.json");
        eventsDataSet.createOrReplaceTempView("rawView");
        // write the raw JSON strings out as a single-column Parquet file
        sparkSession.sqlContext().sql("select string(value) as value from rawView")
                .write()
                .parquet("D:\\dev\\" + UUID.randomUUID().toString());
        sparkSession.close();
    }
}
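For comparison, a minimal sketch (not part of the original post): if Spark is instead allowed to infer the JSON schema while loading, the resulting Parquet files already carry name, age and DOB as typed columns rather than one raw string. The class name and output path below are placeholders.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JsonToTypedParquet {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName(JsonToTypedParquet.class.getName())
                .master("local[*]")
                .getOrCreate();
        // read().json() infers the schema from the JSON lines,
        // so the DataFrame already has DOB, age and name columns
        Dataset<Row> employees = spark.read().json("D:\\dev\\employee-sample.json");
        employees.printSchema();
        // the Parquet files then store typed columns rather than a single "value" string
        employees.write().parquet("D:\\dev\\employee-typed");
        spark.close();
    }
}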
Hive table on top of the Parquet files
CREATE EXTERNAL TABLE EVENTS_RAW (
VALUE STRING)
STORED AS PARQUET
LOCATION 'hdfs://XXXXXX:8020/employee/data_raw';
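As a side note, a minimal sketch (not part of the original question) of how individual fields can still be pulled out of the VALUE column of EVENTS_RAW with Hive's built-in get_json_object while the data stays as a raw JSON string:

-- each get_json_object call extracts one field from the JSON stored in VALUE
SELECT get_json_object(VALUE, '$.name') AS NAME,
       get_json_object(VALUE, '$.age')  AS AGE,
       get_json_object(VALUE, '$.DOB')  AS DOB
FROM EVENTS_RAW;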
I tried setting a JSON SerDe (ROW FORMAT SERDE 'com.proofpoint.hive.serde.JsonSerde'), but it only helps when the data is stored as JSON files. Expected format:
CREATE EXTERNAL TABLE EVENTS_DATA (
NAME STRING,
AGE STRING,
DOB STRING)
??????????????????????????????
Answer 0 (score: 0)
Example of creating a Hive external table:
public static final String CREATE_EXTERNAL = "CREATE EXTERNAL TABLE %s" +
        " (%s) " +
        " PARTITIONED BY(%s) " +
        " STORED AS %s" +
        " LOCATION '%s'";

/**
 * Will create an external table and recover the partitions.
 */
public void createExternalTable(SparkSession sparkSession, StructType schema, String tableName,
                                SparkFormat format, List<StructField> partitions, String tablePath) {
    String createQuery = createTableString(schema, tableName, format, partitions, tablePath);
    logger.info("Going to create External table with the following query:\n " + createQuery);
    sparkSession.sql(createQuery);
    logger.debug("Finished creating External table with the following query:\n " + createQuery);
    recoverPartitions(sparkSession, tableName);
}

public String createTableString(StructType schema, String tableName, SparkFormat format,
                                List<StructField> partitions, String tablePath) {
    Set<String> partitionNames = partitions.stream()
            .map(struct -> struct.name())
            .collect(Collectors.toSet());
    String columns = Arrays.stream(schema.fields())
            // filter out the partition columns; they go into the PARTITIONED BY clause instead
            .filter(field -> !partitionNames.contains(field.name()))
            .map(HiveTableHelper::fieldToStringBuilder)
            .collect(Collectors.joining(", "));
    String partitionsString = partitions.stream()
            .map(HiveTableHelper::fieldToStringBuilder)
            .collect(Collectors.joining(", "));
    return String.format(CREATE_EXTERNAL, tableName, columns, partitionsString, format.name(), tablePath);
}

/**
 * Runs ALTER TABLE ... RECOVER PARTITIONS and refreshes the table in the catalog.
 */
public void recoverPartitions(SparkSession sparkSession, String table) {
    String query = "ALTER TABLE " + table + " RECOVER PARTITIONS";
    logger.debug("Start: " + query);
    sparkSession.sql(query);
    sparkSession.catalog().refreshTable(table);
    logger.debug("Finish: " + query);
}

public static StringBuilder fieldToStringBuilder(StructField field) {
    StringBuilder sb = new StringBuilder();
    sb.append(field.name()).append(" ").append(field.dataType().simpleString());
    return sb;
}
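A hedged usage sketch of the helper above. It assumes the methods live in the HiveTableHelper class referenced by the method references and that SparkFormat is an enum with a PARQUET constant (neither is shown in the answer); the DT partition column, table name and path are invented purely for illustration.

import java.util.Collections;
import java.util.List;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

// schema for the extracted columns, plus a hypothetical DT partition column
StructType schema = new StructType()
        .add("NAME", DataTypes.StringType)
        .add("AGE", DataTypes.StringType)
        .add("DOB", DataTypes.StringType)
        .add("DT", DataTypes.StringType);

List<StructField> partitions = Collections.singletonList(schema.apply("DT"));

// a Hive-enabled session is needed to create tables in the metastore
SparkSession sparkSession = SparkSession.builder().enableHiveSupport().getOrCreate();

// HiveTableHelper and SparkFormat.PARQUET are assumptions; table name and path are placeholders
new HiveTableHelper().createExternalTable(
        sparkSession,
        schema,
        "EVENTS_DATA",
        SparkFormat.PARQUET,
        partitions,
        "hdfs://XXXXXX:8020/employee/data_extracted");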