Create a VIEW over a Hive table by defining a schema for a column containing JSON

Time: 2017-09-18 15:32:08

Tags: hive hdfs avro parquet

  1. I store the raw JSON strings from my Kafka stream into HDFS as Parquet
  2. I created an external table in Hive over the HDFS folder
  3. Now I want to create a VIEW over the RAW data stored in the Hive table
  4. Kafka Stream to HDFS



    import org.apache.spark.SparkContext;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SQLContext;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.Trigger;
    import org.apache.spark.sql.types.DataTypes;
    
    public class EventKafkaToParquet {
    
        public static void main(String[] args) throws Exception {
    
            String brokers = "quickstart:9092";
            String topics = "simple_topic_6";
            String master = "local[*]";
    
            SparkSession sparkSession = SparkSession
                    .builder().appName(EventKafkaToParquet.class.getName())
                    .master(master).getOrCreate();
            SQLContext sqlContext = sparkSession.sqlContext();
            SparkContext context = sparkSession.sparkContext();
            context.setLogLevel("ERROR");
    
            // Read the raw Kafka records as a streaming Dataset
            Dataset<Row> rawDataSet = sparkSession.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", brokers)
                    .option("subscribe", topics).load();
            rawDataSet.printSchema();
    
            // Kafka's "value" column is binary; cast it to a string column named "employee"
            rawDataSet = rawDataSet.withColumn("employee", rawDataSet.col("value").cast(DataTypes.StringType));
            rawDataSet.createOrReplaceTempView("basicView");
            Dataset<Row> writeDataset = sqlContext.sql("select employee from basicView");
    
            // Write the JSON strings to HDFS as Parquet, triggering every 5 seconds
            writeDataset
                    .repartition(1)
                    .writeStream()
                    .option("path", "/user/cloudera/employee/")
                    .option("checkpointLocation", "/user/cloudera/employee.checkpoint/")
                    .format("parquet")
                    .trigger(Trigger.ProcessingTime(5000))
                    .start()
                    .awaitTermination();
        }
    }
    

    Now I want to create a HIVE view on top of the employee_raw table. The external table on Hive is:

    CREATE EXTERNAL TABLE employee_raw ( employee STRING )  
    STORED AS PARQUET
    LOCATION '/user/cloudera/employee' ;
    

    The desired output of the view over employee_raw is

    firstName, lastName, street, city, state, zip
    

    Appreciate your inputs.

1 Answer:

Answer 0 (score: 1)

Based on your description, I think what you mainly want is to "Extract values from JSON string in Hive", so you can find the answer in the linked thread.
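
As a sketch of that approach: Hive's built-in `get_json_object` UDF can pull individual keys out of a JSON string, so a view over `employee_raw` might look like the following. The view name `employee_view` is a placeholder, and the JSON keys are assumed to match the column names listed in the question (`firstName`, `lastName`, `street`, `city`, `state`, `zip`).

```sql
-- Hypothetical view name; assumes each value in `employee` is a flat JSON
-- object whose keys match the columns the question asks for.
CREATE VIEW employee_view AS
SELECT
  get_json_object(employee, '$.firstName') AS firstName,
  get_json_object(employee, '$.lastName')  AS lastName,
  get_json_object(employee, '$.street')    AS street,
  get_json_object(employee, '$.city')      AS city,
  get_json_object(employee, '$.state')     AS state,
  get_json_object(employee, '$.zip')       AS zip
FROM employee_raw;
```

When many keys are extracted from the same string, `json_tuple` with `LATERAL VIEW` is a common alternative, since it parses the JSON only once per row instead of once per extracted key.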