Kafka Stream to HDFS
External table in Hive
import org.apache.spark.SparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.Trigger;
import org.apache.spark.sql.types.DataTypes;

public static void main(String[] args) throws Exception {
    String brokers = "quickstart:9092";
    String topics = "simple_topic_6";
    String master = "local[*]";

    SparkSession sparkSession = SparkSession
            .builder().appName(EventKafkaToParquet.class.getName())
            .master(master).getOrCreate();
    SQLContext sqlContext = sparkSession.sqlContext();
    SparkContext context = sparkSession.sparkContext();
    context.setLogLevel("ERROR");

    // Read the raw Kafka stream; the payload arrives in the binary "value" column.
    Dataset<Row> rawDataSet = sparkSession.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", brokers)
            .option("subscribe", topics)
            .load();
    rawDataSet.printSchema();

    // Cast the Kafka value to a string and expose it as the "employee" column.
    rawDataSet = rawDataSet.withColumn("employee", rawDataSet.col("value").cast(DataTypes.StringType));
    rawDataSet.createOrReplaceTempView("basicView");
    Dataset<Row> writeDataset = sqlContext.sql("select employee from basicView");

    // Write the stream to HDFS as Parquet, triggering every 5 seconds.
    writeDataset
            .repartition(1)
            .writeStream()
            .option("path", "/user/cloudera/employee/")
            .option("checkpointLocation", "/user/cloudera/employee.checkpoint/")
            .format("parquet")
            .trigger(Trigger.ProcessingTime(5000))
            .start()
            .awaitTermination();
}
Now I want to create a Hive view on top of the employee_raw table that displays the output as
CREATE EXTERNAL TABLE employee_raw ( employee STRING )
STORED AS PARQUET
LOCATION '/user/cloudera/employee';
The desired output of the employee_raw table is
firstName, lastName, street, city, state, zip
Thanks for your input.
Answer 0 (score: 1)
Based on your description, I think what you are really asking is "Extract values from JSON string in Hive", so you can find the answer in the linked thread.
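For illustration, a minimal sketch of such a view using Hive's built-in get_json_object UDF. This assumes the employee column holds a flat JSON object whose keys match the six field names in the question; the view name and JSON paths are illustrative, not from the original post:

CREATE VIEW employee_view AS
SELECT
  get_json_object(employee, '$.firstName') AS firstName,
  get_json_object(employee, '$.lastName')  AS lastName,
  get_json_object(employee, '$.street')    AS street,
  get_json_object(employee, '$.city')      AS city,
  get_json_object(employee, '$.state')     AS state,
  get_json_object(employee, '$.zip')       AS zip
FROM employee_raw;

Each get_json_object call returns the value at the given JSON path as a string (or NULL if the key is absent); json_tuple is an alternative that extracts several keys in a single call.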