Why does the same operation behave differently when run from Spark and from the Hive shell?

Time: 2020-07-17 12:29:02

Tags: apache-spark hadoop hive orc

This code inserts the data from Spark:

        String warehouseLocation = new File("spark-warehouse").getAbsolutePath();
        SparkSession sparkSession = SparkSession.builder()
                .appName(appName)
                .config("spark.sql.warehouse.dir", warehouseLocation)
                .config("spark.sql.catalogImplementation","hive")
                .enableHiveSupport()
                .config("hive.exec.dynamic.partition", "true")
                .config("hive.exec.dynamic.partition.mode", "nonstrict")
                .getOrCreate();
        JavaStreamingContext jssc = new JavaStreamingContext(new JavaSparkContext(sparkSession.sparkContext()),
                Durations.seconds(duration));

        SQLContext sqlContext = sparkSession.sqlContext();
        sqlContext.sql("CREATE TABLE IF NOT EXISTS " + tableName + " (value1 STRING, value2 STRING, value3 STRING, " +
                "value4 STRING, value5 STRING, value6 STRING, value7 STRING) PARTITIONED BY (year STRING, mounth STRING, day STRING)" +
                " STORED AS ORC");

        
        sqlContext.sql("SET hive.merge.tezfiles=true");
        sqlContext.sql("SET hive.merge.mapfiles=true");
        sqlContext.sql( "SET hive.merge.size.per.task=256000000");
        sqlContext.sql ( "SET hive.merge.smallfiles.avgsize=16000000");
        sqlContext.sql("SET hive.merge.orcfile.stripe.level=true;");


        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", broker);
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "use_a_separate_group_id_for_each_stream");
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", false);

        Collection<String> topicsSet = Collections.singletonList(topic);

        // Create direct kafka stream with brokers and topics
        JavaInputDStream<ConsumerRecord<String, String>> messages = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.Subscribe(topicsSet, kafkaParams));

        // Extract the message payloads from the Kafka records
        JavaDStream<String> lines = messages.map(ConsumerRecord::value);
        lines.foreachRDD(new VoidFunction<JavaRDD<String>>() {
            @Override
            public void call(JavaRDD<String> rdd) {
                if (!rdd.isEmpty()) {
                    JavaRDD<Data> dataRDD = rdd.map(new Function<String, Data>() {
                        @Override
                        public Data call(String msg) {
                            try {
                                return Data.insertDataByString(msg);
                            } catch (ParseException e) {
                                e.printStackTrace();
                            }

                                // NOTE: null elements will make the job fail when the
                                // DataFrame built below is evaluated
                                return null;
                        }
                    });

                    Dataset<Row> dataRow = sqlContext.createDataFrame(dataRDD, Data.class);
                    dataRow.createOrReplaceTempView("temp_table");

                    sqlContext.sql("insert into " + tableName + " partition(year,mounth,day) select value1, value2, " +
                            "value3, value4, value5, value6, value7, year, mounth, day from temp_table");
                    //dataRow.write().format("orc").partitionBy("year", "day").mode(SaveMode.Append).insertInto(tableName);
                    //sqlContext.sql("ALTER TABLE " + tableName + " PARTITION(year='2020', mounth='4', day='26') " +  " CONCATENATE");

                }
            }
        });

        // start the streaming context so the micro-batches above are actually processed
        jssc.start();
        jssc.awaitTermination();
When this code runs, the table is created under hdfs://master.vmware.local:8020/apps/spark/warehouse/tablename/year=2020/mounth=4/day=26, and inside day=26 there are a number of files ending in .c000. If instead the table is created from the Hive shell, it ends up in a different location, hdfs://master.vmware.local:8020/warehouse/tablespace/managed/hive/table_name/year=2020/mounth=4/day=26/, and inside day=26 the files are _orc_acid_version and _bucket_000000.
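The _orc_acid_version and bucket files suggest that the table created from the Hive shell is a Hive 3 managed, transactional (ACID) table in the Hive warehouse, while Spark created a plain, non-transactional ORC table under its own spark.sql.warehouse.dir. A minimal sketch for confirming this (table_name is a placeholder for the actual table) is to compare the table metadata seen from each side:

// Sketch: inspect how each catalog sees the table ("table_name" is a placeholder).
// From Spark:
sqlContext.sql("DESCRIBE FORMATTED table_name").show(200, false);
// From the Hive shell / beeline, SHOW CREATE TABLE table_name would typically show
// TBLPROPERTIES ('transactional'='true') and the managed-warehouse location for a
// table created there with Hive 3 defaults.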

My goal is to create ORC files with Spark, but it seems that with Spark the data is being saved with Hive's default table settings.
How can I save Spark data into Hive as ORC files?
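If ACID semantics are not required, one possible sketch is to create the table explicitly as an external, plain ORC table at a known location and write into it through the DataFrame API, so that both Spark and the Hive shell see ordinary ORC files. The table name and HDFS path below are placeholders, not taken from the post; on Hive 3 / HDP 3, writing into Hive's managed ACID tables from Spark generally requires the Hive Warehouse Connector instead.

// Sketch under the assumption that a plain (non-ACID) ORC table is acceptable.
// "my_orc_table" and the LOCATION path are placeholders.
sqlContext.sql("CREATE EXTERNAL TABLE IF NOT EXISTS my_orc_table ("
        + "value1 STRING, value2 STRING, value3 STRING, value4 STRING, "
        + "value5 STRING, value6 STRING, value7 STRING) "
        + "PARTITIONED BY (year STRING, mounth STRING, day STRING) "
        + "STORED AS ORC "
        + "LOCATION 'hdfs://master.vmware.local:8020/apps/spark/warehouse/my_orc_table'");

// insertInto() uses the table's declared format (ORC) and matches columns by position,
// so the partition columns must come last; dynamic partitioning must be enabled,
// which the settings at the top of the job already do.
dataRow.select("value1", "value2", "value3", "value4", "value5", "value6", "value7",
                "year", "mounth", "day")
       .write()
       .mode(SaveMode.Append)
       .insertInto("my_orc_table");

With a setup like this, the files under day=26 stay as regular ORC part files that both engines can read.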

0 Answers:

There are no answers yet.