I am implementing an incremental load process for Hive table A. Table A is already created in Hive, is partitioned on YearMonth (a YYYYMM column), and holds the full volume of data.
We plan to continuously pull updates/inserts from the source and capture them in a Hive delta table.
As shown in the image below, the delta table indicates that the new updates relate to three partitions (201804 / 201611 / 201705).
For the incremental process, I am planning to:
INSERT INTO Delta2 SELECT * FROM TableA WHERE YYYYMM IN (SELECT DISTINCT YYYYMM FROM Delta);
Merge these 3 partitions from the delta table with the corresponding partitions in the original table (I can follow the Hortonworks 4-step strategy to apply the updates):
Merge Delta2 + Delta => the new 3 partitions.
Drop those 3 partitions from the original table:
Alter Table Drop Partitions 201804 / 201611 / 201705
Add the newly merged partitions back to the original table (with the new updates).
I need to automate these scripts. Can you suggest how to express the above logic in HiveQL or Spark, in particular how to identify the affected partitions dynamically and drop them from the original table?
Answer 0 (score: 0)
You can build a solution using PySpark. I will explain the approach with a few basic examples; you can modify it to fit your business requirements.
Assume you have a partitioned table in Hive like the one below.
CREATE TABLE IF NOT EXISTS udb.emp_partition_Load_tbl (
emp_id smallint
,emp_name VARCHAR(30)
,emp_city VARCHAR(10)
,emp_dept VARCHAR(30)
,emp_salary BIGINT
)
PARTITIONED BY (Year String, Month String)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS ORC;
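If you want to drive the whole walkthrough from PySpark rather than the Hive CLI, the same DDL can be submitted through spark.sql. This is just a convenience sketch; it assumes the udb database already exists and that the SparkSession was built with Hive support enabled.

from pyspark.sql import SparkSession

# Hive support is required so that Spark can create and write Hive-managed tables
spark = SparkSession.builder \
    .appName("emp-partition-load") \
    .enableHiveSupport() \
    .getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS udb.emp_partition_Load_tbl (
        emp_id smallint,
        emp_name VARCHAR(30),
        emp_city VARCHAR(10),
        emp_dept VARCHAR(30),
        emp_salary BIGINT
    )
    PARTITIONED BY (Year String, Month String)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
    STORED AS ORC
""")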
Say you have received a csv file with some input records that you want to load into this partitioned table:
1|vikrant singh rana|Gurgaon|Information Technology|20000
from pyspark.sql.functions import lit
from pyspark.sql.types import StructType, StructField, ShortType, StringType, LongType

# schema for the csv input; field names follow the sample output below, types follow the table DDL
userschema = StructType([StructField("emp-id", ShortType()), StructField("emp-name", StringType()),
                         StructField("emp-city", StringType()), StructField("emp-department", StringType()),
                         StructField("emp-salary", LongType())])

dataframe = spark.read.format("com.databricks.spark.csv") \
            .option("mode", "DROPMALFORMED") \
            .option("header", "false") \
            .schema(userschema) \
            .option("delimiter", "|").load("file:///filelocation/userinput")
# tag the records with the partition values for this load
newdf = dataframe.withColumn('year', lit('2018')).withColumn('month', lit('01'))
+------+------------------+--------+----------------------+----------+----+-----+
|emp-id|emp-name |emp-city|emp-department |emp-salary|year|month|
+------+------------------+--------+----------------------+----------+----+-----+
|1 |vikrant singh rana|Gurgaon |Information Technology|20000 |2018|01 |
+------+------------------+--------+----------------------+----------+----+-----+
# overwrite only the partitions present in the dataframe being written, not the whole table
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
# allow Hive dynamic partitioning so the partition values come from the data itself
spark.sql("set spark.hadoop.hive.exec.dynamic.partition=true")
spark.sql("set spark.hadoop.hive.exec.dynamic.partition.mode=nonstrict")
newdf.write.format('orc').mode("overwrite").insertInto('udb.emp_partition_Load_tbl')
Let's say you have another set of data that you want to load into a different partition. You would read that file and tag it with the new partition values in the same way, giving:
+------+--------+--------+--------------+----------+----+-----+
|emp-id|emp-name|emp-city|emp-department|emp-salary|year|month|
+------+--------+--------+--------------+----------+----+-----+
| 2| ABC| Gurgaon|HUMAN RESOURCE| 10000|2018| 02|
+------+--------+--------+--------------+----------+----+-----+
newdf.write.format('orc').mode("overwrite").insertInto('udb.emp_partition_Load_tbl')
> show partitions udb.emp_partition_Load_tbl;
+---------------------+--+
| partition |
+---------------------+--+
| year=2018/month=01 |
| year=2018/month=02 |
+---------------------+--+
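When you automate this, you can run the same check from inside the PySpark job instead of the Hive CLI; a small sketch:

# list the table's partitions from the Spark session (same information as the CLI output above)
spark.sql("show partitions udb.emp_partition_Load_tbl").show(truncate=False)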
Now suppose you have another set of records that belongs to an already existing partition.
3|XYZ|Gurgaon|HUMAN RESOURCE|80000
newdf = dataframe.withColumn('year', lit('2018')).withColumn('month',lit('02'))
+------+--------+--------+--------------+----------+----+-----+
|emp-id|emp-name|emp-city|emp-department|emp-salary|year|month|
+------+--------+--------+--------------+----------+----+-----+
| 3| XYZ| Gurgaon|HUMAN RESOURCE| 80000|2018| 02|
+------+--------+--------+--------------+----------+----+-----+
newdf.write.format('orc').mode("overwrite").insertInto('udb.emp_partition_Load_tbl')
> select * from udb.emp_partition_Load_tbl where year ='2018' and month ='02';
+---------+-----------+-----------+-----------------+-------------+-------+--------+--+
| emp_id | emp_name | emp_city | emp_dept | emp_salary | year | month |
+---------+-----------+-----------+-----------------+-------------+-------+--------+--+
| 3 | XYZ | Gurgaon | HUMAN RESOURCE | 80000 | 2018 | 02 |
| 2 | ABC | Gurgaon | HUMAN RESOURCE | 10000 | 2018 | 02 |
+---------+-----------+-----------+-----------------+-------------+-------+--------+--+
You can see below that the data in the other partition is left untouched.
> select * from udb.emp_partition_Load_tbl where year ='2018' and month ='01';
+---------+---------------------+-----------+-------------------------+-------------+-------+--------+--+
| emp_id | emp_name | emp_city | emp_dept | emp_salary | year | month |
+---------+---------------------+-----------+-------------------------+-------------+-------+--------+--+
| 1 | vikrant singh rana | Gurgaon | Information Technology | 20000 | 2018 | 01 |
+---------+---------------------+-----------+-------------------------+-------------+-------+--------+--+
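Coming back to your original ask of identifying the changed partitions in the delta table and refreshing only those in the base table: with dynamic partition overwrite you do not need the explicit ALTER TABLE ... DROP PARTITION step at all, because only the partitions present in the dataframe you write are replaced. Below is a minimal PySpark sketch of that flow. The table names (udb.table_a, udb.delta_tbl, udb.table_a_stage), the YYYYMM partition column, the emp_id key and the last_updated ordering column are assumptions standing in for your actual schema, and the keep-the-latest-row step is a simplified stand-in for the Hortonworks 4-step merge you mentioned.

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# overwrite only the partitions present in the dataframe being written
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

# 1. find the partitions touched by the delta load (hypothetical table/column names)
delta = spark.table("udb.delta_tbl")
changed_parts = [row[0] for row in delta.select("YYYYMM").distinct().collect()]

# 2. pull the current contents of just those partitions from the base table
base = spark.table("udb.table_a").where(F.col("YYYYMM").isin(changed_parts))

# 3. merge old and new rows, keeping only the latest version of each key
#    (assumes emp_id is the business key and last_updated orders the versions)
w = Window.partitionBy("emp_id").orderBy(F.col("last_updated").desc())
merged = (base.unionByName(delta)
              .withColumn("rn", F.row_number().over(w))
              .where(F.col("rn") == 1)
              .drop("rn"))

# 4. stage the merge result first: Spark will not insert-overwrite a Hive table
#    that is also being read in the same plan
merged.write.mode("overwrite").saveAsTable("udb.table_a_stage")

# 5. write the staged result back; with dynamic overwrite only the changed YYYYMM
#    partitions of the base table are replaced, so no manual drop/re-add is needed
spark.table("udb.table_a_stage").write.mode("overwrite").insertInto("udb.table_a")

If you still prefer the explicit drop-and-reload route from your plan, you can instead loop over changed_parts and issue spark.sql("ALTER TABLE udb.table_a DROP IF EXISTS PARTITION (YYYYMM='201804')") (or the HiveQL equivalent) for each value before re-inserting the merged data.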