Truncate a Redshift table using JDBC from Spark 2.0.0

Asked: 2016-12-05 11:09:30

Tags: apache-spark apache-spark-sql amazon-redshift databricks

Hi, I am using Spark SQL (2.0.0) with Redshift and I want to truncate my table. I am using the spark-redshift package and I would like to know how to truncate my table. Can anyone share an example of this?

2 Answers:

Answer 0 (score: 1)

I was not able to accomplish this with Spark using the code from the spark-redshift repo listed above.

However, I was able to truncate the Redshift table using AWS Lambda with psycopg2. I then used boto3 to kick off my Spark job via an AWS Glue trigger.

The important line in the code below is cur.execute("truncate table yourschema.yourtable").

# AWS Lambda handler: truncate the Redshift table, then fire the Glue trigger
from __future__ import print_function
import psycopg2
import boto3

def lambda_handler(event, context):
    db_database = "your_redshift_db_name"
    db_user = "your_user_name"
    db_password = "your_password"
    db_port = "5439"
    db_host = "your_redshift.hostname.us-west-2.redshift.amazonaws.com"

    try:
        print("attempting to connect...")
        conn = psycopg2.connect(dbname=db_database, user=db_user, password=db_password, host=db_host, port=db_port)
        print("connected...")
        conn.autocommit = True  # TRUNCATE commits automatically in Redshift
        cur = conn.cursor()
        count_sql = "select count(pivotid) from yourschema.yourtable"
        cur.execute(count_sql)
        results = cur.fetchone()

        print("countBefore: ", results[0])
        countOfPivots = results[0]
        if countOfPivots > 0:
            cur.execute("truncate table yourschema.yourtable")
            print("truncated yourschema.yourtable")
            cur.execute(count_sql)
            results = cur.fetchone()
            print("countAfter: ", results[0])

        cur.close()
        conn.close()

        # Fire the on-demand Glue trigger to start the Spark job
        glueClient = boto3.client("glue")
        startTriggerResponse = glueClient.start_trigger(Name="your-awsglue-ondemand-trigger")
        print("startedTrigger:", startTriggerResponse["Name"])

        return results
    except Exception as e:
        print(e)
        raise e
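
For reference, a minimal sketch of invoking this Lambda on demand with boto3; the function name truncate-yourtable is a placeholder for whatever name you deploy the handler under:

import json
import boto3

# Invoke the truncate-and-trigger Lambda synchronously and print its return value
lambda_client = boto3.client("lambda")
response = lambda_client.invoke(
    FunctionName="truncate-yourtable",  # placeholder: your deployed function name
    InvocationType="RequestResponse",   # wait for the handler to finish
    Payload=json.dumps({}),             # the handler above ignores the event
)
print(json.loads(response["Payload"].read()))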

Answer 1 (score: 0)

You need to specify the write mode for the library before calling save. For example:

my_dataframe.write
   .format("com.databricks.spark.redshift")
   .option("url", "jdbc:redshift://my_cluster.qwertyuiop.eu-west-1.redshift.amazonaws.com:5439/my_database?user=my_user&password=my_password")
   .option("dbtable", "my_table")
   .option("tempdir", "s3://my-bucket")
   .option("diststyle", "KEY")
   .option("distkey", "dist_key")
   .option("sortkeyspec", "COMPOUND SORTKEY(key_1, key_2)")
   .option("extracopyoptions", "TRUNCATECOLUMNS COMPUPDATE OFF STATUPDATE OFF")
   .mode("overwrite") // "append" / "error"
   .save()
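
Note that with this library "overwrite" recreates the table rather than issuing a TRUNCATE. If the goal is to truncate the existing table and then load into it, the spark-redshift README also documents a preactions option that runs SQL before the COPY. A minimal PySpark sketch, reusing the placeholder cluster and table names from above:

# Truncate the target table first, then append the new rows (PySpark).
# All names (my_dataframe, URL, bucket, table) are placeholders from above.
(my_dataframe.write
    .format("com.databricks.spark.redshift")
    .option("url", "jdbc:redshift://my_cluster.qwertyuiop.eu-west-1.redshift.amazonaws.com:5439/my_database?user=my_user&password=my_password")
    .option("dbtable", "my_table")
    .option("tempdir", "s3://my-bucket")
    .option("preactions", "truncate table my_table")  # runs before the COPY
    .mode("append")
    .save())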