AWS Glue: java.lang.UnsupportedOperationException: CSV data source does not support binary data type

Asked: 2019-04-23 05:22:54

Tags: apache-spark pyspark databricks aws-glue

I am trying to implement an upsert with AWS Glue and the Databricks spark-redshift connector, using its preactions and postactions options. Here is the code:

# Write to Redshift via the Databricks connector; preactions run before
# the COPY from tempdir and postactions run after it.
sample_dataframe.write.format("com.databricks.spark.redshift")\
  .option("url", "jdbc:redshift://staging-db.asdf.ap-southeast-1.redshift.amazonaws.com:5439/stagingdb?user=sample&password=pwd")\
  .option("preactions", PRE_ACTION)\
  .option("postactions", POST_ACTION)\
  .option("dbtable", temporary_table)\
  .option("extracopyoptions", "region 'ap-southeast-1'")\
  .option("aws_iam_role", "arn:aws:iam::1234:role/AWSService-Role-ForRedshift-etl-s3")\
  .option("tempdir", args["TempDir"])\
  .mode("append")\
  .save()
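
PRE_ACTION and POST_ACTION are plain Redshift SQL strings that the connector runs before and after the COPY. Their actual contents are not shown in the post; a hypothetical example of the usual staging-table upsert pattern (target_table and the id join key are illustrative names, not from the post) would be:

# Hypothetical SQL: target_table and the id join key are illustrative
# names, not taken from the original post.
PRE_ACTION = "CREATE TABLE IF NOT EXISTS {tmp} (LIKE target_table);".format(tmp=temporary_table)

POST_ACTION = """
    DELETE FROM target_table USING {tmp} WHERE target_table.id = {tmp}.id;
    INSERT INTO target_table SELECT * FROM {tmp};
    DROP TABLE {tmp};
""".format(tmp=temporary_table)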

I am getting the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o90.save.
: java.lang.UnsupportedOperationException: CSV data source does not support binary data type.
at org.apache.spark.sql.execution.datasources.csv.CSVUtils$.org$apache$spark$sql$execution$datasources$csv$CSVUtils$$verifyType$1(CSVUtils.scala:127)
at org.apache.spark.sql.execution.datasources.csv.CSVUtils$$anonfun$verifySchema$1.apply(CSVUtils.scala:131)
at org.apache.spark.sql.execution.datasources.csv.CSVUtils$$anonfun$verifySchema$1.apply(CSVUtils.scala:131)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)

Perhaps I am missing something. Please help, TIA.
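
For context, the stack trace comes from Spark's CSV schema check: the connector stages the DataFrame in tempdir as CSV before the Redshift COPY, and Spark's CSV writer cannot serialize BinaryType columns. A minimal sketch of one possible workaround, assuming the offending columns are BinaryType and a base64 text representation is acceptable downstream:

from pyspark.sql import functions as F
from pyspark.sql.types import BinaryType

# Encode every BinaryType column as a base64 string so the CSV temp
# files written to S3 can represent the values.
for field in sample_dataframe.schema.fields:
    if isinstance(field.dataType, BinaryType):
        sample_dataframe = sample_dataframe.withColumn(
            field.name, F.base64(F.col(field.name))
        )

Dropping the binary columns before the write would avoid the schema check as well, if they are not needed in Redshift.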

I have also tried passing preactions and postactions as connection_options (below), which did not seem to work either:

redshift_datasink = glueContext.write_dynamic_frame_from_jdbc_conf(
    frame = sample_dyn_frame,
    catalog_connection = 'Staging',
    connection_options = connect_options,
    redshift_tmp_dir = args["TempDir"],
    transformation_ctx = "redshift_datasink")
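
The connect_options dict itself is not shown in the post. For Redshift, AWS Glue accepts preactions and postactions as connection options, so a hypothetical dict (all values illustrative) might look like:

# Hypothetical connection options; the real connect_options dict is not
# shown in the post. Keys follow AWS Glue's Redshift connection options.
connect_options = {
    "dbtable": temporary_table,
    "database": "stagingdb",
    "preactions": PRE_ACTION,
    "postactions": POST_ACTION,
    "extracopyoptions": "region 'ap-southeast-1'",
    "aws_iam_role": "arn:aws:iam::1234:role/AWSService-Role-ForRedshift-etl-s3",
}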

0 Answers:

No answers yet.