Snowflake connector error ProgrammingError 100078 (22000)

Time: 2020-01-24 05:55:40

Tags: python-3.x snowflake-cloud-data-platform

When I try to load data from S3 into Snowflake using a Python script, I get the following error:

String '$METADATA$FILENAME' is too long and would be truncated
  File '#######', line 1, character 1
  Row 1, column $METADATA$FILENAME

I am trying to store the original file name in the table. For this I used the $METADATA$FILENAME keyword. In the table, this column is defined with the full-length VARCHAR(16777216) data type. Is there any way to solve this problem?
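For reference, the documented spelling of the metadata column is METADATA$FILENAME (no leading $), and it is selected inside a transformed COPY rather than passed as a quoted string. A minimal sketch with the Python connector, where the connection parameters, MY_TABLE, @MY_STAGE, and MY_CSV_FORMAT are all placeholders:

-------------------------------------------------------------------------------
import snowflake.connector

# All connection parameters below are placeholders.
conn = snowflake.connector.connect(
    account="<account>",
    user="<user>",
    password="<password>",
    database="<database>",
    schema="PUBLIC",
    warehouse="<warehouse>",
)

# In a transformed COPY, METADATA$FILENAME resolves to the staged
# file's path for each row. MY_TABLE, @MY_STAGE, and MY_CSV_FORMAT
# are placeholder names.
conn.cursor().execute("""
    COPY INTO MY_TABLE (COL1, COL2, FILENAME)
    FROM (SELECT t.$1, t.$2, METADATA$FILENAME FROM @MY_STAGE t)
    FILE_FORMAT = (FORMAT_NAME = MY_CSV_FORMAT)
""")
-------------------------------------------------------------------------------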

1 Answer:

Answer 0: (score: 0)

Here is the Python script. Next time, please share your code as well.

-------------------------------------------------------------------------------

#!/usr/bin/env python
# coding=utf-8
import logging
from logging import getLogger

from pyspark.sql import SparkSession


spark = SparkSession.builder.appName("my_app").config('spark.sql.codegen.wholeStage', False).getOrCreate()

sc=spark.sparkContext
hadoop_conf=sc._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
hadoop_conf.set("fs.s3n.awsAccessKeyId", " ") # Fill your values here
hadoop_conf.set("fs.s3n.awsSecretAccessKey", "")  # Fill your values here

v_log = "load_to_snowflake.log"  # Placeholder log file path

logging.basicConfig(
    filename=v_log,
    level=logging.DEBUG)
logger = getLogger(__name__)

sfOptions = {
    "sfURL": "sfcsupport.snowflakecomputing.com",
    "sfAccount": "", # Fill your values here
    "sfUser": "", # Fill your values here
    "sfPassword": "", # Fill your values here
    "sfDatabase": "", # Fill your values here
    "sfSchema": "PUBLIC",
    "sfWarehouse": "", # Fill your values here
    "sfRole": "", # Fill your values here
    "parallelism": "64",
    "awsAccessKey": hadoop_conf.get("fs.s3n.awsAccessKeyId"),
    "awsSecretKey": hadoop_conf.get("fs.s3n.awsSecretAccessKey"),
    "tempdir": "s3n://<pathtofile>"  # Fill your values here
}

SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake"

df = spark.read.option("delimiter", ",").csv(
    "s3n://<pathtofile>", header=False)  # Fill your values here
df.show()
----------------------------------------------------------------------------
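The script above only reads the CSV from S3 and shows it; it never captures the file name or writes to Snowflake. A minimal sketch of those remaining steps, continuing from df and sfOptions and using Spark's input_file_name() (the target table MY_TABLE is a placeholder):

-------------------------------------------------------------------------------
from pyspark.sql.functions import input_file_name

# Record each row's source file path in a FILENAME column,
# the Spark-side counterpart of Snowflake's METADATA$FILENAME.
df = df.withColumn("FILENAME", input_file_name())

# Write the DataFrame through the Spark-Snowflake connector.
# "MY_TABLE" is a placeholder target table name.
df.write \
    .format(SNOWFLAKE_SOURCE_NAME) \
    .options(**sfOptions) \
    .option("dbtable", "MY_TABLE") \
    .mode("append") \
    .save()
-------------------------------------------------------------------------------

With the file name carried in the DataFrame itself, the load no longer depends on METADATA$FILENAME at all.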