ModuleNotFoundError when importing the PySpark Delta Lake module

Posted: 2020-06-11 14:12:48

Tags: apache-spark pyspark spark-structured-streaming delta-lake

I am running PySpark with Delta Lake, but when I try to import the delta module I get ModuleNotFoundError: No module named 'delta'. This is on a machine with no internet connection, so I had to manually download the delta-core jar from Maven and place it in the %SPARK_HOME%/jars folder.

My program runs fine and can read from and write to Delta Lake, so I'm confident I have the right jar. But when I try to import the delta module with from delta.tables import *, I get the error above.

For reference, my code is:

import os
from pyspark.sql import SparkSession
from pyspark.sql.types import TimestampType, FloatType, StructType, StructField
from pyspark.sql.functions import input_file_name
from Constants import Constants

if __name__ == "__main__":
    constants = Constants()
    spark = SparkSession.builder.master("local[*]")\
                                .appName("Delta Lake Testing")\
                                .getOrCreate()

    # have to start spark session before importing: https://docs.delta.io/latest/quick-start.html#python
    from delta.tables import *

    # set logging level to limit output
    spark.sparkContext.setLogLevel("ERROR")

    spark.conf.set("spark.sql.session.timeZone", "UTC")
    # push additional python files to the worker nodes
    base_path = os.path.abspath(os.path.dirname(__file__))
    spark.sparkContext.addPyFile(os.path.join(base_path, 'Constants.py'))

    # start pipeline
    schema = StructType([StructField("Timestamp", TimestampType(), False),\
                        StructField("ParamOne", FloatType(), False),\
                        StructField("ParamTwo", FloatType(), False),\
                        StructField("ParamThree", FloatType(), False)])

    df = spark.readStream\
               .option("header", "true")\
               .option("timestampFormat", "yyyy-MM-dd HH:mm:ss")\
               .schema(schema)\
               .csv(constants.input_path)\
               .withColumn("input_file_name", input_file_name())

    df.writeStream\
      .format("delta")\
      .outputMode("append")\
      .option("checkpointLocation", constants.checkpoint_location)\
      .start("/tmp/bronze")

    # await on stream
    sqm = spark.streams
    sqm.awaitAnyTermination()

This is using Spark v2.4.4 and Python v3.6.1, and the job is submitted with spark-submit path/to/job.py.

1 Answer:

Answer 0 (score: 2):

%pyspark
# The delta-core jar also bundles the delta Python package, and jars are
# zip-importable, so putting the jar itself on the Python path makes the
# module importable.
sc.addPyFile("**LOCATION_OF_DELTA_LAKE_JAR_FILE**")
from delta.tables import *
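
The %pyspark line above is a Zeppelin notebook directive; in a standalone job like the one in the question, the equivalent fix goes right after the session is created. A minimal sketch, assuming the manually downloaded jar is the Scala 2.11 build for Spark 2.4 (the delta-core_2.11-0.6.1.jar file name below is illustrative, not taken from the question):

import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]")\
                            .appName("Delta Lake Testing")\
                            .getOrCreate()

# The jar doubles as a zip-importable Python archive containing the delta
# package, so pointing addPyFile at it lets `from delta.tables import *`
# resolve.
jar_path = os.path.join(os.environ["SPARK_HOME"], "jars",
                        "delta-core_2.11-0.6.1.jar")  # illustrative file name
spark.sparkContext.addPyFile(jar_path)

from delta.tables import *

As a side benefit, addPyFile also ships the file to the worker nodes, the same way the question already distributes Constants.py.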