I have configured an AWS Glue dev endpoint and can connect to it successfully from the pyspark REPL shell, following https://docs.aws.amazon.com/glue/latest/dg/dev-endpoint-tutorial-repl.html
Unlike the example given in the AWS documentation, I get WARNings when the session starts, and various operations on AWS Glue DynamicFrame structures fail later on. Here is the full log from starting the session - note the warnings about spark.yarn.jars and PyGlue.zip:
Python 2.7.12 (default, Sep 1 2016, 22:14:00)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/share/aws/glue/etl/jars/glue-assembly.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/03/02 14:18:58 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/03/02 14:19:03 WARN Client: Same path resource file:/usr/share/aws/glue/etl/python/PyGlue.zip added multiple times to distributed cache.
18/03/02 14:19:13 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
Using Python version 2.7.12 (default, Sep 1 2016 22:14:00)
SparkSession available as 'spark'.
>>>
Many operations work as I expect, but I also hit some unwelcome exceptions. For example, I can load data from my Glue catalog and inspect its structure and contents, but I cannot apply a Map to it or convert it to a DataFrame. Here is the full log of my session (minus the longest error messages). The first few commands and definitions all work fine, but the last two operations fail:
>>> import sys
>>> from awsglue.transforms import *
>>> from awsglue.utils import getResolvedOptions
>>> from pyspark.context import SparkContext
>>> from awsglue.context import GlueContext
>>> from awsglue.job import Job
>>>
>>> glueContext = GlueContext(spark)
>>> # Receives a string of the format yyyy-mm-dd hh:mi:ss.nnn and returns the first 10 characters: yyyy-mm-dd
... def TruncateTimestampString(ts):
... ts = ts[:10]
... return ts
...
>>> TruncateTimestampString('2017-03-05 06:12:08.376')
'2017-03-05'
>>>
>>> # Given a record with a timestamp property returns a record with a new property, day, containing just the date portion of the timestamp string, expected to be yyyy-mm-dd.
... def TruncateTimestamp(rec):
... rec['day'] = TruncateTimestampString(rec['timestamp'])
... return rec
...
>>> # Get the history datasource - WORKS WELL BUT LOGS log4j2 ERROR
>>> datasource_history_1 = glueContext.create_dynamic_frame.from_catalog(database = "dev", table_name = "history", transformation_ctx = "datasource_history_1")
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
>>> # Tidy the history datasource - WORKS WELL
>>> history_tidied = datasource_history_1.drop_fields(['etag', 'jobmaxid', 'jobminid', 'filename']).rename_field('id', 'history_id')
>>> history_tidied.printSchema()
root
|-- jobid: string
|-- spiderid: long
|-- timestamp: string
|-- history_id: long
>>> # Trivial observation of the SparkSession objects
>>> SparkSession
<class 'pyspark.sql.session.SparkSession'>
>>> spark
<pyspark.sql.session.SparkSession object at 0x7f8668f3b650>
>>>
>>>
>>> # Apply a mapping to the tidied history datasource. FAILS
>>> history_mapped = history_tidied.map(TruncateTimestamp)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/tmp/spark-1f0341db-5de6-4008-974f-a1d194524a86/userFiles-6a67bdee-7c44-46d6-a0dc-9daa7177e7e2/PyGlue.zip/awsglue/dynamicframe.py", line 101, in map
File "/mnt/tmp/spark-1f0341db-5de6-4008-974f-a1d194524a86/userFiles-6a67bdee-7c44-46d6-a0dc-9daa7177e7e2/PyGlue.zip/awsglue/dynamicframe.py", line 105, in mapPartitionsWithIndex
File "/usr/lib/spark/python/pyspark/rdd.py", line 2419, in __init__
self._jrdd_deserializer = self.ctx.serializer
AttributeError: 'SparkSession' object has no attribute 'serializer'
>>> history_tidied.toDF()
ERROR
Huge error log and stack trace follows, longer than my console can remember. Here's how it finishes:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/tmp/spark-1f0341db-5de6-4008-974f-a1d194524a86/userFiles-6a67bdee-7c44-46d6-a0dc-9daa7177e7e2/PyGlue.zip/awsglue/dynamicframe.py", line 128, in toDF
File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/usr/lib/spark/python/pyspark/sql/utils.py", line 79, in deco
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':"
I believe I am following the instructions Amazon gives in their Dev Endpoint REPL tutorial, but with these fairly basic operations (DynamicFrame.map and DynamicFrame.toDF) failing I am working in the dark when I actually run the job (which appears to succeed, although the output of my DynamicFrame.printSchema() and DynamicFrame.show() commands does not appear in the CloudWatch logs for the run).
Does anyone know what I need to do to fix my REPL environment so that I can properly test pyspark AWS Glue scripts?
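For what it's worth, the mapping logic itself can be sanity-checked as plain Python before handing it to DynamicFrame.map - a minimal sketch, independent of Glue, where the quoted 'timestamp' and 'day' keys are assumed to match the schema printed above:

```python
# Standalone sanity check of the record-mapping logic; no Glue required.
# The 'timestamp' and 'day' keys are assumed from the printSchema output.

def truncate_timestamp_string(ts):
    """Return the first 10 characters of a 'yyyy-mm-dd hh:mi:ss.nnn' string."""
    return ts[:10]

def truncate_timestamp(rec):
    """Add a 'day' key holding just the date portion of rec['timestamp']."""
    rec['day'] = truncate_timestamp_string(rec['timestamp'])
    return rec

record = {'jobid': 'j1', 'timestamp': '2017-03-05 06:12:08.376'}
print(truncate_timestamp(record)['day'])  # 2017-03-05
```

Running the function over plain dicts like this confirms the transform is sound, which helps separate logic errors from the environment problem described above.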
Answer 0 (score: 0)
Whether you configured the dev endpoint from a Windows or Unix environment, it looks like the downloaded files were not copied to the correct location on the linux machine, or the code cannot find the path to the jars. Please make sure the files are there.
I have set up a Zeppelin notebook on my Windows machine and can connect to the Glue catalog successfully. Perhaps you could try that too - let me know if you need any help.
Answer 1 (score: 0)
AWS Support eventually got back to me about this issue. Here is their reply:
On further research I found that this is a known issue with the PySpark shell, and the Glue service team is already aware of it. The fix should be deployed soon, but there is no ETA I can share with you at the moment.
In the meantime, here is a workaround: before initializing the Glue context, you can do
>> newconf = sc._conf.set("spark.sql.catalogImplementation", "in-memory")
>> sc.stop()
>> sc = sc.getOrCreate(newconf)
and then instantiate the glueContext from that sc.
I can confirm this works for me. Here is the script I was able to run:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
# New recommendation from AWS Support 2018-03-22
newconf = sc._conf.set("spark.sql.catalogImplementation", "in-memory")
sc.stop()
sc = sc.getOrCreate(newconf)
# End AWS Support Workaround
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
datasource_history_1 = glueContext.create_dynamic_frame.from_catalog(database = "dev", table_name = "history", transformation_ctx = "datasource_history_1")
def DoNothingMap(rec):
    return rec
history_mapped = datasource_history_1.map(DoNothingMap)
history_df = history_mapped.toDF()
history_df.show()
history_df.printSchema()
Previously both the .map() and .toDF() calls would fail.
I have asked AWS Support to notify me once the issue is fixed, so that the workaround is no longer needed.