I want to generate schema definitions from an XML file so that I can build post-deployment tests for our notebooks.
I have already parsed the XML into the following string:
StructType([StructField('ItemNumber', StringType(), True),
StructField('UPC', StringType(), True),
StructField('AssignDate', DateType(), True),
StructField('AssignmentQuantity', IntegerType(), True)])
And my input data, as a table:
[Row(dataRow="'A123456', '12345678900', '12/01/2020', 89"),
Row(dataRow="'B123456','00123456789', 12/02/2018, 1002")]
Here is the code:
# create a dataframe from mock test data
def CreateMockInputData(notebook_Name, entity_Name, dataSpec):
    schema = CreateEntitySchema(notebook_Name=notebook_Name, dataSpec=dataSpec, entity_Name=entity_Name)
    print(schema)
    # parse out the data
    entityDef = NotebookEntity(notebook_Name=notebook_Name, dataSpec=dataSpec, entity_Name=entity_Name)
    data_list = entityDef.selectExpr("explode(data_row) as dataRow").collect()
    print()
    print(data_list)
    entity_data = spark.createDataFrame(data_list, schema)
    return entity_data

mock_df = CreateMockInputData(notebook_Name='Test Notebook', dataSpec=df_entityDataDefinitions,
                              entity_Name='entity_for_data')
This is the error I get:
ParseException Traceback (most recent call last)
<command-4322020421037787> in <module>()
----> 1 mock_df = CreateMockInputData(notebook_Name = 'Test Notebook', dataSpec = df_entityDataDefinitions, entity_Name = 'entity_for_data')
2 #print(mock_df)
3 mock_df.printSchema()
4 mock_df.show(10, False)
<command-4322020421037786> in CreateMockInputData(notebook_Name, entity_Name, dataSpec)
10 print()
11 print(data_list)
---> 12 entity_data = spark.createDataFrame(data_list, schema)
13 entity_data = entityData_list
14 return entity_data
/databricks/spark/python/pyspark/sql/session.py in createDataFrame(self, data, schema, samplingRatio, verifySchema)
735
736 if isinstance(schema, basestring):
--> 737 schema = _parse_datatype_string(schema)
738 elif isinstance(schema, (list, tuple)):
739 # Must re-encode any unicode strings to be consistent with StructField names
It is not clear to me how, or what, needs to be "re-encoded" in order for the schema to work with my data.
Any suggestions would be welcome.
Answer 0 (score: 0)
To convert the string that defines the schema into an actual StructType, I found that you need to execute that string with an eval statement.
For example:
schema_str = "StructType([StructField('ItemNumber', StringType(), True),
StructField('UPC', StringType(), True),
StructField('AssignDate', DateType(), True),
StructField('AssignmentQuantity', IntegerType(), True)]"
entity_data = spark.createDataFrame(data_list,eval(schema))
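One caveat: each dataRow in the mock data above is a single comma-separated string, while the evaluated schema expects four typed columns, so createDataFrame can still reject the rows after the eval step. Below is a minimal sketch of that remaining step under the assumption that the rows keep the exact format shown in the question; split_data_row and the '%m/%d/%Y' date format are illustrative choices of mine, not part of the original code. It splits each string into fields, converts them to the schema's types, and only then builds the DataFrame.

from datetime import datetime

schema = eval(schema_str)  # turn the schema string into a real StructType

def split_data_row(raw):
    # strip whitespace and single quotes from each comma-separated field
    parts = [p.strip().strip("'") for p in raw.split(',')]
    # field order must match the schema: ItemNumber, UPC, AssignDate, AssignmentQuantity
    return (parts[0],
            parts[1],
            datetime.strptime(parts[2], '%m/%d/%Y').date(),
            int(parts[3]))

rows = [split_data_row(r.dataRow) for r in data_list]
entity_data = spark.createDataFrame(rows, schema)
entity_data.show(10, False)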