I am reading a parquet file with the following schema:
df.printSchema()
root
|-- time: integer (nullable = true)
|-- amountRange: integer (nullable = true)
|-- label: integer (nullable = true)
|-- pcaVector: vector (nullable = true)
Now I want to test PySpark Structured Streaming using the same parquet file. The closest schema I could build uses ArrayType, but it does not work:
schema = StructType(
[
StructField('time', IntegerType()),
StructField('amountRange', IntegerType()),
StructField('label', IntegerType()),
StructField('pcaVector', ArrayType(FloatType()))
]
)
df_stream = spark.readStream\
.format("parquet")\
.schema(schema)\
.load("/home/user/test_arch/data/fraud/")
Caused by: java.lang.ClassCastException: Expected instance of group converter but got "org.apache.spark.sql.execution.datasources.parquet.ParquetPrimitiveConverter"
at org.apache.parquet.io.api.Converter.asGroupConverter(Converter.java:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter$RepeatedGroupConverter.<init>(ParquetRowConverter.scala:659)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.org$apache$spark$sql$execution$datasources$parquet$ParquetRowConverter$$newConverter(ParquetRowConverter.scala:308)
How can I create a PySpark StructType schema with a vector type, which seems to exist only in Scala?
Answer (score: 3)
The type you need is VectorUDT, which PySpark exposes in pyspark.ml.linalg:
from pyspark.ml.linalg import VectorUDT
StructField('pcaVector', VectorUDT())
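Putting it together, a minimal sketch of the full streaming read with the corrected schema (it reuses the path, column names, and the existing spark session from the question; nothing else is assumed):
from pyspark.sql.types import StructType, StructField, IntegerType
from pyspark.ml.linalg import VectorUDT

# Same schema as in the question, but pcaVector is declared as VectorUDT
# so it matches how Spark ML serialized the column, instead of ArrayType.
schema = StructType(
    [
        StructField('time', IntegerType()),
        StructField('amountRange', IntegerType()),
        StructField('label', IntegerType()),
        StructField('pcaVector', VectorUDT())
    ]
)

df_stream = spark.readStream\
    .format("parquet")\
    .schema(schema)\
    .load("/home/user/test_arch/data/fraud/")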