Convert a Spark Structured Streaming DataFrame to pandas using pandas_udf

Posted: 2019-05-20 08:16:40

Tags: apache-spark pyspark spark-streaming

I need to read CSV files as a stream and then convert them to a pandas DataFrame.

This is what I have done so far:


    DataShema = StructType([ StructField("TimeStamp", LongType(), True), \
    StructField("Count", IntegerType(), True), \
    StructField("Reading", FloatType(), True) ])

    group_columns = ['TimeStamp','Count','Reading']

    @pandas_udf(DataShema, PandasUDFType.GROUPED_MAP)
    def get_pdf(pdf):
        return pd.DataFrame([pdf[group_columns]],columns=[group_columns])

    # getting Surge data from the files
    SrgDF = spark \
        .readStream \
        .schema(DataShema) \
        .csv("ProcessdedData/SurgeAcc")

    mydf = SrgDF.groupby(group_columns).apply(get_pdf)

    qrySrg = SrgDF \
        .writeStream \
        .format("console") \
        .start() \
        .awaitTermination()

From another source (Convert Spark Structure Streaming DataFrames to Pandas DataFrame) I understand that directly converting a structured streaming DataFrame to pandas is not possible, and that pandas_udf seems to be the right approach, but I cannot figure out exactly how to implement it. I need the pandas DataFrame to be passed into the function.

EDIT

When I run the code (after changing the query to use mydf instead of SrgDF), I get the following error:

    pyspark.sql.utils.StreamingQueryException: 'Writing job aborted.\n=== Streaming Query ===\nIdentifier: [id = 18a15e9e-9762-4464-b6d1-cb2db8d0ac41, runId = e3da131e-00d1-4fed-82fc-65bf377c3f99]\nCurrent Committed Offsets: {}\nCurrent Available Offsets: {FileStreamSource[file:/home/mls5/Work_Research/Codes/Misc/Python/MachineLearning_ArtificialIntelligence/00_Examples/01_ApacheSpark/01_ComfortApp/ProcessdedData/SurgeAcc]: {"logOffset":0}}\n\nCurrent State: ACTIVE\nThread State: RUNNABLE\n\nLogical Plan:\nFlatMapGroupsInPandas [Count#1], get_pdf(TimeStamp#0L, Count#1, Reading#2), [TimeStamp#10L, Count#11, Reading#12]\n+- Project [Count#1, TimeStamp#0L, Count#1, Reading#2]\n +- StreamingExecutionRelation FileStreamSource[file:/home/mls5/Work_Research/Codes/Misc/Python/MachineLearning_ArtificialIntelligence/00_Examples/01_ApacheSpark/01_ComfortApp/ProcessdedData/SurgeAcc], [TimeStamp#0L, Count#1, Reading#2]\n'
    19/05/20 18:32:29 ERROR ReceiverTracker: Deregistered receiver for stream 0: Stopped by driver
    /usr/local/lib/python3.6/dist-packages/pyarrow/__init__.py:152: UserWarning: pyarrow.open_stream is deprecated, please use pyarrow.ipc.open_stream
      warnings.warn("pyarrow.open_stream is deprecated, please use "

EDIT-2

Here is the code that reproduces the error:

    import sys

    from pyspark import SparkContext
    from pyspark.sql import Row, SparkSession, SQLContext
    from pyspark.sql.functions import explode
    from pyspark.sql.functions import split

    from pyspark.streaming import StreamingContext

    from pyspark.sql.types import *

    import pandas as pd
    from pyspark.sql.functions import pandas_udf, PandasUDFType
    import pyarrow as pa

    import glob

    #####################################################################################

    if __name__ == '__main__' :

        spark = SparkSession \
            .builder \
            .appName("RealTimeIMUAnalysis") \
            .getOrCreate()

        spark.conf.set("spark.sql.execution.arrow.enabled", "true")

        # reduce verbosity
        sc = spark.sparkContext
        sc.setLogLevel("ERROR")

        ##############################################################################

        # using the saved files to do the Analysis
        DataShema = StructType([ StructField("TimeStamp", LongType(), True), \
        StructField("Count", IntegerType(), True), \
        StructField("Reading", FloatType(), True) ])

        group_columns = ['TimeStamp','Count','Reading']

        @pandas_udf(DataShema, PandasUDFType.GROUPED_MAP)
        def get_pdf(pdf):
            return pd.DataFrame([pdf[group_columns]],columns=[group_columns])

        # getting Surge data from the files
        SrgDF = spark \
            .readStream \
            .schema(DataShema) \
            .csv("SurgeAcc")

        mydf = SrgDF.groupby('Count').apply(get_pdf)
        #print(mydf)

        qrySrg = mydf \
            .writeStream \
            .format("console") \
            .start() \
            .awaitTermination()

To run it, you need to create a folder named SurgeAcc in the same directory as the code, and create a CSV file inside it with the following format:

    TimeStamp,Count,Reading
    1557011317299,45148,-0.015494
    1557011317299,45153,-0.015963
    1557011319511,45201,-0.015494
    1557011319511,45221,-0.015494
    1557011315134,45092,-0.015494
    1557011315135,45107,-0.014085
    1557011317299,45158,-0.015963
    1557011317299,45163,-0.015494
    1557011317299,45168,-0.015024

1 Answer:

Answer 0 (score: 0)

The pandas DataFrame returned by your pandas_udf does not match the specified schema: pd.DataFrame([pdf[group_columns]], columns=[group_columns]) does not produce the three typed columns (TimeStamp, Count, Reading) that DataShema declares.

Note that the input to a pandas_udf is a pandas DataFrame, and it must return a pandas DataFrame as well.

You can use any pandas function inside the pandas_udf; just make sure that the return schema you declare in @pandas_udf (DataShema here) matches the actual output of the function.
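For the question's case, a minimal sketch of a conforming UDF (assuming the DataShema and group_columns definitions from EDIT-2) simply returns the input frame, which already carries the declared columns:

    from pyspark.sql.functions import pandas_udf, PandasUDFType

    # minimal sketch, assuming DataShema and group_columns as defined above:
    # the incoming pdf is already a pandas DataFrame holding one group, so
    # returning it (or a transformed copy that keeps the declared column
    # names and dtypes) satisfies the schema
    @pandas_udf(DataShema, PandasUDFType.GROUPED_MAP)
    def get_pdf(pdf):
        return pdf[group_columns]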

For example, arbitrary per-group pandas logic like the following (the answerer's own illustration; df, ProdBatchNo, Volume and Batch_Quantity are unrelated to the question's columns) can run inside such a UDF:

    # per-group allocation on a plain pandas DataFrame df:
    # spread each batch's quantity proportionally to each row's volume
    for _, sub in df.groupby('ProdBatchNo'):
        pct = sub.Volume / sub.Volume.sum() * 100
        df.loc[sub.index, '%_Vol_allocated'] = pct
        df.loc[sub.index, 'Quantity'] = sub.Batch_Quantity * pct / 100
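Either way, the only hard requirement is that the column names and dtypes of the returned frame line up with the schema passed to @pandas_udf; with that fixed, the mydf console query from EDIT-2 should no longer abort with the schema-mismatch error.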