How to use forEachPartition on a pyspark dataframe?

Asked: 2020-09-09 07:10:55

Tags: pyspark rdd

I am trying to use the forEachPartition() method with pyspark on an RDD that has 8 partitions. My custom function tries to generate a string output for a given string input. Here is the code -

from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
import pandas as pd
import datetime

def compute_sentiment_score(text):
    client = language.LanguageServiceClient()
    document = types.Document(content=text,type=enums.Document.Type.PLAIN_TEXT, language='en')
    sentiment = client.analyze_sentiment(document=document).document_sentiment
    return str(sentiment.score)

def compute_sentiment_magnitude(text):
    client = language.LanguageServiceClient()
    document = types.Document(content=text,type=enums.Document.Type.PLAIN_TEXT, language='en')
    sentiment = client.analyze_sentiment(document=document).document_sentiment
    return str(sentiment.magnitude)

import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="/path-to-file.json"

imdb_reviews = pd.read_csv('imdb_reviews.csv', header=None, names=['input1', 'input2'], encoding= "ISO-8859-1")

imdb_reviews.head()

    input1                                         input2
0   first think another Disney movie, might good, ...   1
1   Put aside Dr. House repeat missed, Desperate H...   0
2   big fan Stephen King's work, film made even gr...   1
3   watched horrid thing TV. Needless say one movi...   0
4   truly enjoyed film. acting terrific plot. Jeff...   1


spark_imdb_reviews = spark.createDataFrame(imdb_reviews) # create spark dataframe


spark_imdb_reviews.printSchema()
root
 |-- input1: string (nullable = true)
 |-- input2: long (nullable = true)

Here are my custom functions -

def compute_sentiment_score(text):
    client = language.LanguageServiceClient()
    document = types.Document(content=text,type=enums.Document.Type.PLAIN_TEXT, language='en')
    sentiment = client.analyze_sentiment(document=document).document_sentiment
    return str(sentiment.score)

def compute_sentiment_magnitude(text):
    client = language.LanguageServiceClient()
    document = types.Document(content=text,type=enums.Document.Type.PLAIN_TEXT, language='en')
    sentiment = client.analyze_sentiment(document=document).document_sentiment
    return str(sentiment.magnitude)

And here is how I am trying to use the forEachPartition() method -

create_rdd = spark_imdb_reviews.select("input1").rdd # create RDD
print(create_rdd.getNumPartitions()) # print the partitions
print(create_rdd.take(1)) # display data
new_rdd = create_rdd.foreachPartition(compute_sentiment_score) # compute score

Which gives this output and an error -

8
[Row(input1="first think another Disney movie, might good, it's kids movie. watch it, can't help enjoy it. ages love movie. first saw movie 10 8 years later still love it! Danny Glover superb could play part better. Christopher Lloyd hilarious perfect part. Tony Danza believable Mel Clark. can't help, enjoy movie! give 10/10!")]

File "<ipython-input-106-e3fd65ce75cc>", line 3, in compute_sentiment_score
TypeError: <itertools.chain object at 0x11ab7f198> has type itertools.chain, but expected one of: bytes, unicode

1 Answer:

Answer 0: (score: 1)

There are two similar functions: foreachPartition and mapPartitions.

Both functions expect another function as a parameter (here compute_sentiment_score). That function receives the contents of one partition passed in the form of an iterator. The text parameter in the question is therefore actually an iterator and can be consumed as such inside compute_sentiment_score.
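
As a minimal sketch of that shape (handle_partition is a hypothetical name, and the dataframe from the question is assumed), the function passed in simply loops over the iterator it receives:

def handle_partition(rows):
    # rows is an iterator over the Rows of one partition, not a single string
    for row in rows:
        print(row.input1)  # each element is a Row, so the column is accessed by name

spark_imdb_reviews.select("input1").rdd.foreachPartition(handle_partition)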

The difference between foreachPartition and mapPartitions is that foreachPartition is a Spark action while mapPartitions is a transformation. This means that the code called by foreachPartition is executed immediately and the RDD remains unchanged, while mapPartitions can be used to create a new RDD. In order to store the computed sentiment scores, mapPartitions should therefore be used:

def compute_sentiment_score(itr_text):
    # set up the things that are expensive and should be prepared only once per partition
    client = language.LanguageServiceClient()

    # run the loop for each row of the partition
    for text in itr_text:
        document = types.Document(content=text.input1, type=enums.Document.Type.PLAIN_TEXT, language='en')
        sentiment = client.analyze_sentiment(document=document).document_sentiment
        yield (text.input1, sentiment.score)

df_with_score = spark_imdb_reviews.rdd.mapPartitions(compute_sentiment_score)
df_with_score.foreach(print)

In this example, client = language.LanguageServiceClient() is called once per partition. You may have to reduce the number of partitions, for example with coalesce.
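
A rough sketch of that, reusing the names from above (the target of 4 partitions is only an illustrative value):

# fewer partitions mean fewer LanguageServiceClient() instances are created
smaller_rdd = spark_imdb_reviews.rdd.coalesce(4)
df_with_score = smaller_rdd.mapPartitions(compute_sentiment_score)
df_with_score.foreach(print)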