Removing URLs from tweets - UnicodeEncodeError: 'ascii' codec can't encode character

Date: 2017-04-12 14:09:22

Tags: python apache-spark pyspark

I'm trying to remove URLs from a tweet dataset using pyspark, but I get the following error:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 58: ordinal not in range(128)
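
From what I can tell this comes from Python 2 (assumed here, given the unicode() calls below): calling str() on a unicode value implicitly encodes it with the ascii codec. A minimal reproduction outside Spark:

>>> str(u'n\xe3o')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 1: ordinal not in range(128)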

Importing the dataframe from a csv file:

tweetImport=spark.read.format('com.databricks.spark.csv')\
                    .option('delimiter', ';')\
                    .option('header', 'true')\
                    .option('charset', 'utf-8')\
                    .load('./output_got.csv')

Removing the URLs from the tweets:

import re

from pyspark.sql.types import StringType
from pyspark.sql.functions import udf, lower

normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)",
              ":url:", str(text).encode('ascii', 'ignore')),
              StringType())

tweetsNormalized = tweetImport.select(normalizeTextUDF(
              lower(tweetImport.text)).alias('text'))
tweetsNormalized.show()

What I've already tried:

normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)",
              ":url:", str(text).encode('utf-8')),
              StringType())

normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)",
              ":url:", unicode(str(text), 'utf-8')),
              StringType())

Neither worked.

------------ EDIT ------------

Traceback:

Py4JJavaError: An error occurred while calling o581.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task
0.0 in stage 10.0 (TID 10, localhost, executor driver): org.apache.spark.api.python.PythonException:
Traceback (most recent call last):
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 174, in main
    process()
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 169, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 106, in <lambda>
    func = lambda _, it: map(mapper, it)
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 92, in <lambda>
    mapper = lambda a: udf(*a)
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 70, in <lambda>
    return lambda *a: f(*a)
  File "<stdin>", line 3, in <lambda>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 58: ordinal not in range(128)

2 Answers:

Answer 0 (score: 1)

I found a way to do what I needed by first stripping out the punctuation, using the following function:

import string
import unicodedata
from pyspark.sql.functions import *

def normalizeData(text):
    # map every punctuation character to a space
    replace_punctuation = string.maketrans(string.punctuation, ' '*len(string.punctuation))
    # decompose accented characters, then drop everything non-ASCII
    nfkd_form = unicodedata.normalize('NFKD', unicode(text))
    dataContent = nfkd_form.encode('ASCII', 'ignore').translate(replace_punctuation)
    # collapse runs of whitespace into single spaces
    dataContentSingleLine = ' '.join(dataContent.split())
    return dataContentSingleLine

udfNormalizeData=udf(lambda text: normalizeData(text))
tweetsNorm=tweetImport.select(tweetImport.date,udfNormalizeData(lower(tweetImport.text)).alias('text'))
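
A quick sanity check of the helper in a plain Python 2 shell (the sample tweet is just for illustration):

>>> normalizeData(u'n\xe3o! http://t.co/xyz')
'nao http t co xyz'

Note this drops accents and reduces URLs to bare words rather than replacing them with a :url: token, which was enough for what I needed.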

Answer 1 (score: 0)

Try decoding the text first:

str(text).decode('utf-8-sig')

and then encoding it:

str(text).encode('utf-8')
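
Wired into the original UDF, that would look roughly like this (an untested sketch; it assumes text arrives as a UTF-8 byte string, possibly with a BOM, which is what the utf-8-sig codec strips):

import re
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf, lower

# decode bytes to unicode before the regex runs, re-encode afterwards
normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)", ":url:",
              str(text).decode('utf-8-sig')).encode('utf-8'),
              StringType())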