"Invalid syntax" error when reading data from a text file with pyspark

Asked: 2019-07-29 05:40:34

Tags: pyspark syntax-error datareader

I am trying to read a text file with pyspark. The data in the file is comma-separated.

I have tried reading the data using sqlContext.

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *

sc = SparkContext._active_spark_context

filePath = './data_files/data.txt'

sqlContext = SQLContext(sc)

print(fileData)
schema = StructType([StructField('ID', IntegerType(), False),
                     StructField('Name', StringType(), False),
                     StructField('Project', StringType(), False),
                     StructField('Location', StringType(), False)])
print(schema)

fileRdd = sc.textFile(fileData).map(_.split(",")).map{x => org.apache.spark.sql.Row(x:_*)}
sqlDf = sqlContext.createDataFrame(fileRdd,schema)
sqlDf.show()

I get the following error.


File "<stdin>", line 1
    fileRdd = sc.textFile(fileData).map(_.split(",")).map{x => org.apache.spark.sql.Row(x:_*)}
                                                          ^
SyntaxError: invalid syntax

1 Answer:

Answer 0 (score: 0):

The failing line mixes Scala into Python: _.split(",") and map{x => ...} are Scala syntax, and the { after .map is what the Python interpreter rejects with a SyntaxError. In PySpark, the functions passed to map must be Python lambdas. I tried the code below and it works fine.

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *


# Create a local SparkContext for the application.
sc = SparkContext("local", "first app")
sqlContext = SQLContext(sc)

filePath = "./data_files/data.txt"

# Load a text file and convert each line to a Row.
lines = sc.textFile(filePath)
parts = lines.map(lambda l: l.split(","))
# Each line is converted to a tuple.
people = parts.map(lambda p: (p[0].strip(), p[1], p[2], p[3]))

# The schema is encoded in a string.
schemaString = "ID Name Project Location"

fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)

schemaPeople = sqlContext.createDataFrame(people, schema)
schemaPeople.show()
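
For reference, on Spark 2.x and later the same result can be obtained without the RDD detour: the DataFrameReader can load the comma-separated file directly and apply the typed schema from the question (so ID comes back as an integer rather than a string). A minimal sketch, assuming the same file layout:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

spark = SparkSession.builder.appName("first app").getOrCreate()

# Typed schema matching the one in the question: ID is parsed as an integer.
schema = StructType([StructField('ID', IntegerType(), False),
                     StructField('Name', StringType(), False),
                     StructField('Project', StringType(), False),
                     StructField('Location', StringType(), False)])

# Read the comma-separated file straight into a DataFrame; each line becomes a Row.
df = spark.read.csv("./data_files/data.txt", schema=schema)
df.show()

The reader also accepts options such as ignoreLeadingWhiteSpace=True, which can replace the manual strip() on the first column in the code above.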