I am trying to read a CSV file with PySpark. The CSV file contains some meta information as well as data columns, and the rows have different column counts and structures.
Excel reads this file without trouble. I would like to define a custom schema in Spark to read it. Here is a sample:
HEADER_TAG\tHEADER_VALUE
FORMAT\t2.00
NUMBER_PASSES\t0001
"Time"\t"Name"\t"Country"\t"City"\t"Street"\t"Phone1"\t"Phone2"
0.49\tName1\tUSA\tNewYork\t5th Avenue\t123456\t+001236273
0.5\tName2\tUSA\tWashington\t524 Street\t222222\t+0012222
0.62\tName3\tGermany\tBerlin\tLinden Strasse\t3434343\t+491343434
NUM_DATA_ROWS\t3
NUM_DATA_COLUMNS\t7
START_TIME_FORMAT\tMM/dd/yyyy HH:mm:ss
START_TIME\t06/04/2019 13:04:23
END_HEADER
Without a predefined schema, only 2 columns are read:
df_static = spark.read.options(header='false', inferSchema='true', multiLine=True, delimiter="\t", mode="PERMISSIVE").csv("/FileStore/111.txt")
root
|-- _c0: string (nullable = true)
|-- _c1: string (nullable = true)
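One way around the mixed layout (a minimal sketch, not from the answers below; the 7-column count comes from the `NUM_DATA_COLUMNS` meta row in the sample) is to read the file as plain text first and keep only the lines that split into exactly 7 tab-separated fields, i.e. the actual data rows:

```python
# Keep only rows with exactly NUM_COLS tab-separated fields and skip the
# quoted header row; meta rows like FORMAT\t2.00 have only 2 fields.
NUM_COLS = 7  # from the NUM_DATA_COLUMNS meta row

def is_data_row(line):
    fields = line.rstrip("\n").split("\t")
    return len(fields) == NUM_COLS and not fields[0].startswith('"')

sample = [
    "HEADER_TAG\tHEADER_VALUE",
    "FORMAT\t2.00",
    '"Time"\t"Name"\t"Country"\t"City"\t"Street"\t"Phone1"\t"Phone2"',
    "0.49\tName1\tUSA\tNewYork\t5th Avenue\t123456\t+001236273",
    "NUM_DATA_ROWS\t3",
]
data_rows = [line.split("\t") for line in sample if is_data_row(line)]
print(data_rows)  # only the Name1 row survives
```

In Spark the same predicate can be applied before parsing, e.g. `spark.sparkContext.textFile("/FileStore/111.txt").filter(is_data_row)`, and the surviving rows converted to a DataFrame.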
Answer 0 (score: 0)
Define a custom schema:
from pyspark.sql.types import *

# Define the fields to match your data; the sample has 7 data columns,
# and the "Time" values (e.g. 0.49) are numeric rather than timestamps.
user_schema = StructType([
    StructField("Time", DoubleType(), True),
    StructField("Name", StringType(), True),
    StructField("Country", StringType(), True),
    StructField("City", StringType(), True),
    StructField("Street", StringType(), True),
    StructField("Phone1", StringType(), True),
    StructField("Phone2", StringType(), True),
])
Reference: https://spark.apache.org/docs/2.1.2/api/python/_modules/pyspark/sql/types.html
df_static = spark.read.schema(user_schema).options(header='false', multiLine=True, delimiter="\t", mode="PERMISSIVE").csv("/FileStore/111.txt")
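The column names do not have to be typed out by hand: they can be derived from the file's own quoted header row (a sketch assuming the header line shown in the sample):

```python
# Parse the quoted, tab-separated header row into plain column names.
header_line = '"Time"\t"Name"\t"Country"\t"City"\t"Street"\t"Phone1"\t"Phone2"'
col_names = [c.strip('"') for c in header_line.split("\t")]
print(col_names)
# These names can then drive the schema definition, e.g.
#   StructType([StructField(n, StringType(), True) for n in col_names])
```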
Answer 1 (score: -1)
I have multiple schemas, as below:
user_schema1 = StructType([
    StructField("time", TimestampType(), True),
    StructField("name", StringType(), True),
    StructField("Country", StringType(), True),
    ...
])
user_schema2 = StructType([
    ...
    StructField("Phone1", StringType(), True),
    StructField("Phone2", StringType(), True),
])
df_static = spark.read.schema(user_schema).options(header='false', multiLine=True, delimiter="\t", mode="PERMISSIVE").csv("/FileStore/111.txt")
How can I pass the schema name dynamically here (user_schema1 or user_schema2)? Kindly provide me the solution.
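One way to pick a schema dynamically, as Answer 1 asks, is a plain dict lookup keyed by name. A minimal sketch; the string values below are illustrative stand-ins so it runs without a Spark session, but in real code they would be the StructType objects user_schema1 and user_schema2 defined above:

```python
# Map schema names to schemas; in real code the values are StructTypes.
schemas = {
    "schema1": "user_schema1",  # stand-in for the StructType user_schema1
    "schema2": "user_schema2",  # stand-in for the StructType user_schema2
}

def pick_schema(name):
    # In real code: spark.read.schema(schemas[name]).options(...).csv(path)
    return schemas[name]

print(pick_schema("schema2"))
```

The read then becomes `spark.read.schema(pick_schema(name)).options(...).csv(...)`, with `name` supplied at runtime.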