The code below runs and creates a Spark dataframe from a text file. However, I am trying to use the header option to take the first line as the header, and for some reason it does not seem to be happening. I can't see why! It must be something silly, but I can't work it out.
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.master("local").appName("Word Count") \
...     .config("spark.some.config.option", "some-value") \
...     .getOrCreate()
>>> df = spark.read.option("header", "true") \
...     .option("delimiter", ",") \
...     .option("inferSchema", "true") \
...     .text("StockData/ETFs/aadr.us.txt")
>>> df.take(3)
returns the following:
[Row(value=u'Date,Open,High,Low,Close,Volume,OpenInt'),
 Row(value=u'2010-07-21,24.333,24.333,23.946,23.946,43321,0'),
 Row(value=u'2010-07-22,24.644,24.644,24.362,24.487,18031,0')]
>>> df.columns
returns the following:
['value']
Answer 0 (score: 3)
Problem
The problem is that you are using the .text API instead of .csv or .load. If you read the .text API documentation, it says:
def text(self, paths):
    """Loads text files and returns a :class:`DataFrame` whose schema starts
    with a string column named "value", and followed by partitioned columns
    if there are any.

    Each line in the text file is a new row in the resulting DataFrame.

    :param paths: string, or list of strings, for input path(s).

    >>> df = spark.read.text('python/test_support/sql/text-test.txt')
    >>> df.collect()
    [Row(value=u'hello'), Row(value=u'this')]
    """
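For comparison, here is a minimal sketch of what you are left with if you stay with .text: a single string column that you would have to split yourself (the split logic below is just for illustration):

from pyspark.sql.functions import split, col

# .text gives one string column named "value" per line of the file.
raw = spark.read.text("StockData/ETFs/aadr.us.txt")

# Splitting on commas yields an array column, but the header line is still
# mixed in with the data rows -- which is why .csv is the right tool here.
parts = raw.select(split(col("value"), ",").alias("fields"))
parts.show(3, truncate=False)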
Solution using .csv
Change the .text function call to .csv and you should be fine:
df = spark.read.option("header", "true") \
.option("delimiter", ",") \
.option("inferSchema", "true") \
.csv("StockData/ETFs/aadr.us.txt")
df.show(2, truncate=False)
This should give you:
+-------------------+------+------+------+------+------+-------+
|Date |Open |High |Low |Close |Volume|OpenInt|
+-------------------+------+------+------+------+------+-------+
|2010-07-21 00:00:00|24.333|24.333|23.946|23.946|43321 |0 |
|2010-07-22 00:00:00|24.644|24.644|24.362|24.487|18031 |0 |
+-------------------+------+------+------+------+------+-------+
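Because inferSchema is enabled, Spark samples the file and picks column types for you. You can verify what it inferred with printSchema(); given the output above it should look roughly like this (exact types can vary by Spark version and data):

df.printSchema()
# root
#  |-- Date: timestamp (nullable = true)
#  |-- Open: double (nullable = true)
#  |-- High: double (nullable = true)
#  |-- Low: double (nullable = true)
#  |-- Close: double (nullable = true)
#  |-- Volume: integer (nullable = true)
#  |-- OpenInt: integer (nullable = true)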
Solution using .load
If no format option is defined, .load assumes the file is in Parquet format, so you also need to specify a format option:
df = spark.read\
.format("com.databricks.spark.csv")\
.option("header", "true") \
.option("delimiter", ",") \
.option("inferSchema", "true") \
.load("StockData/ETFs/aadr.us.txt")
df.show(2, truncate=False)
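Note that com.databricks.spark.csv is the name of the external package from the Spark 1.x era; since Spark 2.0 the CSV reader is built in, so the short format name is enough:

df = spark.read \
    .format("csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .load("StockData/ETFs/aadr.us.txt")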
I hope the answer is helpful.
Answer 1 (score: 0)
Try the following:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('CaseStudy').getOrCreate()
df = spark.read.format("csv") \
    .option("header", "true") \
    .option("delimiter", ",") \
    .option("inferSchema", "true") \
    .load("file name")
df.show()
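If you would rather avoid the extra pass over the data that inferSchema can trigger, one alternative is to declare the schema up front. A minimal sketch, with the column names and types assumed from the question's data:

from pyspark.sql.types import (StructType, StructField, DateType,
                               DoubleType, IntegerType)

# Explicit schema: no inference pass, and the types are deterministic.
# Names/types are assumed from the aadr.us.txt sample in the question.
schema = StructType([
    StructField("Date", DateType(), True),
    StructField("Open", DoubleType(), True),
    StructField("High", DoubleType(), True),
    StructField("Low", DoubleType(), True),
    StructField("Close", DoubleType(), True),
    StructField("Volume", IntegerType(), True),
    StructField("OpenInt", IntegerType(), True),
])

df = spark.read.format("csv") \
    .option("header", "true") \
    .schema(schema) \
    .load("StockData/ETFs/aadr.us.txt")
df.show(2, truncate=False)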