I am trying to read data from HDFS on an AWS EC2 cluster (7 nodes) from a Jupyter Notebook. The cluster runs HDP 2.4, and my code is below. The table has several million rows, but the code returns no rows. "ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com" is the server (the ambari-server).
from pyspark.sql import HiveContext  # HiveContext is what is used below, not SQLContext
sqlContext = HiveContext(sc)
# Read the CSV from HDFS via the spark-csv package, using the first line as the header
demography = sqlContext.read.load("hdfs://ec2-xx-xx-xxx-xx.compute-1.amazonaws.com:8020/tmp/FAERS/demography_2012q4_2016q1_duplicates_removed.csv", format="com.databricks.spark.csv", header="true", inferSchema="true")
demography.printSchema()
demography.cache()
print demography.count()
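For reference, here is the same read written with the option-based spark-csv API (a minimal sketch using the same placeholder hostname and path; as far as I know it should behave identically to the load() call above):

demography = (sqlContext.read
              .format("com.databricks.spark.csv")
              .option("header", "true")
              .option("inferSchema", "true")
              .load("hdfs://ec2-xx-xx-xxx-xx.compute-1.amazonaws.com:8020/tmp/FAERS/demography_2012q4_2016q1_duplicates_removed.csv"))
print demography.count()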
However, with sc.textFile I do get the correct row count:
data = sc.textFile("hdfs://ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com:8020/tmp/FAERS/demography_2012q4_2016q1_duplicates_removed.csv")
schema = data.map(lambda x: x.split(",")).first()  # column names taken from the header row
header = data.first()                              # the raw header line
data = data.filter(lambda x: x != header)          # drop the header
data = data.map(lambda x: x.split(","))            # split each line into fields
data.count()
3641865
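Since the sc.textFile path works, one workaround I am considering (a sketch built on the RDD code above, not yet verified on the cluster) is to construct the DataFrame directly from that RDD, using the header fields as column names; all columns would then be inferred from the string data rather than by spark-csv:

# data is the RDD of split fields, schema is the list of header column names from above
demography_df = sqlContext.createDataFrame(data, schema)
demography_df.printSchema()
print demography_df.count()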