Why does PySpark SQL return no result?

Date: 2015-08-27 18:14:27

Tags: python apache-spark apache-spark-sql pyspark pyspark-sql

I created a Spark RDD table that I am trying to query, but the result is not the expected value, and I don't know what went wrong.

In [8]:people.take(15)
Out[8]:
[Row(num1=u'27477.23', num2=u'28759.862564'),
 Row(num1=u'14595.27', num2=u'4753.822798'),
 Row(num1=u'16799.17', num2=u'535.51891148'),
 Row(num1=u'171.85602', num2=u'905.14'),
 Row(num1=u'878488.70139', num2=u'1064731.4136'),
 Row(num1=u'1014.59748', num2=u'1105.91'),
 Row(num1=u'184.53171', num2=u'2415.61'),
 Row(num1=u'28113.931963', num2=u'71011.376036'),
 Row(num1=u'1471.75', num2=u'38.0268375'),
 Row(num1=u'33645.52', num2=u'15341.160558'),
 Row(num1=u'5464.95822', num2=u'14457.08'),
 Row(num1=u'753.58258673', num2=u'3243.75'),
 Row(num1=u'26469.395374', num2=u'38398.135846'),
 Row(num1=u'4709.5768681', num2=u'1554.61'),
 Row(num1=u'1593.1114983', num2=u'2786.4538546')]
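
For reference, an RDD of Rows like this can be built directly. A minimal, hypothetical sketch reproducing the first two sample rows above (it assumes a running SparkContext named sc, as in the pyspark shell):

from pyspark.sql import Row

# Hypothetical data, copied from the sample output above
people = sc.parallelize([
    Row(num1=u'27477.23', num2=u'28759.862564'),
    Row(num1=u'14595.27', num2=u'4753.822798'),
])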

The schema is encoded as a string.

In [9]:
schemaString = "num1 num2"
In [10]:

# Imports needed for the schema types (found in pyspark.sql.types since Spark 1.3)
from pyspark.sql.types import StructType, StructField, StringType

fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)
In [11]:

# Apply the schema to the RDD
schemaPeople = sqlContext.applySchema(people, schema)
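
Note: applySchema was deprecated in Spark 1.3 in favor of createDataFrame. An equivalent sketch, assuming the same people RDD and schema as above:

# Spark 1.3+ equivalent of applySchema
schemaPeople = sqlContext.createDataFrame(people, schema)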

Register the SchemaRDD as a table.

In [12]:
schemaPeople.registerTempTable("people")
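
For later Spark versions (2.0+), the equivalent call on a DataFrame is createOrReplaceTempView:

schemaPeople.createOrReplaceTempView("people")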

SQL can be run over SchemaRDDs that have been registered as a table.

In [14]:
results = sqlContext.sql("SELECT sum(num1) FROM people")
In [18]:
results
Out[18]:
MapPartitionsRDD[52] at mapPartitions at SerDeUtil.scala:143

1 Answer:

Answer 0 (score: 2):

Like transformations on plain RDDs, a Spark SQL query is only a description of the required operations. To get the result, you have to trigger an action:

>>> results.first()
Row(_c0=1040953.1831101299)
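
Any other action works just as well. For example, assuming the same results object:

>>> results.collect()  # pulls all result rows to the driver
>>> results.show()     # DataFrames (Spark 1.3+) only: prints a tabular preview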

To be on the safe side, it is better to cast the data explicitly instead of relying on implicit conversions:

>>> result = sqlContext.sql("SELECT SUM(CAST(num1 AS FLOAT)) FROM people")
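
As before, this query computes nothing until an action runs on it. One caveat, assuming CAST AS DOUBLE is supported by your Spark SQL dialect: FLOAT is single precision, so casting to DOUBLE instead matches the double-precision result of the implicit conversion shown earlier:

>>> result = sqlContext.sql("SELECT SUM(CAST(num1 AS DOUBLE)) FROM people")
>>> result.first()  # triggers the computation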