It seems that they both return a new DataFrame.
Source code:
def toDF(self, *cols):
    jdf = self._jdf.toDF(self._jseq(cols))
    return DataFrame(jdf, self.sql_ctx)

def select(self, *cols):
    jdf = self._jdf.select(self._jcols(*cols))
    return DataFrame(jdf, self.sql_ctx)
Answer (score: 3)
The difference is subtle. For example, if you convert an unnamed tuple ("Pete", 22) into a DataFrame with .toDF("name", "age"), you can also rename the DataFrame's columns by calling toDF again. For example:
scala> val rdd = sc.parallelize(List(("Piter", 22), ("Gurbe", 27)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[2] at parallelize at <console>:27
scala> val df = rdd.toDF("name", "age")
df: org.apache.spark.sql.DataFrame = [name: string, age: int]
scala> df.show()
+-----+---+
| name|age|
+-----+---+
|Piter| 22|
|Gurbe| 27|
+-----+---+
scala> val df = rdd.toDF("person", "age")
df: org.apache.spark.sql.DataFrame = [person: string, age: int]
scala> df.show()
+------+---+
|person|age|
+------+---+
| Piter| 22|
| Gurbe| 27|
+------+---+
With select you can pick out columns, which you can later use to project the table, or to save only the columns you need:
scala> df.select("age").show()
+---+
|age|
+---+
| 22|
| 27|
+---+
scala> df.select("age").write.save("/tmp/ages.parquet")
Scaling row group sizes to 88.37% for 8 writers.
Hope this helps!