Building a DataFrame

Date: 2019-06-19 05:50:22

Tags: scala apache-spark dataframe

I'm trying to build a DataFrame with 10k records and then save it to a Parquet file on standalone Spark 2.4.3. The following works at small scale, up to about 1000 records, but takes forever once I increase it to 10k.

scala> import spark.implicits._
import spark.implicits._

scala> var someDF = Seq((0, "item0")).toDF("x", "y")
someDF: org.apache.spark.sql.DataFrame = [x: int, y: string]

scala> for ( i <- 1 to 1000 ) {someDF = someDF.union(Seq((i,"item"+i)).toDF("x", "y")) }

scala>   someDF.show
+---+------+
|  x|     y|
+---+------+
|  0| item0|
|  1| item1|
|  2| item2|
|  3| item3|
|  4| item4|
|  5| item5|
|  6| item6|
|  7| item7|
|  8| item8|
|  9| item9|
| 10|item10|
| 11|item11|
| 12|item12|
| 13|item13|
| 14|item14|
| 15|item15|
| 16|item16|
| 17|item17|
| 18|item18|
| 19|item19|
+---+------+
only showing top 20 rows


[Stage 2:=========================================================(20 + 0) / 20]

scala> var someDF = Seq((0, "item0")).toDF("x", "y")
someDF: org.apache.spark.sql.DataFrame = [x: int, y: string]

scala>   someDF.show
+---+-----+
|  x|    y|
+---+-----+
|  0|item0|
+---+-----+


scala> for ( i <- 1 to 10000 ) {someDF = someDF.union(Seq((i,"item"+i)).toDF("x", "y")) }

I just want to save someDF to a Parquet file that I can then load into Impala.
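A minimal sketch of that save step (the path is a placeholder, not from the original post; Impala can then query the directory through an external table pointing at it):

//Hypothetical path; mode("overwrite") replaces any previous output
scala> someDF.write.mode("overwrite").parquet("<hdfs path visible to Impala>")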

1 Answer:

Answer 0: (score: 2)
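The slowdown comes from building the DataFrame one union at a time: every iteration wraps the logical plan in another union node, so Spark has to analyze an ever-growing plan. Generating the whole range in a single step avoids that: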

//Declare the range you want
scala> val r = 1 to 10000

//Create a single-column DataFrame from the range
scala> val df = sc.parallelize(r).toDF("x")

//Add new column "y" = "item" + x.
//col, concat, and lit come from sql.functions, which spark-shell does not import automatically
scala> import org.apache.spark.sql.functions._
scala> val final_df = df.select(col("x"), concat(lit("item"), col("x")).alias("y"))
scala> final_df.show

+---+------+
|  x|     y|
+---+------+
|  1| item1|
|  2| item2|
|  3| item3|
|  4| item4|
|  5| item5|
|  6| item6|
|  7| item7|
|  8| item8|
|  9| item9|
| 10|item10|
| 11|item11|
| 12|item12|
| 13|item13|
| 14|item14|
| 15|item15|
| 16|item16|
| 17|item17|
| 18|item18|
| 19|item19|
| 20|item20|
+---+------+

scala> final_df.count
res17: Long = 10000

//Write final_df to path in parquet format
scala> final_df.write.format("parquet").save(<path to write>)
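To sanity-check the output before pointing Impala at it, the directory can be read back using the same placeholder path. Note that the answer's range starts at 1; use 0 to 10000 to reproduce the original someDF, which began at item0.

//Optional check: read the Parquet output back and verify the row count
scala> spark.read.parquet(<path to write>).count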