How to include double quotes in spark sql concat?

Asked: 2017-06-06 16:32:55

Tags: apache-spark

I am trying to concatenate two columns, wrapping each value in double quotes as prefix and suffix. The code works, but it gives me extra double quotes.

Input:

campaign_file_name_1, campaign_name_1, shagdhsjagdhjsagdhrSqpaKa5saoaus89,    1
campaign_file_name_1, campaign_name_1, sagdhsagdhasjkjkasihdklas872hjsdjk,    2

Expected output:

 campaign_file_name_1, shagdhsjagdhjsagdhrSqpaKa5saoaus89,   "campaign_name_1"="1",  2017-06-06 17:09:31
 campaign_file_name_1, sagdhsagdhasjkjkasihdklas872hjsdjk,   "campaign_name_1"="2",  2017-06-06 17:09:31

Actual output from the code:

 campaign_file_name_1, shagdhsjagdhjsagdhrSqpaKa5saoaus89,   """campaign_name_1""=""1""",  2017-06-06 17:09:31
 campaign_file_name_1, sagdhsagdhasjkjkasihdklas872hjsdjk,   """campaign_name_1""=""2""",  2017-06-06 17:09:31

Spark Code:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.DataFrame
import org.slf4j.LoggerFactory

// BaseETL, ApplicationUtil, and the location/partition values are
// project-specific and come from the elided initialization code.
object campaignResultsMergerETL extends BaseETL {

  val now  = ApplicationUtil.getCurrentTimeStamp()
  val conf = new Configuration()
  val fs   = FileSystem.get(conf)
  val log  = LoggerFactory.getLogger(this.getClass.getName)

  def main(args: Array[String]): Unit = {
    //---------------------
    // code for sqlContext initialization
    //---------------------

    // Load the Avro input and register it for SQL access
    val campaignResultsDF = sqlContext.read.format("com.databricks.spark.avro").load(campaignResultsLoc)
    campaignResultsDF.registerTempTable("campaign_results")

    // Aggregate the measure per campaign/tracker
    val campaignGroupedDF = sqlContext.sql(
      """
        |SELECT campaign_file_name,
        |campaign_name,
        |tracker_id,
        |SUM(campaign_measure) AS campaign_measure
        |FROM campaign_results
        |GROUP BY campaign_file_name, campaign_name, tracker_id
      """.stripMargin)

    campaignGroupedDF.registerTempTable("campaign_results_full")

    // Wrap campaign_name and campaign_measure in double quotes, joined by '='
    val campaignMergedDF = sqlContext.sql(
      s"""
        |SELECT campaign_file_name,
        |tracker_id,
        |CONCAT('\"', campaign_name, '\"', '=', '\"', campaign_measure, '\"'),
        |"$now" AS audit_timestamp
        |FROM campaign_results_full
      """.stripMargin)

    saveAsCSVFiles(campaignMergedDF, campaignResultsExportLoc, numPartitions)
  }

  def saveAsCSVFiles(campaignMeasureDF: DataFrame, hdfs_output_loc: String, numPartitions: Int): Unit = {
    log.info("saveAsCSVFile method started")
    // Remove any previous output before writing
    if (fs.exists(new Path(hdfs_output_loc))) {
      fs.delete(new Path(hdfs_output_loc), true)
    }
    campaignMeasureDF.repartition(numPartitions).write.format("com.databricks.spark.csv").save(hdfs_output_loc)
    log.info("saveAsCSVFile method ended")
  }
}

Can someone help me fix this?

1 Answer:

Answer 0 (score: 2):

It looks like you enclosed the = incorrectly in your CONCAT arguments. Try:

|CONCAT('\"',campaign_name, '\"','=','\"',campaign_measure,'\"'),

[UPDATE]

Maybe your Spark version is different from mine, but it seems to work for me:

val df = Seq(("x", "y")).toDF("a", "b")

df.createOrReplaceTempView("df")

val df2 = spark.sqlContext.sql("""SELECT a, b, CONCAT('"', a, '"="', b, '"') as a_eq_b FROM df""")

df2.show
+---+---+-------+
|  a|  b| a_eq_b|
+---+---+-------+
|  x|  y|"x"="y"|
+---+---+-------+

df2.coalesce(1).write.option("header", "true").csv("/path/to/df2.csv")

/path/to/df2.csv content:
a,b,a_eq_b
x,y,"\"x\"=\"y\""
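
Those backslashes come from the CSV writer, not from CONCAT: the field contains the quote character, so the writer wraps it in quotes and escapes the embedded ones with the default escape character (backslash). As a variation (a sketch based on Spark's documented CSV write options; the output file name is illustrative), you can get RFC 4180-style doubled quotes by setting the escape character to the quote character:

// Sketch: escape embedded quotes with '"' instead of the default '\',
// which should double the embedded quotes in the output
df2.coalesce(1).write
  .option("header", "true")
  .option("escape", "\"")
  .csv("/path/to/df2rfc.csv")

// expected /path/to/df2rfc.csv content:
// a,b,a_eq_b
// x,y,"""x""=""y"""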

Now, you can optionally disable quoting altogether by setting the quote character to the null character, like this:

df2.coalesce(1).write.option("header", "true").option("quote", "\u0000").csv("/path/to/df2null.csv")

/path/to/df2null.csv content:
a,b,a_eq_b
x,y,"x"="y"

But note that if you need to read this CSV back in Spark, make sure to include the same quote option.
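
For example, a minimal read-back sketch (the path mirrors the example above; dfBack is my own name):

// Use the same null quote character on read so the embedded
// double quotes are treated as plain data, not field quoting
val dfBack = spark.read
  .option("header", "true")
  .option("quote", "\u0000")
  .csv("/path/to/df2null.csv")

dfBack.show(false)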