Runtime error with udf in Scala Spark

Date: 2017-06-12 21:50:44

Tags: scala apache-spark apache-spark-sql spark-dataframe udf

I'm trying to create a new column in a DataFrame. The new column will contain a formatted date string built from a long timestamp in milliseconds.

I keep getting this error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.DataFrameReader.jdbc(Ljava/lang/String;Ljava/lang/String;Ljava/util/Properties;)Lorg/apache/spark/sql/Dataset;

It occurs in this code:

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SQLContext}
import joptsimple.OptionParser
import org.apache.spark.sql.functions._
import java.text.SimpleDateFormat

import org.apache.spark.sql.functions.udf
    .
    .
    .
    // Format a millisecond epoch timestamp into a date string
    val formatDateUDF = udf((ts: Long) => {
      new SimpleDateFormat("yyyy.MM.dd.HH.mm.ss").format(ts)
    })
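For context, applying this UDF would look something like the sketch below; the DataFrame `df` and its millisecond column `ts` are hypothetical names, not from the original post.

// Hypothetical usage sketch: `df` and its column "ts" are assumed names.
// The UDF receives each row's Long value of "ts" and returns a String.
val withFormatted = df.withColumn("ts_formatted", formatDateUDF(col("ts")))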

I'm using the following dependencies in build.sbt:

scalaVersion := "2.11.11"

libraryDependencies ++= Seq(
  // Spark dependencies
  "org.apache.spark" % "spark-hive_2.11" % "2.1.1" % "provided",
  "org.apache.spark" % "spark-mllib_2.11" % "2.1.1" % "provided",
  // Third-party libraries
  "postgresql" % "postgresql" % "9.1-901-1.jdbc4",
  "net.sf.jopt-simple" % "jopt-simple" % "5.0.3",
  "org.scalactic" %% "scalactic" % "3.0.1",
  "org.scalatest" %% "scalatest" % "3.0.1" % "test",
  "joda-time" % "joda-time" % "2.9.9"
)

I'm open to doing this another way, if it's easier (or at least works).

1 Answer:

Answer 0 (score: 0)

I think the built-in from_unixtime function would work better here:

// Assuming a SparkSession in scope as `spark`
import spark.implicits._
import org.apache.spark.sql.functions.from_unixtime

// Note: these example timestamps are epoch seconds, not milliseconds
val input = List(
  ("a", 1497348453L),
  ("b", 1497345453L),
  ("c", 1497341453L),
  ("d", 1497340453L)
).toDF("name", "timestamp")

input.select(
  'name,
  from_unixtime('timestamp, "yyyy.MM.dd.HH.mm.ss").alias("timestamp_formatted")
).show()

Output:

+----+-------------------+
|name|timestamp_formatted|
+----+-------------------+
|   a|2017.06.13.12.07.33|
|   b|2017.06.13.11.17.33|
|   c|2017.06.13.10.10.53|
|   d|2017.06.13.09.54.13|
+----+-------------------+
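One caveat, since the question mentions millisecond timestamps: from_unixtime expects seconds since the epoch, so millisecond values need to be divided by 1000 first. A minimal sketch of that adjustment, assuming the same `input` DataFrame but with milliseconds in the "timestamp" column:

// Hedged sketch: assumes "timestamp" holds epoch milliseconds.
// from_unixtime expects seconds, so divide by 1000 and cast back to long.
input.select(
  'name,
  from_unixtime(('timestamp / 1000).cast("long"), "yyyy.MM.dd.HH.mm.ss")
    .alias("timestamp_formatted")
).show()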