Spark mapping String to java.sql.Timestamp produces non-deterministic exceptions

Asked: 2016-02-24 20:33:27

Tags: scala apache-spark

As part of learning Spark, I am trying to analyze issues from an internal defect-tracking system. Here is the sample CSV data I am working with:

#;Projekt;Temat;Status;Typ zagadnienia;Miejsce wystąpienia;Obszar;Data utworzenia;Data zamknięcia;Przepracowany czas
10317;CENTRALA;some random topic;INBOX;Wsparcie;some place;some area;2016-02-22 13:33;;0,5
10315;CENTRALA;some random topic;Rozwiązany;Wsparcie;some place;some area;2016-02-22 13:28;2016-02-22 17:52;0,5
10313;CENTRALA;some random topic;Weryfikacja;Utrudnione działanie systemu;some place;some area;2016-02-22 12:39;;0,75
10311;CENTRALA;some random topic;Przypisany;Wsparcie;some place;some area;2016-02-22 11:57;;0
10309;CENTRALA;some random topic;INBOX;Wsparcie;some place;some area;2016-02-22 11:50;;0,83
10307;CENTRALA;some random topic;Rozwiązany;Wsparcie;some place;some area;2016-02-22 11:35;2016-02-22 13:18;0,42
10305;CENTRALA;some random topic;Przypisany;Usterka części systemu;some place;some area;2016-02-22 10:47;;0
10303;CENTRALA;some random topic;Nowy;Wsparcie;some place;some area;2016-02-22 10:39;;0
10301;CENTRALA;some random topic;Rozwiązany;Wsparcie;some place;some area;2016-02-20 11:30;2016-02-22 15:53;0,25
10297;CENTRALA;some random topic;INBOX;Utrudnione działanie systemu;some place;some area;2016-02-19 15:52;;0
10295;CENTRALA;some random topic;Przypisany;Utrudnione działanie systemu;some place;some area;2016-02-19 15:51;;0
10293;CENTRALA;some random topic;Rozwiązany;Wsparcie;some place;some area;2016-02-19 14:25;2016-02-19 14:25;0,25
10291;CENTRALA;some random topic;Przypisany;Wsparcie;some place;some area;2016-02-19 14:24;;0
10289;CENTRALA;some random topic;Rozwiązany;Wsparcie;some place;some area;2016-02-19 12:03;2016-02-19 15:00;1
10287;CENTRALA;some random topic;Weryfikacja;Wsparcie;some place;some area;2016-02-19 10:12;;0,33
10285;CENTRALA;some random topic;Dostępny na PRD;Usterka części systemu;some place;some area;2016-02-19 08:00;;1,5
10283;CENTRALA;some random topic;Nowy;Wsparcie;some place;some area;2016-02-18 18:56;;0
10281;CENTRALA;some random topic;Rozwiązany;Wsparcie;some place;some area;2016-02-18 16:59;2016-02-22 15:52;0,25
10279;CENTRALA;some random topic;Rozwiązany;Wsparcie;some place;some area;2016-02-18 16:33;2016-02-18 16:33;0,33
10277;CENTRALA;some random topic;Rozwiązany;Wsparcie;some place;some area;2016-02-18 16:04;2016-02-22 15:45;0,25

Using the following Scala code:

  import scala.util.Try
  import java.text.SimpleDateFormat
  import java.sql.Timestamp
  import org.apache.spark.sql._
  import org.apache.spark.sql.functions._

  val issuesCSV = sc.textFile("""./issues_3.csv""")

  case class Issue(id: Int, 
                 project: String, 
                 topic: String, 
                 status: String,
                 issue_type: String, 
                 location: List[String],
                 area: List[String],
                 opened: Timestamp, 
                 closed: Option[Timestamp], 
                 spent_time: Float)

  val formatter = new SimpleDateFormat("yyyy-MM-dd hh:mm");

  val sqlContext = new org.apache.spark.sql.SQLContext(sc)
  import sqlContext.implicits._

  val issues = issuesCSV
    .mapPartitionsWithIndex { (idx, iter) => if (idx == 0) iter.drop(1) else iter }
    .map(_.split(";"))
    .map(i => Issue(
      i(0).toInt,
      i(1),
      i(2),
      i(3),
      i(4),
      i(5).split(',').toList,
      i(6).split(',').toList,
      new java.sql.Timestamp(formatter.parse(i(7)).getTime),
      Try(new java.sql.Timestamp(formatter.parse(i(8)).getTime)).toOption,
      i(9).replace(',', '.').toFloat)).toDF()

  issues.printSchema()
  issues.count()
  issues.map(i => i(7)).collect()

Now my problem is that the last line behaves non-deterministically: sometimes it gives me the expected result, but more often it throws an exception like the following:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 60.0 failed 1 times, most recent failure: Lost task 0.0 in stage 60.0 (TID 112, localhost): java.lang.NumberFormatException: multiple points
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1890)
    at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
    at java.lang.Double.parseDouble(Double.java:538)
    at java.text.DigitList.getDouble(DigitList.java:169)
    at java.text.DecimalFormat.parse(DecimalFormat.java:2056)
    at java.text.SimpleDateFormat.subParse(SimpleDateFormat.java:1869)
    at java.text.SimpleDateFormat.parse(SimpleDateFormat.java:1514)
    at java.text.DateFormat.parse(DateFormat.java:364)
    at $anonfun$2.apply(<console>:90)

The error is always related to date parsing, although the exact message differs from run to run: "multiple points", For input string: ".2216", For input string: "" (none of which actually appear in the file). I now suspect this has something to do with constructing the case class objects in a multithreaded environment, but I really don't know what could be going wrong. As for the environment, I am running spark-notebook locally with Scala [2.11.7], Spark [1.6.0], Hadoop [2.7.1].

1 Answer:

Answer 0 (score: 1)

SimpleDateFormat is not thread-safe, which is why the exceptions occur intermittently. You need to use a thread-safe type instead (e.g. Joda-Time's DateTimeFormat) or restrict the formatter's use to a single thread.
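A minimal sketch of one such thread-safe alternative, assuming Java 8 is available: `java.time.format.DateTimeFormatter` is immutable, so a single instance can safely be shared across concurrently running Spark tasks. (Note the pattern below uses uppercase `HH` for 24-hour times; the lowercase `hh` in the question's pattern means 1-12 clock-hour and would silently misparse afternoon timestamps.)

```scala
import java.sql.Timestamp
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

object SafeParsing {
  // Immutable and thread-safe, unlike SimpleDateFormat; safe to share.
  val fmt: DateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm")

  def parseTs(s: String): Timestamp =
    Timestamp.valueOf(LocalDateTime.parse(s, fmt))

  def main(args: Array[String]): Unit = {
    // Parse the same string from many threads at once; with a shared
    // SimpleDateFormat this kind of loop intermittently corrupts state.
    val results = (1 to 100).par.map(_ => parseTs("2016-02-22 17:52"))
    assert(results.forall(_ == Timestamp.valueOf("2016-02-22 17:52:00")))
    println("all parses consistent")
  }
}
```

Alternatively, if you want to keep `SimpleDateFormat`, create one instance per partition (e.g. inside `mapPartitions`) so no two threads ever share it.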