Azure HDI Spark: error importing sqlContext.implicits._

Date: 2018-12-01 14:34:51

Tags: azure apache-spark hdinsight

I am having a problem importing data from an Azure Blob Storage CSV file into Spark via a Jupyter notebook. I am trying to follow one of the tutorials on ML and Spark. When I fill in the Jupyter notebook like this:

import sqlContext.implicits._
val flightDelayTextLines = sc.textFile("wasb://sparkcontainer@[my account].blob.core.windows.net/sparkcontainer/Scored_FlightsAndWeather.csv")

case class AirportFlightDelays(OriginAirportCode:String,OriginLatLong:String,Month:Integer,Day:Integer,Hour:Integer,Carrier:String,DelayPredicted:Integer,DelayProbability:Double)

val flightDelayRowsWithoutHeader = flightDelayTextLines.map(s => s.split(",")).filter(line => line(0) != "OriginAirportCode")

val resultDataFrame = flightDelayRowsWithoutHeader.map(
    s => AirportFlightDelays(
        s(0), //Airport code
        s(13) + "," + s(14), //Lat,Long
        s(1).toInt, //Month
        s(2).toInt, //Day
        s(3).toInt, //Hour
        s(5), //Carrier
        s(11).toInt, //DelayPredicted
        s(12).toDouble //DelayProbability
        )
).toDF()

resultDataFrame.write.mode("overwrite").saveAsTable("FlightDelays") 

I get an error like this:

SparkSession available as 'spark'.
<console>:23: error: not found: value sqlContext
       import sqlContext.implicits._
              ^

I also tried the short-form path ("wasb:///sparkcontainer/Scored_FlightsAndWeather.csv") and got the same error. Any ideas? BR, Marek
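As a side check, the split/filter/parse logic in the question can be exercised locally without a cluster. A minimal sketch, using a hypothetical 15-column sample row invented to match the column indices the snippet assumes (the real CSV layout may differ):

```scala
// Same case class as in the question, with Scala's Int/Double field types.
case class AirportFlightDelays(
  OriginAirportCode: String, OriginLatLong: String, Month: Int, Day: Int,
  Hour: Int, Carrier: String, DelayPredicted: Int, DelayProbability: Double)

// Mirrors the map step: split a CSV line and pick out the assumed columns.
def parseRow(line: String): AirportFlightDelays = {
  val s = line.split(",")
  AirportFlightDelays(
    s(0),                // airport code
    s(13) + "," + s(14), // "lat,long"
    s(1).toInt,          // month
    s(2).toInt,          // day
    s(3).toInt,          // hour
    s(5),                // carrier
    s(11).toInt,         // delay predicted
    s(12).toDouble)      // delay probability
}

// Hypothetical sample row with 15 comma-separated fields; "x" pads unused columns.
val sample = "JFK,5,12,17,x,DL,x,x,x,x,x,0,0.42,40.64,-73.78"
println(parseRow(sample).OriginLatLong) // prints 40.64,-73.78
```

This kind of local check makes it easier to catch index or type errors in the row parsing before running the job on HDInsight.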

1 Answer:

Answer 0 (score: 0)

Looking at your code snippet, the sqlContext is never created. Refer to the following code to create the sqlContext, then start using it:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
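Note that the console banner in the question ("SparkSession available as 'spark'") indicates Spark 2.x, where SQLContext is superseded by SparkSession. A minimal sketch of the alternative, assuming the notebook's pre-created `spark` session:

```scala
// Spark 2.x notebooks (including HDInsight Jupyter) pre-create a SparkSession
// named `spark`; its implicits provide toDF() without any SQLContext:
import spark.implicits._

// Alternatively, if tutorial code insists on a sqlContext, the session exposes one:
// val sqlContext = spark.sqlContext
// import sqlContext.implicits._
```

Either route avoids constructing `new SQLContext(sc)` by hand, whose constructor is deprecated as of Spark 2.0 (though it still works).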
