I'm having trouble importing data from an Azure Blob Storage CSV file into Spark through a Jupyter notebook. I'm trying to follow one of the tutorials on ML and Spark. When I populate the Jupyter notebook like this:
import sqlContext.implicits._
val flightDelayTextLines = sc.textFile("wasb://sparkcontainer@[my account].blob.core.windows.net/sparkcontainer/Scored_FlightsAndWeather.csv")
case class AirportFlightDelays(OriginAirportCode:String,OriginLatLong:String,Month:Integer,Day:Integer,Hour:Integer,Carrier:String,DelayPredicted:Integer,DelayProbability:Double)
val flightDelayRowsWithoutHeader = flightDelayTextLines.map(s => s.split(",")).filter(line => line(0) != "OriginAirportCode")
val resultDataFrame = flightDelayRowsWithoutHeader.map(
  s => AirportFlightDelays(
    s(0),                 // Airport code
    s(13) + "," + s(14),  // Lat,Long
    s(1).toInt,           // Month
    s(2).toInt,           // Day
    s(3).toInt,           // Hour
    s(5),                 // Carrier
    s(11).toInt,          // DelayPredicted
    s(12).toDouble        // DelayProbability
  )
).toDF()
resultDataFrame.write.mode("overwrite").saveAsTable("FlightDelays")
I get an error like this:
SparkSession available as 'spark'.
<console>:23: error: not found: value sqlContext
import sqlContext.implicits._
^
I also tried the short path ("wasb:///sparkcontainer/Scored_FlightsAndWeather.csv") and got the same error.
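Looking at the first line of the output, "SparkSession available as 'spark'", I suspect the kernel is running Spark 2.x, where no sqlContext value is predefined and the implicits live on the session object instead. A minimal sketch of what I think the import should be (assuming the session really is exposed as spark, as the startup log reports; the rest of my code unchanged):

// Minimal sketch, assuming a Spark 2.x kernel that exposes the
// SparkSession as `spark` (as the startup log reports).
import spark.implicits._   // instead of: import sqlContext.implicits._

// If some older API still wants an explicit SQLContext, it can be
// derived from the session rather than assumed to exist:
// val sqlContext = spark.sqlContext

With that change I would expect toDF() to resolve, though I don't know whether the wasb path is a separate problem.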
Any ideas?
BR
Marek