I have the following function in Scala:
def getData(spark: SparkSession,
            indices: Option[String]): Option[DataFrame] = {
  indices match {
    case None => {
      println("Undefined field.")
      None
    }
    case Some(ind) => {
      val df = spark
        .read.format("org.elasticsearch.spark.sql")
        .load(ind)
      df
    }
  }
}
However, I get a compilation error:
Expression of type sql.DataFrame doesn't conform to expected type Option[sql.DataFrame]
I tried returning Option[df], but that doesn't compile either.
Answer 0 (score: 1)
Change your df to Some(df).
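The change applies to the last expression of the Some(ind) branch; a minimal sketch of that branch only, keeping the names used in the question:

case Some(ind) => {
  val df = spark
    .read.format("org.elasticsearch.spark.sql")
    .load(ind)
  Some(df) // wrap the DataFrame so the branch evaluates to Option[DataFrame]
}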
Answer 1 (score: 1)
You returned the wrong type: in the Some(ind) branch the last expression is a plain DataFrame, while the function is declared to return Option[DataFrame]. (Option[df] doesn't compile either, because square brackets take a type parameter, not a value.) Wrap the result in Some:
def getData(spark: SparkSession,
            indices: Option[String]): Option[DataFrame] = {
  indices match {
    case None => {
      println("Undefined field.")
      None
    }
    case Some(ind) => {
      val df = spark
        .read.format("org.elasticsearch.spark.sql")
        .load(ind)
      Some(df) // <- Here!!!
    }
  }
}
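More idiomatically, the pattern match can be replaced by mapping over the Option: map wraps the result in Some automatically and yields None when indices is empty. A sketch of that variant (it drops the println for the empty case):

def getData(spark: SparkSession,
            indices: Option[String]): Option[DataFrame] =
  indices.map { ind =>
    // the read only happens when an index name is actually present
    spark
      .read.format("org.elasticsearch.spark.sql")
      .load(ind)
  }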