Spark Structured Streaming error

Time: 2018-05-22 18:22:08

Tags: scala apache-spark streaming

I am trying to execute the following code:

import org.apache.spark.sql.types._

// Schema defined explicitly with StructType/StructField
val schema = StructType(
  StructField("id", LongType, false) ::
  StructField("name", StringType, true) ::
  StructField("city", StringType, true) :: Nil)

case class Person(id: Long, name: String, city: String)

import org.apache.spark.sql.Encoders

// The same schema derived from the case class (redefines schema in the shell)
val schema = Encoders.product[Person].schema

// Read the CSV files as a streaming Dataset[Person]
val people = spark.readStream.schema(schema).csv("/data/pncdw/scratch/test/*.csv").as[Person]

// Running count of people per city
import org.apache.spark.sql.functions._
val population = people.groupBy('city).agg(count('city) as "population")

import scala.concurrent.duration._
import org.apache.spark.sql.streaming.{OutputMode, Trigger}

// Write the aggregation to the console every 30 seconds
val populationStream = population.
  writeStream.
  format("console").
  trigger(Trigger.ProcessingTime(30.seconds)).
  outputMode(OutputMode.Complete).
  queryName("textStream").start

I have loaded all the required jars and dependencies, but when I run the following I get this error:

ERROR : scala> populationStream.trigger(Trigger.once)
<console>:43: error: not found: value Trigger
populationStream.trigger(Trigger.once)

1 answer:

Answer 0 (score: 0)

trigger is a method of DataStreamWriter.

You are calling it on populationStream, which is the StreamingQuery returned by the start method.
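
For illustration, here is a minimal sketch of where trigger sits in the call chain (assuming the population Dataset and the spark-shell session from the question; the intermediate writer value is only there to make the two types visible):

import org.apache.spark.sql.streaming.{OutputMode, Trigger}

// trigger(...) is configured on the DataStreamWriter, i.e. before start
val writer = population.
  writeStream.
  format("console").
  trigger(Trigger.Once).
  outputMode(OutputMode.Complete).
  queryName("textStream")

// start returns a StreamingQuery, and that object has no trigger method
val query = writer.start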

If you want to set the trigger for this query, replace the existing query with:

val populationStream = population.
  writeStream.
  format("console").
  trigger(Trigger.Once).
  outputMode(OutputMode.Complete).
  queryName("textStream").start