How to create a KinesisSink for Spark Structured Streaming

Date: 2018-03-13 11:22:26

Tags: apache-spark spark-streaming amazon-kinesis

I am using Spark 2.2 on Databricks and trying to implement a Kinesis sink to write from Spark to a Kinesis stream.

I am using the example provided at https://docs.databricks.com/_static/notebooks/structured-streaming-kinesis-sink.html

I implement the KinesisSink class:
import java.nio.ByteBuffer

import scala.collection.mutable.ArrayBuffer

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.services.kinesis.{AmazonKinesis, AmazonKinesisClientBuilder}
import com.amazonaws.services.kinesis.model.{PutRecordsRequest, PutRecordsRequestEntry}
import org.apache.spark.sql.ForeachWriter

/**
 * A simple Sink that writes to the given Amazon Kinesis `stream` in the given `region`. For authentication, users may provide
 * `awsAccessKey` and `awsSecretKey`, or use IAM Roles when launching their cluster.
 *
 * This Sink takes a two column Dataset, with the columns being the `partitionKey`, and the `data` respectively.
 * We will buffer data up to `maxBufferSize` before flushing to Kinesis in order to reduce cost.
 */
class KinesisSink(
    stream: String,
    region: String,
    awsAccessKey: Option[String] = None,
    awsSecretKey: Option[String] = None) extends ForeachWriter[(String, Array[Byte])] {

  // Configurations
  private val maxBufferSize = 500 * 1024 // 500 KB

  private var client: AmazonKinesis = _
  private val buffer = new ArrayBuffer[PutRecordsRequestEntry]()
  private var bufferSize: Long = 0L

  override def open(partitionId: Long, version: Long): Boolean = {
    client = createClient
    true
  }

  override def process(value: (String, Array[Byte])): Unit = {
    val (partitionKey, data) = value
    // A maximum of 500 records can be sent with a single `putRecords` request,
    // so flush before exceeding either the record-count or the buffer-size limit
    if ((data.length + bufferSize > maxBufferSize && buffer.nonEmpty) || buffer.length == 500) {
      flush()
    }
    buffer += new PutRecordsRequestEntry().withPartitionKey(partitionKey).withData(ByteBuffer.wrap(data))
    bufferSize += data.length
  }

  override def close(errorOrNull: Throwable): Unit = {
    if (buffer.nonEmpty) {
      flush()
    }
    client.shutdown()
  }

  /** Flush the buffer to Kinesis */
  private def flush(): Unit = {
    val recordRequest = new PutRecordsRequest()
      .withStreamName(stream)
      .withRecords(buffer: _*)

    client.putRecords(recordRequest)
    buffer.clear()
    bufferSize = 0
  }

  /** Create a Kinesis client. */
  private def createClient: AmazonKinesis = {
    val cli = if (awsAccessKey.isEmpty || awsSecretKey.isEmpty) {
      AmazonKinesisClientBuilder.standard()
        .withRegion(region)
        .build()
    } else {
      AmazonKinesisClientBuilder.standard()
        .withRegion(region)
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(awsAccessKey.get, awsSecretKey.get)))
        .build()
    }
    cli
  }
}

Then I create the sink with

val kinesisSink = new KinesisSink("us-east-1", "MyStream", Option("xxx..."), Option("xxx..."))

Finally I create a stream using this sink. The KinesisSink takes a two-column Dataset, with the columns being the partitionKey and the data respectively:

case class MyData(partitionKey: String, data: Array[Byte])

val newsDataDF = kinesisDF
  .selectExpr("apinewsseqid", "fullcontent").as[MyData]
  .writeStream
  .outputMode("append")
  .foreach(kinesisSink)
  .start

but I am still getting a type mismatch error on the .foreach(kinesisSink) call.
2 Answers:

Answer 0 (score: 0)

You need to change the signature of the KinesisSink.process method: it should take your custom MyData object and then extract partitionKey and data from it.
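For illustration, here is a compact, self-contained sketch of such a writer. It is deliberately simplified: it sends records one at a time with putRecord and relies on the default AWS credential provider chain, whereas the batched putRecords buffering from the question can be kept unchanged; the only change this answer calls for is the ForeachWriter type parameter and the process signature. The name MyDataKinesisSink is used only for this sketch.

import java.nio.ByteBuffer

import com.amazonaws.services.kinesis.{AmazonKinesis, AmazonKinesisClientBuilder}
import org.apache.spark.sql.ForeachWriter

case class MyData(partitionKey: String, data: Array[Byte])

// Simplified, unbuffered variant: the writer is parameterized on MyData,
// so process() receives the case class and pulls the key and payload out of it.
class MyDataKinesisSink(stream: String, region: String) extends ForeachWriter[MyData] {

  private var client: AmazonKinesis = _

  override def open(partitionId: Long, version: Long): Boolean = {
    // Uses the default credential provider chain (IAM role, env vars, etc.)
    client = AmazonKinesisClientBuilder.standard().withRegion(region).build()
    true
  }

  override def process(value: MyData): Unit = {
    // Extract partitionKey and data from the case class instead of a tuple
    client.putRecord(stream, ByteBuffer.wrap(value.data), value.partitionKey)
  }

  override def close(errorOrNull: Throwable): Unit = {
    if (client != null) client.shutdown()
  }
}

With this change the question's stream definition ending in .as[MyData] type-checks, because the Dataset element type and the ForeachWriter type parameter now agree.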

Answer 1 (score: 0)

I used exactly the same KinesisSink provided by Databricks and got it working by creating the dataset with

val dataset = df.selectExpr("CAST(rand() AS STRING) as partitionKey","message_bytes").as[(String, Array[Byte])]

and writing the dataset to the Kinesis stream with

val query = dataset
  .writeStream
  .foreach(kinesisSink)
  .start()
  .awaitTermination()

I got the same type mismatch error when I used dataset.selectExpr("partitionKey","message_bytes"):

error: type mismatch;
 found   : KinesisSink
 required: org.apache.spark.sql.ForeachWriter[(String, Array[Byte])]
   .foreach(kinesisSink)

In this case the selectExpr is not needed, since the Dataset's element type drives the data type expected by the ForeachWriter.
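To make that type relationship explicit, here is a minimal sketch. It assumes a Databricks/spark-shell context where spark is the active SparkSession; df, partitionKey, message_bytes and kinesisSink are the names from the snippets above, and the typed value name is just for illustration.

import org.apache.spark.sql.Dataset
import spark.implicits._ // provides the encoder used by .as[(String, Array[Byte])]

// foreach(writer) requires the writer's type parameter to match the Dataset's element type.
// selectExpr alone yields an untyped DataFrame (Dataset[Row]); the .as[...] call is what
// lines the element type up with ForeachWriter[(String, Array[Byte])].
val typed: Dataset[(String, Array[Byte])] =
  df.selectExpr("CAST(rand() AS STRING) AS partitionKey", "message_bytes")
    .as[(String, Array[Byte])]

typed.writeStream
  .foreach(kinesisSink) // kinesisSink extends ForeachWriter[(String, Array[Byte])]
  .start()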