Saving Kafka messages into HBase through Spark: the sessions never get closed

Asked: 2016-08-24 16:30:28

Tags: apache-spark hbase apache-kafka spark-streaming hortonworks-data-platform

I am trying to use Spark Streaming to receive messages from Kafka, convert them into Puts, and insert them into HBase. I create an input DStream to receive the messages from Kafka, then create a JobConf, and finally use saveAsHadoopDataset(JobConf) to save the records into HBase.

Every time a record is inserted into HBase, a session from HBase to ZooKeeper is established but never closed. Once the number of connections exceeds ZooKeeper's maximum number of client connections, the Spark Streaming application crashes.

My code is shown below:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.spark.SparkConf
import org.apache.spark.streaming.StreamingContext
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.kafka._
import kafka.serializer.StringDecoder

object ReceiveKafkaAsDstream {

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("ReceiveKafkaAsDstream")
    val ssc = new StreamingContext(sparkConf, Seconds(1))

    val topics = "test"
    val brokers = "10.0.2.15:6667"

    val topicSet = topics.split(",").toSet
    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)

    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topicSet)

    val tableName = "KafkaTable"
    val conf = HBaseConfiguration.create()
    conf.set("zookeeper.znode.parent", "/hbase-unsecure")
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    conf.set(TableOutputFormat.OUTPUT_TABLE, tableName)

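    // old mapred API: configure a JobConf with TableOutputFormat from org.apache.hadoop.hbase.mapred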
    val jobConfig: JobConf = new JobConf(conf, this.getClass)
    jobConfig.set("mapreduce.output.fileoutputformat", "/user/root/out")
    jobConfig.setOutputFormat(classOf[TableOutputFormat])
    jobConfig.set(TableOutputFormat.OUTPUT_TABLE, tableName)

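    // every micro-batch is written through the old mapred API; this is where the ZooKeeper sessions described above are opened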
    val records = messages
      .map(_._2)
      .map(SampleKafkaRecord.parseToSampleRecord)
    records.print()
    records.foreachRDD { rdd => rdd.map(SampleKafkaRecord.SampleToHbasePut).saveAsHadoopDataset(jobConfig) }

    ssc.start()
    ssc.awaitTermination()
  }

  case class SampleKafkaRecord(id: String, name: String)
  object SampleKafkaRecord extends Serializable {
    def parseToSampleRecord(line: String): SampleKafkaRecord = {
      val values = line.split(";")
      SampleKafkaRecord(values(0), values(1))
    }

    def SampleToHbasePut(CSVData: SampleKafkaRecord): (ImmutableBytesWritable, Put) = {
      val rowKey = CSVData.id
      val putOnce = new Put(rowKey.getBytes)

      putOnce.addColumn("cf1".getBytes, "column-Name".getBytes, CSVData.name.getBytes)
      (new ImmutableBytesWritable(rowKey.getBytes), putOnce)
    }
  }
}

I set the batch duration of the ssc (StreamingContext) to 1 second and set maxClientCnxns to 10 in the ZooKeeper configuration file zoo.cfg, so at most 10 connections are allowed from a single client to ZooKeeper.
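For reference, the relevant line in zoo.cfg looks roughly like this (the rest of the file is left as the HDP sandbox ships it):

# zoo.cfg: maximum number of concurrent connections a single client may open to ZooKeeper
maxClientCnxns=10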

After 10 seconds (10 sessions established from HBase to ZooKeeper), I get errors like the following:

16/08/24 14:59:30 WARN RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase-unsecure/hbaseid
16/08/24 14:59:31 INFO ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
16/08/24 14:59:31 INFO ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
16/08/24 14:59:31 WARN ClientCnxn: Session 0x0 for server localhost.localdomain/127.0.0.1:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:192)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:68)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1125)

As I understand it, this error appears because the number of connections exceeds ZooKeeper's maximum. If I set maxClientCnxns to 20, the streaming job survives for 20 seconds. I know I could set maxClientCnxns to unlimited, but I really don't think that is a good way to solve this problem.

Another thing: if I use textFileStream to read text files as a DStream and save them into HBase with saveAsHadoopDataset(jobConf), it works perfectly well. There is also no problem if I only read data from Kafka with val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers) and just print the messages. The problem only appears when I receive the Kafka messages and then save them into HBase within the same application.

My environment is the HDP 2.4 sandbox, with Spark 1.6, HBase 1.1.2, Kafka 2.10.0 and ZooKeeper 3.4.6.

Any help is appreciated.

1 Answer:

Answer 0 (score: 2)

Well, I finally figured it out.

  1. Set a property: there is a property called "zookeeper.connection.timeout.ms". This property should be set to 1.
  2. Change to the new API: change the method saveAsHadoopDataset(JobConf) to saveAsNewAPIHadoopDataset(JobConf). I still don't know why the old API does not work. Accordingly, change import org.apache.hadoop.hbase.mapred.TableOutputFormat to import org.apache.hadoop.hbase.mapreduce.TableOutputFormat. A sketch of the resulting write path is shown below.
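
For reference, here is a minimal sketch of what the write path looks like after both changes. It reuses the records DStream and SampleKafkaRecord.SampleToHbasePut from the question; the table name and ZooKeeper settings are the ones from my configuration, and anything else is an assumption that may need adjusting for your cluster:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
// note: the new-API TableOutputFormat lives in the mapreduce package, not mapred
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.mapreduce.Job

val hbaseConf: Configuration = HBaseConfiguration.create()
hbaseConf.set("zookeeper.znode.parent", "/hbase-unsecure")
hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")
// step 1: set zookeeper.connection.timeout.ms to 1
hbaseConf.set("zookeeper.connection.timeout.ms", "1")
hbaseConf.set(TableOutputFormat.OUTPUT_TABLE, "KafkaTable")

// Job is only used here to register the new-API output format on the configuration
val job = Job.getInstance(hbaseConf)
job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

// step 2: write every micro-batch with saveAsNewAPIHadoopDataset instead of saveAsHadoopDataset
records.foreachRDD { rdd =>
  rdd.map(SampleKafkaRecord.SampleToHbasePut)
     .saveAsNewAPIHadoopDataset(job.getConfiguration)
}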