How to save a Scala List to MongoDB using Spark

Date: 2017-02-21 01:54:44

Tags: mongodb scala apache-spark

So I have some Spark code that fetches documents from MongoDB, applies some transformations, and tries to store the results back into MongoDB.

The problem appears when I try to persist an object containing a List, using the functions below:

First, I generate some tuples with this transformation:

val usersRDD = rdd.flatMap( breakoutFileById ).distinct().groupByKey().mapValues(_.toList)

Then I use a custom mapToDocument function to convert each tuple's fields into a Document, and call saveToMongoDB:

usersRDD.map( mapToDocument ).saveToMongoDB()
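
For context, the pattern that fails looks roughly like the sketch below. The actual mapToDocument is not shown in the question, so the field names here are hypothetical; the key point is that a plain Scala List ends up inside the Document. (The scala.collection.immutable.$colon$colon in the stack trace that follows is the JVM name of ::, the cons class behind every non-empty Scala List.)

import org.bson.Document

// Hypothetical reconstruction of mapToDocument: the "files" field
// receives a scala.collection.immutable.List, for which the underlying
// Java driver has no registered codec.
def mapToDocument(tuple: (String, List[String])): Document = {
  val (userId, files) = tuple
  new Document("userId", userId)
    .append("files", files) // a Scala List here triggers the codec error
}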

I get the following error message:

org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class scala.collection.immutable.$colon$colon.
    at org.bson.codecs.configuration.CodecCache.getOrThrow(CodecCache.java:46)
    at org.bson.codecs.configuration.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:63)
    at org.bson.codecs.configuration.ChildCodecRegistry.get(ChildCodecRegistry.java:51)
    at org.bson.codecs.DocumentCodec.writeValue(DocumentCodec.java:174)
    at org.bson.codecs.DocumentCodec.writeMap(DocumentCodec.java:189)
    at org.bson.codecs.DocumentCodec.encode(DocumentCodec.java:131)
    at org.bson.codecs.DocumentCodec.encode(DocumentCodec.java:45)
    at org.bson.codecs.BsonDocumentWrapperCodec.encode(BsonDocumentWrapperCodec.java:63)
    at org.bson.codecs.BsonDocumentWrapperCodec.encode(BsonDocumentWrapperCodec.java:29)
    at com.mongodb.connection.InsertCommandMessage.writeTheWrites(InsertCommandMessage.java:101)
    at com.mongodb.connection.InsertCommandMessage.writeTheWrites(InsertCommandMessage.java:43)
    at com.mongodb.connection.BaseWriteCommandMessage.encodeMessageBodyWithMetadata(BaseWriteCommandMessage.java:129)
    at com.mongodb.connection.RequestMessage.encodeWithMetadata(RequestMessage.java:160)
    at com.mongodb.connection.WriteCommandProtocol.sendMessage(WriteCommandProtocol.java:212)
    at com.mongodb.connection.WriteCommandProtocol.execute(WriteCommandProtocol.java:101)
    at com.mongodb.connection.InsertCommandProtocol.execute(InsertCommandProtocol.java:67)
    at com.mongodb.connection.InsertCommandProtocol.execute(InsertCommandProtocol.java:37)
    at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:159)
    at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:286)
    at com.mongodb.connection.DefaultServerConnection.insertCommand(DefaultServerConnection.java:115)
    at com.mongodb.operation.MixedBulkWriteOperation$Run$2.executeWriteCommandProtocol(MixedBulkWriteOperation.java:455)
    at com.mongodb.operation.MixedBulkWriteOperation$Run$RunExecutor.execute(MixedBulkWriteOperation.java:646)
    at com.mongodb.operation.MixedBulkWriteOperation$Run.execute(MixedBulkWriteOperation.java:401)
    at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:179)
    at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:168)
    at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:230)
    at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:221)
    at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:168)
    at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:74)
    at com.mongodb.Mongo.execute(Mongo.java:781)
    at com.mongodb.Mongo$2.execute(Mongo.java:764)
    at com.mongodb.MongoCollectionImpl.insertMany(MongoCollectionImpl.java:323)
    at com.mongodb.MongoCollectionImpl.insertMany(MongoCollectionImpl.java:311)
    at com.mongodb.spark.MongoSpark$$anonfun$save$1$$anonfun$apply$1$$anonfun$apply$2.apply(MongoSpark.scala:132)
    at com.mongodb.spark.MongoSpark$$anonfun$save$1$$anonfun$apply$1$$anonfun$apply$2.apply(MongoSpark.scala:132)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at com.mongodb.spark.MongoSpark$$anonfun$save$1$$anonfun$apply$1.apply(MongoSpark.scala:132)
    at com.mongodb.spark.MongoSpark$$anonfun$save$1$$anonfun$apply$1.apply(MongoSpark.scala:131)
    at com.mongodb.spark.MongoConnector$$anonfun$withCollectionDo$1.apply(MongoConnector.scala:186)
    at com.mongodb.spark.MongoConnector$$anonfun$withCollectionDo$1.apply(MongoConnector.scala:184)
    at com.mongodb.spark.MongoConnector$$anonfun$withDatabaseDo$1.apply(MongoConnector.scala:171)
    at com.mongodb.spark.MongoConnector$$anonfun$withDatabaseDo$1.apply(MongoConnector.scala:171)
    at com.mongodb.spark.MongoConnector.withMongoClientDo(MongoConnector.scala:154)
    at com.mongodb.spark.MongoConnector.withDatabaseDo(MongoConnector.scala:171)
    at com.mongodb.spark.MongoConnector.withCollectionDo(MongoConnector.scala:184)
    at com.mongodb.spark.MongoSpark$$anonfun$save$1.apply(MongoSpark.scala:131)
    at com.mongodb.spark.MongoSpark$$anonfun$save$1.apply(MongoSpark.scala:130)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

If I remove the List in the mapToDocument function (i.e. don't put that field in the Document), everything works fine. I have searched the internet for similar problems and could not find a suitable solution. Does anyone know how to fix it?

Thanks in advance

1 Answer:

Answer 0 (score: 3)

From the unsupported types section of the documentation:


Some Scala types (e.g. Lists) are unsupported and should be converted to their Java equivalents. To convert from Scala into native types, include the following import statement so that the .asJava method is available.

import scala.collection.JavaConverters._
import org.bson.Document
import com.mongodb.spark.MongoSpark

val documents = sc.parallelize(
  Seq(new Document("fruits", List("apples", "oranges", "pears").asJava))
)
MongoSpark.save(documents)
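
Applied to the pipeline from the question, a minimal sketch of the fix (using the same hypothetical field names as above) is to call .asJava on the List before putting it into the Document:

import scala.collection.JavaConverters._
import org.bson.Document
import com.mongodb.spark._ // provides the saveToMongoDB() helper on RDDs

// Hypothetical fix: convert the Scala List to a java.util.List so the
// Java driver can encode it.
def mapToDocument(tuple: (String, List[String])): Document = {
  val (userId, files) = tuple
  new Document("userId", userId)
    .append("files", files.asJava) // java.util.List encodes fine
}

usersRDD.map( mapToDocument ).saveToMongoDB()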

The reason they are unsupported is that the Mongo Spark Connector uses the Mongo Java driver underneath, so the asynchronous Scala driver would make no sense in this context. It does mean, however, that RDDs must be mapped to supported Java types. When you use Datasets instead, these conversions are done automatically for you.
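
For completeness, a sketch of the Dataset route under the same assumptions (UserFiles is a hypothetical case class, and spark is an existing SparkSession configured with the connector's output URI). Here the connector derives the schema from the case class and writes the Seq field as a BSON array, with no manual conversion:

import com.mongodb.spark.MongoSpark

// Hypothetical case class mirroring the grouped (userId, files) tuples.
case class UserFiles(userId: String, files: Seq[String])

import spark.implicits._

val ds = usersRDD
  .map { case (userId, files) => UserFiles(userId, files) }
  .toDS()

MongoSpark.save(ds) // the Seq becomes a BSON array automatically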