Using Neo4j with Apache Spark

Date: 2015-03-06 10:31:13

Tags: java serialization apache-spark neo4j

I am trying to use Neo4j with Apache Spark Streaming, but serializability is getting in the way.

Basically, I want Apache Spark to parse and bundle my data in real time. Once the data has been bundled, it should be stored in a Neo4j database. However, I get this error:

org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1264)
    at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:297)
    at org.apache.spark.api.java.JavaPairRDD.foreach(JavaPairRDD.scala:45)
    at twoGrams.Main$4.call(Main.java:102)
    at twoGrams.Main$4.call(Main.java:1)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:282)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:282)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:41)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:32)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.NotSerializableException: org.neo4j.kernel.EmbeddedGraphDatabase
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:73)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
    ... 17 more

Here is my code:

output is a stream of type JavaPairDStream<String, ArrayList<String>>:

output.foreachRDD(
        new Function2<JavaPairRDD<String, ArrayList<String>>, Time, Void>() {

            @Override
            public Void call(
                    JavaPairRDD<String, ArrayList<String>> rdd,
                    Time time) throws Exception {

                rdd.foreach(
                        new VoidFunction<Tuple2<String, ArrayList<String>>>() {

                            @Override
                            public void call(
                                    Tuple2<String, ArrayList<String>> tuple)
                                    throws Exception {
                                // graphDB is a field of the enclosing class; this anonymous
                                // function (and graphDB with it) therefore gets serialized and
                                // shipped to the workers, which triggers the exception above.
                                try (Transaction tx = graphDB.beginTx()) {
                                    if (Neo4jOperations.getHMacFromValue(graphDB, tuple._1) != null)
                                        System.out.println("Already in database: " + tuple._1);
                                    else
                                        Neo4jOperations.createHMac(graphDB, tuple._1);
                                    tx.success();
                                }
                            }
                        });
                return null;
            }
        });

The Neo4jOperations class:

public class Neo4jOperations {

    // Returns the HMac node with the given value, or null if no such node exists yet.
    public static Node getHMacFromValue(GraphDatabaseService graphDB, String value) {
        try (ResourceIterator<Node> HMacs = graphDB.findNodesByLabelAndProperty(DynamicLabel.label("HMac"), "value", value).iterator()) {
            return HMacs.hasNext() ? HMacs.next() : null;
        }
    }

    // Creates a new HMac node carrying the value and a creation timestamp.
    public static void createHMac(GraphDatabaseService graphDB, String value) {
        Node HMac = graphDB.createNode(DynamicLabel.label("HMac"));
        HMac.setProperty("value", value);
        HMac.setProperty("time", new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
    }
}

I know I would have to serialize the Neo4jOperations class, but I can't figure out how. Or is there another way to achieve this?

2 Answers:

Answer 0 (score: 1):

When connections to external systems or other non-serializable objects are involved, you can create those objects directly on the workers and avoid the need to serialize them at all.

Given: val stream: DStream = ???

stream.foreachRDD { rdd =>
  rdd.foreachPartition { iter =>
    val nonSerializableConn = new NonSerializableDriver(ip, port)
    iter.foreach(elem => nonSerializableConn.doStuff(elem))
  }
}

This pattern amortizes the object creation by performing it only once per partition (which typically contains many elements) instead of once per element.
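For reference, a rough Java rendering of this first pattern, written against the question's JavaPairRDD<String, ArrayList<String>>, might look like the sketch below. NonSerializableDriver is the same placeholder as in the Scala snippet and is stubbed out here only so the example is self-contained; it is not a real API.

import java.util.ArrayList;
import java.util.Iterator;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;

import scala.Tuple2;

public class PartitionWriteSketch {

    // Stand-in for any non-serializable client or driver (illustration only).
    static class NonSerializableDriver {
        NonSerializableDriver(String ip, int port) { /* open the connection */ }
        void doStuff(Tuple2<String, ArrayList<String>> elem) { /* write one element */ }
        void close() { /* release the connection */ }
    }

    public static void writePartitions(JavaPairRDD<String, ArrayList<String>> rdd,
                                       final String ip, final int port) {
        rdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, ArrayList<String>>>>() {
            @Override
            public void call(Iterator<Tuple2<String, ArrayList<String>>> partition) throws Exception {
                // Created on the worker, once per partition, so nothing
                // non-serializable is ever captured by the closure.
                NonSerializableDriver conn = new NonSerializableDriver(ip, port);
                try {
                    while (partition.hasNext()) {
                        conn.doStuff(partition.next());
                    }
                } finally {
                    conn.close();
                }
            }
        });
    }
}

In the question's code this would mean replacing rdd.foreach(...) with a foreachPartition call of this shape.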

In a long-running process like Spark Streaming, we can reduce the overhead even further by keeping a per-VM cache of these resources:

stream.foreachRDD { rdd =>
  rdd.foreachPartition { iter =>
    val nonSerializableConn = NonSerializableDriver.getConnection(ip, port)
    iter.foreach(elem => nonSerializableConn.doStuff(elem))
  }
}

In this latter case, you also take on connection management, and the resources must be closed when the VM shuts down.
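Applied back to the question's Neo4j setup, one way to get such a per-VM cache is a lazily initialized holder around the embedded GraphDatabaseService plus a JVM shutdown hook that closes it. The GraphHolder class and its databasePath argument below are made up for illustration, and the sketch assumes the worker JVM is allowed to open the embedded store locally; an embedded Neo4j store can only be opened by one JVM at a time, so on a real cluster you would more likely talk to a Neo4j server over its remote API instead.

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

// Hypothetical helper: one embedded GraphDatabaseService per worker JVM,
// created lazily and shut down when the JVM terminates.
public class GraphHolder {

    private static GraphDatabaseService graphDb;

    public static synchronized GraphDatabaseService get(String databasePath) {
        if (graphDb == null) {
            graphDb = new GraphDatabaseFactory().newEmbeddedDatabase(databasePath);
            // Close the resource when the VM terminates, as described above.
            Runtime.getRuntime().addShutdownHook(new Thread() {
                @Override
                public void run() {
                    graphDb.shutdown();
                }
            });
        }
        return graphDb;
    }
}

Inside the foreachPartition callback you would then call GraphHolder.get(databasePath) and reuse the existing Neo4jOperations methods, instead of referencing a graphDB field from the driver program.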

Answer 1 (score: 0):

There is no way to serialize the transitive dependencies contained in the Neo4jOperations class. Unfortunately, Spark simply does not work that way.

The problem is that the Neo4j traversal API cannot be serialized or bundled up and shipped to Spark. Even if you tried to bundle Spark into Neo4j, you would run into dependency conflicts over the Jetty servlet version.

This is why I created Neo4j Mazerunner. Until a Neo4j Spark connector is created that extends the base classes of Spark's RDD package, there is no easy way to import data from Neo4j into Spark's runtime.

See Couchbase's Spark Connector to get an idea of what is involved in doing this.

Mazerunner does not support streaming yet, but I plan to add that in the future.