Loading a Spark GraphFrame from the Databricks service into a Neo4j DB in the cloud

Date: 2018-06-26 08:11:57

Tags: python scala apache-spark neo4j

I am using the Databricks service through my Azure subscription. I have also deployed a Neo4j DB through Azure. I am trying to load the Neo4j DB from a GraphFrame object that I built on the Databricks platform.

I have downloaded the spark-neo4j connector and tried to run the following code:

%scala
   import org.neo4j.spark._
   import org.graphframes._

%python
   import pandas as pd
   from pyspark.sql.types import *
   from pyspark.sql import SQLContext
   from graphframes import *

#### create the GraphFrame
   vertex = [("a", "Alice", 34),
             ("b", "Bob", 36),
             ("c", "Charlie", 30)]
   labelsV = ["id", "name", "age"]

   v = pd.DataFrame.from_records(vertex, columns=labelsV)

## Convert into a Spark DataFrame
   vertex = spark.createDataFrame(v)

   Edges = [("a", "b", "friend"),
            ("b", "c", ""),
            ("c", "b", "follow")]
   labelsE = ["src", "dst", "relationship"]
   e = pd.DataFrame.from_records(Edges, columns=labelsE)
## Convert into a Spark DataFrame
   Edges = spark.createDataFrame(e)

## Build the GraphFrame and register temp views so the Scala cells below can read the data
   g = GraphFrame(vertex, Edges)
   vertex.createOrReplaceTempView("vertices")
   Edges.createOrReplaceTempView("edges")
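Since the connector's write API lives on the Scala side, the Python DataFrames are handed over through the temp views registered above. A small sketch of rebuilding the GraphFrame in a Scala cell (the view names `vertices` and `edges` are the ones assumed above):

%scala
   import org.graphframes.GraphFrame

   // Rebuild the GraphFrame in Scala from the temp views registered
   // in the Python cell above.
   val vertices = spark.table("vertices")
   val edges    = spark.table("edges")
   val g        = GraphFrame(vertices, edges)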

I do not understand how to configure the Neo4j host from the Databricks service. I tried:

%scala
   dbms.connector.http.enabled=true
   dbms.connector.http.listen_address="<url>"

I get the following error:

command-2039977334603380:47: error: not found: value dbms
val $ires14 = dbms.connector.http.enabled

What am I doing wrong? Thanks.
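Update: as far as I can tell, the `dbms.connector.*` lines are server settings that belong in the Neo4j instance's own `neo4j.conf`, not in a notebook cell, which is why the Scala REPL reports `not found: value dbms`. The connector instead takes its connection details from the Spark configuration. Below is a minimal sketch of the write path as I understand it, assuming the neo4j-spark-connector is attached to the cluster, Bolt is reachable on the Azure instance, and using placeholder values for the host, password, node label (`Person`), and relationship type (`CONNECTED`):

%scala
   import org.neo4j.spark.Neo4jDataFrame

   // Connection settings read by the connector; these are normally set once in
   // the Databricks cluster's Spark config rather than in a cell, e.g.:
   //   spark.neo4j.bolt.url      bolt://<neo4j-host>:7687
   //   spark.neo4j.bolt.user     neo4j
   //   spark.neo4j.bolt.password <password>

   // Read the edge DataFrame registered as a temp view in the Python cell.
   val edges = spark.table("edges")

   // MERGE each row into Neo4j as
   // (:Person {src: ...})-[:CONNECTED {relationship: ...}]->(:Person {dst: ...}).
   Neo4jDataFrame.mergeEdgeList(
     sc,
     edges,
     ("Person", Seq("src")),             // source node label and key column
     ("CONNECTED", Seq("relationship")), // relationship type and property column
     ("Person", Seq("dst"))              // target node label and key column
   )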

0 Answers:

No answers