I am basically a Java developer, and now I have an opportunity to work on Spark. I have gone through the basics of the Spark API, such as SparkConf, SparkContext, RDD, SQLContext, DataFrame, and Dataset, and I am able to perform some simple transformations using RDDs and SQL. However, when I tried to write a sample GraphFrames application in Java, I could not get it to work. I have gone through many YouTube tutorials, forums, and Stack Overflow threads, but nowhere could I find a direct suggestion or solution. The problem occurs when I try to create a GraphFrame object. I have downloaded the required jar (graphframes-0.2.0-spark2.0-s_2.11.jar), but I am still facing the issue. Since I am new to Spark I cannot move forward, so it would be very helpful if someone could point me in the right direction. Thanks in advance. The exception I am facing is: The constructor GraphFrame(DataFrame, DataFrame) is undefined.
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.storage.StorageLevel;
import org.graphframes.GraphFrame;
import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.databind.JsonMappingException;
public class SparkJavaGraphFrameOne {
    public static void main(String[] args) throws JsonParseException, JsonMappingException, IOException {
        SparkConf conf = new SparkConf().setAppName("test").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

        JavaRDD<Row> verRow = sc.parallelize(Arrays.asList(RowFactory.create(1, "A"), RowFactory.create(2, "B")));
        JavaRDD<Row> edgRow = sc.parallelize(Arrays.asList(RowFactory.create(1, 2, "Edge")));

        List<StructField> verFields = new ArrayList<StructField>();
        verFields.add(DataTypes.createStructField("id", DataTypes.IntegerType, true));
        verFields.add(DataTypes.createStructField("name", DataTypes.StringType, true));

        List<StructField> edgFields = new ArrayList<StructField>();
        edgFields.add(DataTypes.createStructField("fromId", DataTypes.IntegerType, true));
        edgFields.add(DataTypes.createStructField("toId", DataTypes.IntegerType, true));
        edgFields.add(DataTypes.createStructField("name", DataTypes.StringType, true));

        StructType verSchema = DataTypes.createStructType(verFields);
        StructType edgSchema = DataTypes.createStructType(edgFields);

        DataFrame verDF = sqlContext.createDataFrame(verRow, verSchema);
        DataFrame edgDF = sqlContext.createDataFrame(edgRow, edgSchema);

        GraphFrame g = new GraphFrame(verDF, edgDF); // compile error: The constructor GraphFrame(DataFrame, DataFrame) is undefined
        g.vertices().show();
        g.edges().show();
        g.persist(StorageLevel.MEMORY_AND_DISK());
    }
}
Answer 0 (score: 7)
I have written a sample program in Java using Spark 2.0.0 and GraphFrames 0.2.0. The program is based on the example given at http://graphframes.github.io/quick-start.html#start-using-graphframes. Hope this helps.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.abaghel.examples.spark</groupId>
    <artifactId>spark-graphframe</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-graphx_2.11</artifactId>
            <version>2.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.0.0</version>
        </dependency>
        <dependency>
            <groupId>graphframes</groupId>
            <artifactId>graphframes</artifactId>
            <version>0.2.0-spark2.0-s_2.11</version>
        </dependency>
    </dependencies>
    <repositories>
        <!-- list of other repositories -->
        <repository>
            <id>SparkPackagesRepo</id>
            <url>http://dl.bintray.com/spark-packages/maven</url>
        </repository>
    </repositories>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
SparkGraphFrameSample.java
package com.abaghel.examples.spark.graphframe;

import java.util.ArrayList;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.graphframes.GraphFrame;
import org.graphframes.lib.PageRank;

/**
 * Sample application that shows how to create a GraphFrame, query it, and run the PageRank algorithm.
 *
 * @author abaghel
 */
public class SparkGraphFrameSample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("SparkGraphFrameSample")
                .config("spark.sql.warehouse.dir", "/file:C:/temp")
                .master("local[2]")
                .getOrCreate();

        // Create a Vertex DataFrame with unique ID column "id"
        List<User> uList = new ArrayList<User>() {
            {
                add(new User("a", "Alice", 34));
                add(new User("b", "Bob", 36));
                add(new User("c", "Charlie", 30));
            }
        };
        Dataset<Row> verDF = spark.createDataFrame(uList, User.class);

        // Create an Edge DataFrame with "src" and "dst" columns
        List<Relation> rList = new ArrayList<Relation>() {
            {
                add(new Relation("a", "b", "friend"));
                add(new Relation("b", "c", "follow"));
                add(new Relation("c", "b", "follow"));
            }
        };
        Dataset<Row> edgDF = spark.createDataFrame(rList, Relation.class);

        // Create a GraphFrame
        GraphFrame gFrame = new GraphFrame(verDF, edgDF);

        // Get the in-degree of each vertex
        gFrame.inDegrees().show();

        // Count the number of "follow" connections in the graph
        long count = gFrame.edges().filter("relationship = 'follow'").count();

        // Run the PageRank algorithm and show the results
        PageRank pRank = gFrame.pageRank().resetProbability(0.01).maxIter(5);
        pRank.run().vertices().select("id", "pagerank").show();

        // Stop
        spark.stop();
    }
}
User.java
package com.abaghel.examples.spark.graphframe;

/**
 * User class
 *
 * @author abaghel
 */
public class User {
    private String id;
    private String name;
    private int age;

    public User() {
    }

    public User(String id, String name, int age) {
        super();
        this.id = id;
        this.name = name;
        this.age = age;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}
Relation.java
package com.abaghel.examples.spark.graphframe;

/**
 * Relation class
 *
 * @author abaghel
 */
public class Relation {
    private String src;
    private String dst;
    private String relationship;

    public Relation() {
    }

    public Relation(String src, String dst, String relationship) {
        super();
        this.src = src;
        this.dst = dst;
        this.relationship = relationship;
    }

    public String getSrc() {
        return src;
    }

    public void setSrc(String src) {
        this.src = src;
    }

    public String getDst() {
        return dst;
    }

    public void setDst(String dst) {
        this.dst = dst;
    }

    public String getRelationship() {
        return relationship;
    }

    public void setRelationship(String relationship) {
        this.relationship = relationship;
    }
}
Console output
16/08/27 22:34:45 INFO DAGScheduler: Job 10 finished: show at SparkGraphFrameSample.java:56, took 0.938910 s
16/08/27 22:34:45 INFO CodeGenerator: Code generated in 6.599005 ms
+---+-------------------+
| id| pagerank|
+---+-------------------+
| a| 0.01|
| b|0.08763274109799998|
| c| 0.077926810699|
+---+-------------------+
Answer 1 (score: 0)
I don't know whether you have been able to solve the problem; I only just saw your question. If you are getting

Exception in thread "main" java.lang.NoClassDefFoundError: com/typesafe/scalalogging/slf4j/LazyLogging

then you need to add the following dependency to your pom.xml:

<dependency>
    <groupId>com.typesafe.scala-logging</groupId>
    <artifactId>scala-logging-slf4j_2.10</artifactId>
    <version>2.1.2</version>
</dependency>
I ran into the same problem, and adding this dependency resolved it for me.
Answer 2 (score: 0)
I was able to reproduce the problem in 0.5.0-spark2.1-s_2.11 (on consecutive runs), while it works fine in 0.4.0-spark2.1-s_2.11.
Answer 3 (score: 0)
To work around the GraphFrame constructor problem, try the static factory method instead:

GraphFrame gf = GraphFrame.apply(verDF, edgDF);
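For context, here is a minimal sketch of how the asker's setup could look against the Spark 2.x Java API, where `org.apache.spark.sql.DataFrame` no longer exists as a Java class and `Dataset<Row>` is used instead. This is an illustration under two assumptions: that GraphFrames (e.g. 0.2.0-spark2.0-s_2.11) is on the classpath, and that the column renames shown (GraphFrames expects the vertex ID column to be named "id" and the edge endpoint columns "src" and "dst", per its user guide) apply to the asker's data. It is not a verbatim fix of the original program.

```java
import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import org.graphframes.GraphFrame;

public class GraphFrameApplySketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("test")
                .master("local")
                .getOrCreate();

        // GraphFrames requires the vertex ID column to be called "id",
        // and the edge endpoint columns to be called "src" and "dst"
        // (the asker's "fromId"/"toId" would fail at runtime even after
        // the constructor issue is fixed).
        StructType verSchema = DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("id", DataTypes.IntegerType, true),
                DataTypes.createStructField("name", DataTypes.StringType, true)));
        StructType edgSchema = DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("src", DataTypes.IntegerType, true),
                DataTypes.createStructField("dst", DataTypes.IntegerType, true),
                DataTypes.createStructField("name", DataTypes.StringType, true)));

        // In Spark 2.x the Java-facing type is Dataset<Row>, not DataFrame
        Dataset<Row> verDF = spark.createDataFrame(Arrays.asList(
                RowFactory.create(1, "A"),
                RowFactory.create(2, "B")), verSchema);
        Dataset<Row> edgDF = spark.createDataFrame(Arrays.asList(
                RowFactory.create(1, 2, "Edge")), edgSchema);

        // The static factory sidesteps the "constructor undefined" compile error
        GraphFrame gf = GraphFrame.apply(verDF, edgDF);
        gf.vertices().show();
        gf.edges().show();

        spark.stop();
    }
}
```

Note this requires a Spark 2.x cluster or local runtime to actually execute; it will not compile against a Spark 1.6 classpath.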