1.5.1 | Spark Streaming | NullPointerException with SQL createDataFrame

Date: 2015-11-26 09:49:17

Tags: apache-spark apache-spark-sql spark-streaming

I am using Spark 1.5.1.

In the streaming context, I obtain the SQLContext as follows:

SQLContext sqlContext = SQLContext.getOrCreate(records.context());
DataFrame dataFrame = sqlContext.createDataFrame(records, SchemaRecord.class);
dataFrame.registerTempTable("records");

Here records is a JavaRDD<SchemaRecord>, where each record has the following structure:

public class SchemaRecord implements Serializable {

    private static final long serialVersionUID = 1L;
    private String msisdn;
    private String application_type;
    //private long uplink_bytes = 0L;
}

Everything works fine as long as the fields, like msisdn and application_type, are all of type String.

When I add another field of long type, like uplink_bytes, I get the following NullPointerException in createDataFrame:
Exception in thread "main" java.lang.NullPointerException
at org.spark-project.guava.reflect.TypeToken.method(TypeToken.java:465)
at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:103)
at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:102)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.sql.catalyst.JavaTypeInference$.org$apache$spark$sql$catalyst$JavaTypeInference$$inferDataType(JavaTypeInference.scala:102)
at org.apache.spark.sql.catalyst.JavaTypeInference$.inferDataType(JavaTypeInference.scala:47)
at org.apache.spark.sql.SQLContext.getSchema(SQLContext.scala:1031)
at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:519)
at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:548)

Please advise.

2 answers:

Answer 0 (score: 4):

Your problem is probably that your model class is not a clean JavaBean. Spark currently has no code to handle properties that have a setter but no getter method. You can try something like this to check how Spark sees your class:

import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// Prints every bean property Spark will see, with its getter and setter;
// a missing method prints as "null". Note that Introspector.getBeanInfo
// throws a checked IntrospectionException.
PropertyDescriptor[] props = Introspector.getBeanInfo(YourClass.class).getPropertyDescriptors();
for (PropertyDescriptor prop : props) {
    System.out.println(prop.getDisplayName());
    System.out.println("\t" + prop.getReadMethod());
    System.out.println("\t" + prop.getWriteMethod());
}
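As an illustration, for a hypothetical bean that declares a setUplinkBytes(long) setter without a corresponding getter, the loop above would print something like:

uplinkBytes
	null
	public void SchemaRecord.setUplinkBytes(long)

The null read method is what Spark later hands to Guava's TypeToken.method(...), producing the stack trace shown in the question.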

The Introspector also reports fields that have only a setter as properties, and it is exactly these setter-only properties, with their null read methods, that throw the NullPointerException in Spark.
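For reference, here is a minimal sketch of what a clean JavaBean version of the SchemaRecord above could look like, with a matching getter and setter for every field (assuming uplink_bytes is meant to be a long, as in the commented-out line in the question):

import java.io.Serializable;

// A clean JavaBean: every field has both a getter and a setter,
// so Spark's bean introspection can infer the full schema.
public class SchemaRecord implements Serializable {
    private static final long serialVersionUID = 1L;
    private String msisdn;
    private String application_type;
    private long uplink_bytes = 0L;

    public String getMsisdn() { return msisdn; }
    public void setMsisdn(String msisdn) { this.msisdn = msisdn; }

    public String getApplicationType() { return application_type; }
    public void setApplicationType(String applicationType) { this.application_type = applicationType; }

    public long getUplinkBytes() { return uplink_bytes; }
    public void setUplinkBytes(long uplinkBytes) { this.uplink_bytes = uplinkBytes; }
}

Note that Spark derives the column names from the getter names, so these fields would surface as applicationType and uplinkBytes in the resulting DataFrame.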

Answer 1 (score: 1):

Here is what I tried, and it worked:

Here is the POJO that stores the String, Long, and int values:

import java.io.*;

public class TestingSQLPerson implements Serializable {

    // Data in a comma-separated file:
    // Sumit,20,123455
    // Ramit,40,12345

    private String name;
    private int age;
    private Long testL;

    public Long getTestL() {
        return testL;
    }

    public void setTestL(Long testL) {
        this.testL = testL;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

}

And here is the Spark SQL code in Java:

import org.apache.spark.*;
import org.apache.spark.sql.*;
import org.apache.spark.api.java.*;
import org.apache.spark.api.java.function.Function;

public class TestingLongSQLTypes {

    public static void main(String[] args) {

        SparkConf javaConf = new SparkConf();
        javaConf.setAppName("Test Long Types");
        JavaSparkContext javaCtx = new JavaSparkContext(javaConf);

        SQLContext sqlContext = new org.apache.spark.sql.SQLContext(javaCtx);

        String dataFile = "file:///home/ec2-user/softwares/crime-data/testfile.txt";
        JavaRDD<TestingSQLPerson> people = javaCtx.textFile(dataFile).map(
            new Function<String, TestingSQLPerson>() {
                public TestingSQLPerson call(String line) throws Exception {
                    String[] parts = line.split(",");

                    TestingSQLPerson person = new TestingSQLPerson();
                    person.setName(parts[0]);
                    person.setAge(Integer.parseInt(parts[1].trim()));
                    person.setTestL(Long.parseLong(parts[2].trim()));

                    return person;
                }
            });

        // Apply a schema to an RDD of JavaBeans and register it as a table.
        DataFrame schemaPeople = sqlContext.createDataFrame(people, TestingSQLPerson.class);
        schemaPeople.registerTempTable("TestingSQLPerson");

        schemaPeople.printSchema();
        schemaPeople.show();
    }

}

All of the above works, and at the end I can see the results on the driver console without any exceptions. @Yukti - this should work in your case too, as long as you follow the same steps defined in the example above. If there are any deviations, let me know and I can try to help you.
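For completeness, once the temp table is registered you can also run SQL against it. A small hypothetical query on the table above (not part of the original code), showing the Long column behaving normally:

// Query the registered temp table; testL is the Long column.
DataFrame bigLongs = sqlContext.sql(
    "SELECT name, testL FROM TestingSQLPerson WHERE testL > 100000");
bigLongs.show();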