Error converting CSV to JavaRDD

Time: 2014-11-03 16:19:22

Tags: csv apache-spark

I can't figure out why I keep getting a NoSuchMethodError when storing CSV data in a JavaRDD. I defined the following class, whose instances represent the records in the CSV file.

public class Historical_Data_Record implements Serializable {
    String tripduration;
    String starttime;
    String stoptime;
    String start_station_id;
    String start_station_name;
    long start_station_latitude;
    long start_station_longitude;
    String stop_station_id;
    String stop_station_name;
    long stop_station_latitude;
    long stop_station_longitude;
    String bikeid;
    String usertype;
    String birth_year;
    int gender; // 1 = male, 0 = female
}

I then have the following code, which creates Historical_Data_Record objects by reading the data from the CSV and storing them in a JavaRDD.

public static final JavaRDD<Historical_Data_Record> get_Historical_Data(JavaSparkContext sc, String filename){
    // get the data using the configuration parameters 
    final JavaRDD<Historical_Data_Record> rdd_records = sc.textFile(filename).map(
        new Function<String, Historical_Data_Record>() {
            private static final long serialVersionUID = 1L;

            public Historical_Data_Record call(String line) throws Exception {
                String[] fields = line.split(",");

                Historical_Data_Record sd = new Historical_Data_Record();           
                sd.tripduration = fields[0];
                sd.starttime = fields[1];
                sd.stoptime = fields[2];
                sd.start_station_id = fields[3];
                sd.start_station_name = fields[4];
                sd.start_station_latitude = Long.valueOf(fields[5]).longValue();
                sd.start_station_longitude = Long.valueOf(fields[6]).longValue();
                sd.stop_station_id = fields[7]; 
                sd.stop_station_name = fields[8];
                sd.stop_station_latitude = Long.valueOf(fields[9]).longValue();
                sd.stop_station_longitude = Long.valueOf(fields[10]).longValue();
                sd.bikeid = fields[11];
                sd.usertype = fields[12];
                sd.birth_year = fields[13];
                sd.gender = Integer.parseInt(fields[14]);
                return sd;
            }
        });

    return rdd_records;

}

But when I run the following code,

    JavaRDD<Historical_Data_Record> aData = Spark.get_Historical_Data(sc, filename);

where sc is the JavaSparkContext and filename is just a String containing the file path, I get the following error:

2014-11-03 11:04:42.959 java[5856:1b03] Unable to load realm info from SCDynamicStore
14/11/03 11:04:43 WARN storage.BlockManager: Putting block broadcast_0 failed
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
    at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
    at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
    at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
    at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
    at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
    at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
    at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
    at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
    at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
    at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
    at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
    at org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
    at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
    at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
    at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:545)
    at org.apache.spark.SparkContext.textFile(SparkContext.scala:457)
    at org.apache.spark.api.java.JavaSparkContext.textFile(JavaSparkContext.scala:164)
    at com.big_data.citibike_project.Spark.get_Historical_Data(Spark.java:19)
    at com.big_data.citibike_project.Main.main(Main.java:18)

At first I thought it might be because of the header row, so I removed it. But I got exactly the same error. Can someone help me?

1 Answer:

Answer 0 (score: 0)

Spark uses a fairly old version of Guava (14.0.1), and it looks like one of your dependencies is pulling in a newer, incompatible version. Try pinning the Guava version to the one your Spark version uses.
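
As a minimal sketch, assuming a Maven build, you could force every transitive request for Guava to resolve to the version Spark was compiled against (14.0.1 here, taken from above; adjust it to whatever your Spark distribution actually ships):

    <!-- pom.xml: pin Guava so transitive dependencies cannot
         override the version Spark itself depends on -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>com.google.guava</groupId>
          <artifactId>guava</artifactId>
          <version>14.0.1</version>
        </dependency>
      </dependencies>
    </dependencyManagement>

To find out which dependency pulls in the newer Guava in the first place, mvn dependency:tree -Dincludes=com.google.guava will print every path to the artifact.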

This thread may also be of interest - http://apache-spark-user-list.1001560.n3.nabble.com/Is-Spark-1-1-0-incompatible-with-Hive-td17364.html