Filtering a custom data structure in Spark

Posted: 2019-05-10 18:19:59

Tags: java apache-spark hdfs rdd distributed-computing

I am trying to read a CSV file into a JavaRDD. To do this, I wrote the following code:

SparkConf conf = new SparkConf().setAppName("NameOfApp").setMaster("spark://Ip here:7077");
JavaSparkContext sc = new JavaSparkContext(conf);

JavaRDD<CurrencyPair> rdd_records = sc.textFile(System.getProperty("user.dir") + "/data/data.csv", 2).map(
        new Function<String, CurrencyPair>() {
            public CurrencyPair call(String line) throws Exception {
                String[] fields = line.split(",");
                CurrencyPair sd = new CurrencyPair(Integer.parseInt(fields[0].trim()), Double.parseDouble(fields[1].trim()),
                        Double.parseDouble(fields[2].trim()), Double.parseDouble(fields[3]), new Date(fields[4]));
                return sd;
            }
        }
);

My data file looks like this:

1,0.034968,212285,7457.23,"2019-03-08 18:36:18"

To check whether the data was loaded correctly, I tried to print some of it:

System.out.println("Count: " + rdd_records.count());
List<CurrencyPair> list = rdd_records.top(5);
System.out.println(list.toString());

But I get the following error on both of those output lines. I also tried them separately, rather than printing the count and the list at the same time.

Caused by: java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.rdd.MapPartitionsRDD.f of type scala.Function3 in instance of org.apache.spark.rdd.MapPartitionsRDD

My custom class looks like this:

public class CurrencyPair implements Serializable {

private int id;
private double value;
private double baseVolume;
private double quoteVolume;
private Date timeStamp;

public CurrencyPair(int id, double value, double baseVolume, double quoteVolume, Date timeStamp) {
    this.id = id;
    this.value = value;
    this.baseVolume = baseVolume;
    this.quoteVolume = quoteVolume;
    this.timeStamp = timeStamp;
}

public int getId() {
    return id;
}

public void setId(int id) {
    this.id = id;
}

public double getValue() {
    return value;
}

public void setValue(double value) {
    this.value = value;
}

public double getBaseVolume() {
    return baseVolume;
}

public void setBaseVolume(double baseVolume) {
    this.baseVolume = baseVolume;
}

public double getQuoteVolume() {
    return quoteVolume;
}

public void setQuoteVolume(double quoteVolume) {
    this.quoteVolume = quoteVolume;
}

public Date getTimeStamp() {
    return timeStamp;
}

public void setTimeStamp(Date timeStamp) {
    this.timeStamp = timeStamp;
}
}

So I can't figure out what is going wrong here. What am I doing wrong?


EDIT: When I set the master to local instead of my own Spark master IP, it works fine. But I need to run it against my own master. So what is wrong with my master node?

1 answer:

Answer 0 (score: 0)

The problem is probably the anonymous class definition new Function<String, CurrencyPair>() {, which forces Spark to also try to serialize the enclosing class. Try a lambda instead:

rdd_records.map(
  (Function<String, CurrencyPair>) line -> {
    ...

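For completeness, a minimal sketch of what the full lambda version could look like, assuming the CurrencyPair class and CSV layout from the question. Parsing the timestamp with SimpleDateFormat is an added assumption, since the deprecated java.util.Date(String) constructor does not handle the "2019-03-08 18:36:18" format:

import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CsvToRdd {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("NameOfApp").setMaster("spark://Ip here:7077");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // A lambda that references no instance members is serialized on its own,
        // without dragging the enclosing class into the closure.
        JavaRDD<CurrencyPair> rdd_records = sc
                .textFile(System.getProperty("user.dir") + "/data/data.csv", 2)
                .map(line -> {
                    String[] fields = line.split(",");
                    // Strip the surrounding quotes from the timestamp column before parsing it.
                    String ts = fields[4].trim().replace("\"", "");
                    Date timeStamp = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(ts);
                    return new CurrencyPair(
                            Integer.parseInt(fields[0].trim()),
                            Double.parseDouble(fields[1].trim()),
                            Double.parseDouble(fields[2].trim()),
                            Double.parseDouble(fields[3].trim()),
                            timeStamp);
                });

        System.out.println("Count: " + rdd_records.count());
        sc.stop();
    }
}

The difference tends to show up only on a real cluster: with a local master everything runs in one JVM where the application classes are already on the classpath, whereas against a standalone master the mapping function has to be deserialized on the executors, which is likely why the original code appeared to work locally.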
Note: you could instead read the file as CSV and use the Dataset API with a bean encoder, skipping the manual parsing entirely.
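A minimal sketch of that Dataset-based alternative, assuming CurrencyPair is adapted to plain JavaBean conventions for Encoders.bean (a public no-arg constructor, and a timestamp field type the encoder supports, such as java.sql.Timestamp instead of java.util.Date). The schema string and the timestampFormat option below are illustrative assumptions, not taken from the question:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class CsvToDataset {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("NameOfApp")
                .master("spark://Ip here:7077")
                .getOrCreate();

        // Let Spark parse the CSV columns, including the quoted timestamp,
        // instead of splitting each line by hand.
        Dataset<CurrencyPair> ds = spark.read()
                .schema("id INT, value DOUBLE, baseVolume DOUBLE, quoteVolume DOUBLE, timeStamp TIMESTAMP")
                .option("timestampFormat", "yyyy-MM-dd HH:mm:ss")
                .csv(System.getProperty("user.dir") + "/data/data.csv")
                .as(Encoders.bean(CurrencyPair.class));

        System.out.println("Count: " + ds.count());
        ds.show(5);
        spark.stop();
    }
}

The CSV reader also strips the quotes around the timestamp column, which line.split(",") leaves in place.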