Spark Dataset error on groupBy over a Java POJO

Date: 2017-05-23 14:06:22

Tags: java apache-spark spark-dataframe

I have a set of data that is not in any format Apache Spark can use directly, so I created a class for it:

public class TBHits {

    int status;
    int trkID;

    public TBHits(int trkID, int status) {
        this.status = status;
        this.trkID = trkID;
    }

    public int getStatus() {
        return status;
    }

    public void setStatus(int status) {
        this.status = status;
    }

    public int getTrkID() {
        return trkID;
    }

    public void setTrkID(int trkID) {
        this.trkID = trkID;
    }

}

In the script that processes the data, I create a List:

private List<TBHits> tbHitList = new ArrayList<TBHits>();

While processing the data, I create TBHits objects and add them to the list:

...
...     
TBHits tbHits = new TBHits((bnkHits.getInt("trkID", i)), (bnkHits.getInt("status", i)));
tbHitList.add(tbHits);
...

After processing, I create a Dataset and run a basic show and a basic filter:

Dataset<Row> tbHitDf = spSession.createDataFrame(tbHitList, TBHits.class);
tbHitDf.show();
tbHitDf.filter(tbHitDf.col("trkID").gt(0)).show();

Everything works fine:

+------+-----+
|status|trkID|
+------+-----+
|     1|    0|
|     1|    0|
...
...

+------+-----+
|status|trkID|
+------+-----+
|     1|    1|
|     1|    1|
|     1|    1|

...
...

But when I try to use groupBy and count:

tbHitDf.groupBy("trkID").count().show();

I get an incomprehensible error:

Exception in thread "main" java.lang.StackOverflowError
    at java.io.ObjectStreamClass$WeakClassKey.<init>(ObjectStreamClass.java:2307)
    at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:322)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
...
...
...

But if I insert the data manually:

TBHits tb1 = new TBHits(1, 1);
TBHits tb2 = new TBHits(1, 2);
tbHitList.add(tb1);
tbHitList.add(tb2);

then the groupBy works fine. I don't understand why.

1 Answer:

Answer 0 (score: 1):

For future readers: the solution is to create the Dataset with a bean encoder instead of calling createDataFrame on the POJO class, i.e.

Encoder<TBHits> TBHitsEncoder = Encoders.bean(TBHits.class);
Dataset<TBHits> tbHitDf = spSession.createDataset(tbHitList, TBHitsEncoder);
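
For reference, here is a minimal self-contained sketch of the encoder-based flow. The TBHitsGroupBy class name and the local[*] session are illustrative stand-ins for the question's setup (the question's existing spSession works the same way). Note that, depending on the Spark version, Encoders.bean may also expect a public no-argument constructor on TBHits if rows are later deserialized back into beans.

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoder;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class TBHitsGroupBy {
    public static void main(String[] args) {
        // Illustrative local session; replace with the existing spSession in practice.
        SparkSession spark = SparkSession.builder()
                .appName("TBHitsGroupBy")
                .master("local[*]")
                .getOrCreate();

        List<TBHits> tbHitList = new ArrayList<>();
        tbHitList.add(new TBHits(1, 1));
        tbHitList.add(new TBHits(1, 1));
        tbHitList.add(new TBHits(2, 1));

        // The bean encoder derives the schema from the TBHits getters rather than
        // falling back to generic Java serialization (the ObjectOutputStream calls
        // visible in the question's stack trace).
        Encoder<TBHits> tbHitsEncoder = Encoders.bean(TBHits.class);
        Dataset<TBHits> tbHitDs = spark.createDataset(tbHitList, tbHitsEncoder);

        // groupBy/count now runs on the typed Dataset without the StackOverflowError.
        tbHitDs.groupBy("trkID").count().show();

        spark.stop();
    }
}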