Java: Iterating over custom objects in Spark

Date: 2018-06-14 06:54:44

Tags: java apache-spark drools

My code contains the following:

finalJoined is of type Dataset<Row>.

RuleParams and RuleOutputParams are Java POJO classes with setters and getters.

I call the Drools rule engine in the code below:

        List<Row> finalList = finalJoined.collectAsList();
        List<RuleOutputParams> ruleOutputParamsList = new ArrayList<RuleOutputParams>();
        Dataset<RuleOutputParams> rulesParamDS = null;
        Iterator<Row> iterator = finalList.iterator();
        while (iterator.hasNext()) {
            Row row = iterator.next();
            RuleParams ruleParams = new RuleParams();
            String outputString = (String) row.get(1);
            // setting up parameters
            System.out.println("Value of TXN DTTM is : " + row.getString(0));
            ruleParams.setTxnDttm(row.getString(0));
            ruleParams.setCisDivision(row.getString(1));
            System.out.println("Division is  : " + ruleParams.getCisDivision());
            ruleParams.setTxnVol(row.getInt(2));
            System.out.println("TXN Volume is  : " + ruleParams.getTxnVol());
            ruleParams.setTxnAmt(row.getInt(3));
            System.out.println("TXN Amount is  : " + ruleParams.getTxnAmt());
            ruleParams.setCurrencyCode(row.getString(4));
            ruleParams.setAcctNumberTypeCode(row.getString(5));
            ruleParams.setAccountNumber(row.getLong(6));
            ruleParams.setUdfChar1(row.getString(7));
            System.out.println("UDF Char1 is : " + ruleParams.getUdfChar1());
            ruleParams.setUdfChar2(row.getString(8));
            ruleParams.setUdfChar3(row.getString(9));
            ruleParams.setAccountId(row.getLong(10));
            kSession.insert(ruleParams);
            int output = kSession.fireAllRules();

            System.out.println("FireAllRules output: " + output);
            System.out.println("After firing  rules");
            System.out.println(ruleParams.getPriceItemParam1());
            System.out.println(ruleParams.getCisDivision());
            // generating output objects depending on the size of priceitems
            // derived.
            System.out.println("No. of priceitems derived : " + ruleParams.getPriceItemCd().size());
            for (int index = 0; index < ruleParams.getPriceItemCd().size(); index++) {

                System.out.println("Inside a for loop");

                RuleOutputParams ruleOutputParams = new RuleOutputParams();

                ruleOutputParams.setTxnDttm(ruleParams.getTxnDttm());
                ruleOutputParams.setCisDivision(ruleParams.getCisDivision());
                ruleOutputParams.setTxnVol(ruleParams.getTxnVol());
                ruleOutputParams.setTxnAmt(ruleParams.getTxnAmt());
                ruleOutputParams.setCurrencyCode(ruleParams.getCurrencyCode());
                ruleOutputParams.setAcctNumberTypeCode(ruleParams.getAcctNumberTypeCode());
                ruleOutputParams.setAccountNumber(ruleParams.getAccountNumber());
                ruleOutputParams.setAccountId(ruleParams.getAccountId());
                ruleOutputParams.setPriceItemCd(ruleParams.getPriceItemCd().get(index));
                System.out.println(ruleOutputParams.getPriceItemCd());
                ruleOutputParams.setPriceItemParam(ruleParams.getPriceItemParams().get(index));
                System.out.println(ruleOutputParams.getPriceItemParam());
                ruleOutputParams.setPriceItemParamCode(ruleParams.getPriceItemParamCodes().get(index));
                ruleOutputParams.setProcessingDate(new SimpleDateFormat("yyyy-MM-dd").format(new Date()));
                ruleOutputParams.setUdfChar1(ruleParams.getUdfChar1());
                ruleOutputParams.setUdfChar2(ruleParams.getUdfChar2());
                ruleOutputParams.setUdfChar3(ruleParams.getUdfChar3());

                ruleOutputParamsList.add(ruleOutputParams);
            }
        }
        System.out.println("Size of ruleOutputParamsList is : " + ruleOutputParamsList.size());
        Encoder<RuleOutputParams> rulesOutputParamEncoder = Encoders.bean(RuleOutputParams.class);
        rulesParamDS = sparkSession.createDataset(Collections.unmodifiableList(ruleOutputParamsList),
                rulesOutputParamEncoder);
        rulesParamDS.show();

I am using while and for loops in this code.

Can this code be rewritten using Spark's map, flatMap, or foreach functions? How?

The problem here is that the Drools rule engine is invoked sequentially. I want it to run in parallel.

Edit - As shown in the code above, I first convert the DataFrame to a List and then iterate over it with an iterator. Can I work on the DataFrame or RDD directly for my purpose?

2 answers:

Answer 0 (score: 1)

A very simple demo from my tests showing parallelStream and CompletableFuture:

parallelStream

int parallelGet() {
    return IntStream.rangeClosed(0, TOP).parallel().map(i -> getIoBoundNumber(i)).sum();
}
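
Here, TOP and getIoBoundNumber are the answerer's own test fixtures and are not shown; a minimal stand-in, in which the sleep simulates an I/O-bound call such as a rule-engine invocation, might look like this:

static final int TOP = 100; // assumed upper bound for the demo

// hypothetical stand-in for an I/O-bound call
static int getIoBoundNumber(int i) {
    try {
        Thread.sleep(100); // simulate I/O latency
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    return i;
}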

CompletableFuture

int concurrencyGetBasic() {
    List<CompletableFuture<Integer>> futureList = IntStream.rangeClosed(0, TOP).boxed()
            .map(i -> CompletableFuture.supplyAsync(() -> getIoBoundNumber(i)))
            .collect(Collectors.toList());
    return futureList.stream().map(CompletableFuture::join).reduce(0, Integer::sum);
}
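
supplyAsync without an executor runs on the common ForkJoinPool, which is sized for CPU-bound work; for I/O-bound calls you may prefer to pass your own executor. A sketch under that assumption (it needs java.util.concurrent.ExecutorService and Executors):

int concurrencyGetWithExecutor() {
    ExecutorService pool = Executors.newFixedThreadPool(20); // sized for I/O-bound tasks
    try {
        List<CompletableFuture<Integer>> futureList = IntStream.rangeClosed(0, TOP).boxed()
                .map(i -> CompletableFuture.supplyAsync(() -> getIoBoundNumber(i), pool))
                .collect(Collectors.toList());
        return futureList.stream().map(CompletableFuture::join).reduce(0, Integer::sum);
    } finally {
        pool.shutdown();
    }
}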

For more tutorials, you can look at the Java 8 Tutorial and Java 8 in Action.

Answer 1 (score: 0)

As mentioned above, finalJoined is already a Dataset, so there is no need to collect it to the driver. You can write something like the code below.

This is the underlying dataset holding the data:

Dataset<Row> finalJoined;

Create a new function and pass each row to it. Since your data already sits in a partitioned Dataset, this will run in parallel on the workers. If it is not partitioned, repartition it first (for example, finalJoined.repartition(numPartitions)).

finalJoined.foreach((ForeachFunction<Row>) row -> droolprocess(row));

public void droolprocess(Row row) {

    // Put the body of the while loop here (everything except the iterator itself).
    // It will execute in parallel on the workers.

    // Pass connection parameters in as needed, or obtain a new connection/session here.

}
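
For illustration, a minimal sketch of what that body could look like, assuming a KieSession can be created on the worker; createKieSession() is a hypothetical helper standing in for however you build one (for example from a KieContainer):

public void droolprocess(Row row) {
    KieSession kSession = createKieSession(); // hypothetical helper, built on the worker
    try {
        RuleParams ruleParams = new RuleParams();
        ruleParams.setTxnDttm(row.getString(0));
        ruleParams.setCisDivision(row.getString(1));
        // ... set the remaining fields exactly as in the original while loop ...
        kSession.insert(ruleParams);
        kSession.fireAllRules();
    } finally {
        kSession.dispose(); // release the session created for this row
    }
}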

If you want a return value back from processing each row:

  • Use map (or flatMap) instead of foreach and create another Dataset from it for further use; this maps the Dataset into another Dataset (see the sketch after this list).
  • Change the return type of the droolprocess function accordingly.
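
A sketch of that map/flatMap variant, assuming RuleOutputParams works with Encoders.bean and reusing the hypothetical createKieSession() helper from above; flatMap fits better than map here because one input row can produce several RuleOutputParams:

Encoder<RuleOutputParams> encoder = Encoders.bean(RuleOutputParams.class);

Dataset<RuleOutputParams> rulesParamDS = finalJoined.flatMap(
        (FlatMapFunction<Row, RuleOutputParams>) row -> {
            KieSession kSession = createKieSession(); // hypothetical helper
            try {
                RuleParams ruleParams = new RuleParams();
                ruleParams.setTxnDttm(row.getString(0));
                ruleParams.setCisDivision(row.getString(1));
                // ... populate the remaining fields as in the original while loop ...
                kSession.insert(ruleParams);
                kSession.fireAllRules();

                List<RuleOutputParams> outputs = new ArrayList<>();
                for (int index = 0; index < ruleParams.getPriceItemCd().size(); index++) {
                    RuleOutputParams out = new RuleOutputParams();
                    out.setCisDivision(ruleParams.getCisDivision());
                    out.setPriceItemCd(ruleParams.getPriceItemCd().get(index));
                    // ... copy the remaining fields as in the original for loop ...
                    outputs.add(out);
                }
                return outputs.iterator();
            } finally {
                kSession.dispose();
            }
        }, encoder);

rulesParamDS.show();

Building a session per row is expensive; mapPartitions (with a MapPartitionsFunction) lets you create one session per partition instead, which is usually the better trade-off.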