Why do my Spark tasks appear to run sequentially?

Time: 2019-08-05 15:09:00

Tags: apache-spark apache-spark-standalone

Here is what I am doing, step by step:

  • Load a file split into 32 partitions
  • Perform some operations (map, .., create a Dataset from the RDD, run SQL on the Dataset)
  • Save the result as a Parquet file (a condensed sketch of these steps follows right after this list)
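
Condensed, the three steps look roughly like this (a sketch only: the paths and the one-column schema are placeholders, and spContext / sparkSession are the objects created in the configuration shown further down; the real code is at the end of the post):

    // Sketch of the pipeline only -- placeholder paths and schema, real code below
    JavaRDD<String> data = spContext.textFile("in/input.csv", 32);          // 1. load into 32 partitions
    JavaRDD<Row> rows = data.map(line -> RowFactory.create(line.trim()));   // 2. map + other operations
    StructType schema = new StructType(new StructField[] {
            DataTypes.createStructField("raw", DataTypes.StringType, true) });
    Dataset<Row> ds = sparkSession.createDataFrame(rows, schema);           //    RDD -> Dataset (then SQL on it)
    ds.write().mode(SaveMode.Overwrite).parquet("out/result.parquet");      // 3. save as Parquet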

My problem is that when I check the Spark UI I see 32 tasks for my job, 8 of them in the RUNNING state (since my machine has 8 vCPUs), but only one of them actually seems to make progress while the others appear to be waiting, as if each task had to wait for the result of the previous one before doing any work, and so on.

In the Spark UI it looks like this:

[screenshot: tasks timeline]

[screenshot: execution time at the beginning]

[screenshot: execution time at the end]

So you can see there is a big difference between the beginning and the end: the execution time of the second task is equal to (t1 execution + t2 execution).

PS: I am running Spark in standalone mode, and this is the configuration I use for the Spark context and the Spark session:

SparkConf conf = new SparkConf()
                .setAppName(appName)
                .set("spark.sql.shuffle.partitions", "8")
                .set("spark.default.parallelism", "24")
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
                .set("spark.kryoserializer.buffer.max", "256m")
                .setMaster(sparkMaster);

        System.setProperty("hadoop.home.dir", "c:\\winutils\\");    

        //Create Spark Context from configuration
        spContext = new JavaSparkContext(conf);

        // Parquet
        spContext.hadoopConfiguration().set("parquet.enable.dictionary", "false");
        spContext.hadoopConfiguration().set("parquet.enable.summary-metadata", "false");

        sparkSession = SparkSession
                  .builder()
                  .appName(appName)
                  .master(sparkMaster)
                  .config("spark.sql.warehouse.dir", tempDir)
                  .getOrCreate();
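
Not part of my original code, just a small check one could add to confirm the parallelism and partition count Spark actually reports (the input path here is a placeholder):

    // Sanity check only (not part of the real job)
    System.out.println("defaultParallelism = " + spContext.defaultParallelism());
    JavaRDD<String> probe = spContext.textFile("in/input.csv", 32);  // placeholder path
    System.out.println("input partitions   = " + probe.getNumPartitions());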

DAG:

[screenshot: DAG execution plan]

Here is my code:

public ErrorAccumulationResult controlStructure(Structure struct, String MappingTablesDirectory, String mode, Long packageId) throws IOException, AnalysisException {

    System.out.println("==============================| START CONTROL PROCESS |==============================");

   //some initialization...

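    // load the input into 32 partitions and keep only the lines whose 1-based line number is in `indexes`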
    JavaRDD<String> data = sc.textFile(sessionMetaData.getInputPath(), 32);
    JavaPairRDD<String, Long> filtredsDataKV = data.zipWithIndex().filter(t -> indexes.contains(t._2 + 1));

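    // take(1000) pulls a sample of the filtered rows back to the driver and re-parallelizes it;
    // the map/reduce below counts the candidate date patterns per Date column in that sample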
    JavaRDD<Tuple2<String, Long>> partOfDataRdd = sc.parallelize(filtredsDataKV.take(1000));

    List<ColumnDateFormat> patternList = partOfDataRdd.map(iterator -> {
        List<ColumnDateFormat> columnDateFormats = new ArrayList<>();
        String row = iterator._1;
        if (!row.equals("")) {
            String[] arr = row.trim().split(Pattern.quote(sep), -1);
            for (int i = 0; i < cols.size(); i++) {
                String type = cols.get(i).getType();
                String name = cols.get(i).getName();
                int index = cols.get(i).getIndex();

                switch (type) {
                case "Date":
                    Map<String, Long> count = Common.countPossibleDatePatterns(arr[i]);
                    ColumnDateFormat columnDateFormat = new ColumnDateFormat();
                    columnDateFormat.setName(name);
                    columnDateFormat.setIndex(index);
                    columnDateFormat.setPatternsCount(count);
                    columnDateFormats.add(columnDateFormat);
                    break;
                }
            }
        }
        return columnDateFormats;
    }).reduce((colDate1, colDate2) -> {
        for (int i = 0; i < colDate1.size(); i++) {
            Map<String, Long> count1 = colDate1.get(i).getPatternsCount();
            Map<String, Long> count2 = colDate2.get(i).getPatternsCount();
            Map<String, Long> count = new HashMap<>();
            count1.forEach((pattern, value) -> {
                count.put(pattern, value + count2.get(pattern));
            });
            colDate1.get(i).setPatternsCount(count);
        }
        return colDate1;
    });

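    // for every Date column, keep the pattern that occurred most often in the sample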
    patternList.forEach(p -> {
        Map<String, Long> count = p.getPatternsCount();
        String pattern = count.entrySet().stream().sorted(Collections.reverseOrder(Map.Entry.comparingByValue()))
                .findFirst().get().getKey();
        p.setPattern(pattern);
    });


    ErrorsAccumulator acc = new ErrorsAccumulator();
    ErrorReportingAccumulator accReporting = new ErrorReportingAccumulator();

    sc.sc().register(acc);
    sc.sc().register(accReporting);

    // Paths initialization....


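    // second pass over all filtered rows: cast/validate each column per partition
    // and record any problems in the two accumulators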
    JavaRDD<org.apache.spark.sql.Row> castColumnsToType = filtredsDataKV.mapPartitions(tuple2Iterator -> {

        List<org.apache.spark.sql.Row> results = new ArrayList<>();

        while (tuple2Iterator.hasNext()) {

            String[] rowAfterConverting = new String[cols.size()];

            Tuple2<String, Long> iterator = tuple2Iterator.next();

            String row = iterator._1;
            Long index = iterator._2 + 1;

            if (!row.equals("")) {

                String[] arr = row.trim().split(Pattern.quote(sep), -1);

                try {
                    for (int i = 0; i < cols.size(); i++) {

                        boolean mandatory = cols.get(i).isMandatory();
                        String type = cols.get(i).getType();
                        String mapTabName = cols.get(i).getMapTabName();
                        String mapColName = cols.get(i).getMapColName();
                        String coe = cols.get(i).getCoe();
                        String decision = cols.get(i).getDecision();
                        String name = cols.get(i).getName();


                        switch (type) {
                            case "Date":
                                if (arr[i].equals("") && mandatory) {
                                    acc.add(new Error("Missing value", index, name, arr[i], decision));
                                    accReporting.add(struct.getName() + ";" + name + ";" + "Missing value");
                                    rowAfterConverting[i] = "";
                                } else {
                                    String pattern = "dd/MM/yyyy";
                                    for (ColumnDateFormat columnDateFormat : patternList) {
                                        if (columnDateFormat.getIndex() != i) {
                                            continue;
                                        }
                                        pattern = columnDateFormat.getPattern();
                                    }
                                    String colConvertedToDate = Common.convertDate(arr[i], pattern);
                                    if (colConvertedToDate.equals("null")) {
                                        acc.add(new Error("Data type mismatch", index, name, arr[i], decision));
                                        accReporting.add(struct.getName() + ";" + name + ";" + "Data type mismatch");
                                    }
                                    rowAfterConverting[i] = colConvertedToDate;
                                }
                                break;
                            case "Decimal":
                                //same for decimal 
                                break;
                            case "Integer":
                                 //same for integer 
                                break;
                            default:

                                break;
                        }

                        if (mapTabName != null && mapColName != null && coe != null) {

                           //Other control for user types 
                        }
                    }

                } catch (ArrayIndexOutOfBoundsException e) {
                    System.out.println("Element don't exist ! error " + e.getMessage());
                }
            } else {
                logger.warn("Blank row");
            }

            results.add(RowFactory.create(rowAfterConverting));
        }
        return results.iterator();
    });


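    // build a typed Dataset from the converted rows and write it as Parquet (csvToParquet is shown below)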
    SparkUtils.csvToParquet(outputFilePathPARQUET, struct, castColumnsToType);

    // Accumulation  manipulation 

    System.out.println("==============================| DETECTION PROCESS TAKE " + (duration/1000000000) + " |==============================");
    return new ErrorAccumulationResult(acc.value(), errorReporting);
}

The function that saves the result as a Parquet file:

    public static void csvToParquet(String output, Structure structure, JavaRDD<Row> castColumnsToType)
        throws AnalysisException {

    List<Column> columns = structure.getColumns();

    StructType structType = buildStructureField(columns);

    Dataset<Row> result = ss.createDataFrame(castColumnsToType, structType);

    Dataset<Row> outputDS = result.selectExpr(buildCastQuery(structure.getColumns()));

    outputDS.write().mode(SaveMode.Overwrite).parquet(output);
}

0 Answers:

There are no answers yet.