Spark executes each action twice

Date: 2015-11-17 15:24:45

Tags: java apache-spark spark-cassandra-connector

I created a simple Java application that uses Apache Spark to retrieve data from Cassandra, apply some transformations to it, and save it in another Cassandra table.

I am using Apache Spark 1.4.1 configured in standalone cluster mode, with a single master and a single worker located on my machine.

DataFrame customers = sqlContext.cassandraSql("SELECT email, first_name, last_name FROM customer " +
    "WHERE CAST(store_id as string) = '" + storeId + "'");

DataFrame customersWhoOrderedTheProduct = sqlContext.cassandraSql("SELECT email FROM customer_bought_product " +
    "WHERE CAST(store_id as string) = '" + storeId + "' AND product_id = " + productId + "");

// We need only the customers who did not order the product
// We cache the DataFrame because we use it twice.
DataFrame customersWhoHaventOrderedTheProduct = customers
    .join(customersWhoOrderedTheProduct
    .select(customersWhoOrderedTheProduct.col("email")), customers.col("email").equalTo(customersWhoOrderedTheProduct.col("email")), "leftouter")
    .where(customersWhoOrderedTheProduct.col("email").isNull())
    .drop(customersWhoOrderedTheProduct.col("email"))
    .cache();

int numberOfCustomers = (int) customersWhoHaventOrderedTheProduct.count();

Date reportTime = new Date();

// Prepare the Broadcast values. They are used in the map below.
Broadcast<String> bStoreId = sparkContext.broadcast(storeId, classTag(String.class));
Broadcast<String> bReportName = sparkContext.broadcast(MessageBrokerQueue.report_did_not_buy_product.toString(), classTag(String.class));
Broadcast<java.sql.Timestamp> bReportTime = sparkContext.broadcast(new java.sql.Timestamp(reportTime.getTime()), classTag(java.sql.Timestamp.class));
Broadcast<Integer> bNumberOfCustomers = sparkContext.broadcast(numberOfCustomers, classTag(Integer.class));

// Map the customers to a custom class, thus adding new properties.
DataFrame storeCustomerReport = sqlContext.createDataFrame(customersWhoHaventOrderedTheProduct.toJavaRDD()
    .map(row -> new StoreCustomerReport(bStoreId.value(), bReportName.getValue(), bReportTime.getValue(), bNumberOfCustomers.getValue(), row.getString(0), row.getString(1), row.getString(2))), StoreCustomerReport.class);


// Save the DataFrame to cassandra
storeCustomerReport.write().mode(SaveMode.Append)
    .option("keyspace", "my_keyspace")
    .option("table", "my_report")
    .format("org.apache.spark.sql.cassandra")
    .save();
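For reference, the left-outer-join-plus-isNull pattern above is just an anti-join: keep the customers whose email does not appear among the buyers. A minimal plain-Java sketch of that semantics (illustrative only, not Spark API; the sample emails are made up):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class AntiJoinSketch {
    public static void main(String[] args) {
        // All customers of the store (email only, for brevity).
        List<String> customers = Arrays.asList("a@x.com", "b@x.com", "c@x.com");
        // Customers who already bought the product.
        Set<String> buyers = new HashSet<>(Arrays.asList("b@x.com"));

        // Equivalent of: customers LEFT OUTER JOIN buyers ON email
        //                WHERE buyers.email IS NULL
        List<String> didNotBuy = customers.stream()
                .filter(email -> !buyers.contains(email))
                .collect(Collectors.toList());

        System.out.println(didNotBuy); // [a@x.com, c@x.com]
    }
}
```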

As you can see, I cache the customersWhoHaventOrderedTheProduct DataFrame, then execute count and call toJavaRDD.

By my calculations, these actions should each be executed only once. But when I open the Spark UI for the current job, I see the following stages: [screenshot: Spark UI stage list]

As you can see, each action is executed twice.

Am I doing something wrong? Is there a setting I have missed?

Any ideas are greatly appreciated.

EDIT:

I called System.out.println(storeCustomerReport.toJavaRDD().toDebugString());

Here is the debug string:

(200) MapPartitionsRDD[43] at toJavaRDD at DidNotBuyProductReport.java:93 []
  |   MapPartitionsRDD[42] at createDataFrame at DidNotBuyProductReport.java:89 []
  |   MapPartitionsRDD[41] at map at DidNotBuyProductReport.java:90 []
  |   MapPartitionsRDD[40] at toJavaRDD at DidNotBuyProductReport.java:89 []
  |   MapPartitionsRDD[39] at toJavaRDD at DidNotBuyProductReport.java:89 []
  |   MapPartitionsRDD[38] at toJavaRDD at DidNotBuyProductReport.java:89 []
  |   ZippedPartitionsRDD2[37] at toJavaRDD at DidNotBuyProductReport.java:89 []
  |   MapPartitionsRDD[31] at toJavaRDD at DidNotBuyProductReport.java:89 []
  |   ShuffledRDD[30] at toJavaRDD at DidNotBuyProductReport.java:89 []
  +-(2) MapPartitionsRDD[29] at toJavaRDD at DidNotBuyProductReport.java:89 []
     |  MapPartitionsRDD[28] at toJavaRDD at DidNotBuyProductReport.java:89 []
     |  MapPartitionsRDD[27] at toJavaRDD at DidNotBuyProductReport.java:89 []
     |  MapPartitionsRDD[3] at cache at DidNotBuyProductReport.java:76 []
     |  CassandraTableScanRDD[2] at RDD at CassandraRDD.scala:15 []
  |   MapPartitionsRDD[36] at toJavaRDD at DidNotBuyProductReport.java:89 []
  |   ShuffledRDD[35] at toJavaRDD at DidNotBuyProductReport.java:89 []
  +-(2) MapPartitionsRDD[34] at toJavaRDD at DidNotBuyProductReport.java:89 []
     |  MapPartitionsRDD[33] at toJavaRDD at DidNotBuyProductReport.java:89 []
     |  MapPartitionsRDD[32] at toJavaRDD at DidNotBuyProductReport.java:89 []
     |  MapPartitionsRDD[5] at cache at DidNotBuyProductReport.java:76 []
     |  CassandraTableScanRDD[4] at RDD at CassandraRDD.scala:15 []

编辑2:

So, after some research combined with trial and error, I managed to optimize the job.

I created an RDD from customersWhoHaventOrderedTheProduct and cached it before calling the count() action. (I moved the cache from the DataFrame to the RDD.)

After that, I used this RDD to create the storeCustomerReport DataFrame:

JavaRDD<Row> customersWhoHaventOrderedTheProductRdd = customersWhoHaventOrderedTheProduct.javaRDD().cache();

Now the stages look like this:

[screenshot: Spark UI stage list]

As you can see, the duplicated count and cache stages are gone, but there are still two "javaRDD" actions. I have no idea where they come from, as I call toJavaRDD only once in my code.

1 answer:

Answer 0 (score: 3)

It looks like you are applying two actions in the snippet below:

// Map the customers to a custom class, thus adding new properties.
DataFrame storeCustomerReport = sqlContext.createDataFrame(customersWhoHaventOrderedTheProduct.toJavaRDD()
    .map(row -> new StoreCustomerReport(bStoreId.value(), bReportName.getValue(), bReportTime.getValue(), bNumberOfCustomers.getValue(), row.getString(0), row.getString(1), row.getString(2))), StoreCustomerReport.class);


// Save the DataFrame to cassandra
storeCustomerReport.write().mode(SaveMode.Append)
    .option("keyspace", "my_keyspace")

one at sqlContext.createDataFrame() and the other at storeCustomerReport.write(), and both of them require customersWhoHaventOrderedTheProduct.toJavaRDD().

Persisting the resulting RDD should solve the issue:

JavaRDD<Row> cachedRdd = customersWhoHaventOrderedTheProduct.toJavaRDD()
    .persist(StorageLevel.MEMORY_AND_DISK()); // or any other storage level
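The underlying behaviour: an RDD lineage is lazy, so every action replays the lineage from the source unless an intermediate result is persisted. A rough plain-Java analogy using a memoizing supplier (illustrative only, not Spark API):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class PersistSketch {
    public static void main(String[] args) {
        AtomicInteger scans = new AtomicInteger();
        // The "lineage": recomputed from the source on every evaluation.
        Supplier<String> lineage = () -> {
            scans.incrementAndGet();
            return "rows";
        };

        // Two "actions" without persist: the source is scanned twice.
        lineage.get();
        lineage.get();
        System.out.println(scans.get()); // 2

        // "persist": compute once, then reuse the materialized result.
        String materialized = lineage.get(); // third and final scan
        Supplier<String> persisted = () -> materialized;
        persisted.get();
        persisted.get();
        System.out.println(scans.get()); // still 3
    }
}
```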