Spark randomSplit: inconsistent results on every run

Date: 2018-06-22 00:14:20

Tags: apache-spark

I am trying to split a dataset into training and non-training sets using:

inDataSet.randomSplit(weights.toArray, 0)

For each run I get different results. Is this expected? If so, how can I get the same percentage of rows every time?

For example: the random split weights for the training offer are ArrayBuffer(0.3, 0.7). I have 72 rows in total, so with a weight of 0.3 I expect roughly 21 rows (0.3 × 72 ≈ 21.6). Instead I sometimes get 23, 29, 19, or even 4. Please advise.

Note: the total weight I pass is 1.0 (0.3 + 0.7), so no normalization applies.

Edit: the other question is useful, but that is within a single execution. I run the test N times and get a different result set each time.
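
A minimal sketch of what I am doing (the 72-row dataset below is a stand-in for my real input; names like `inDataSet` and `weights` match the snippet above):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("split-repro").master("local[*]").getOrCreate()
    import spark.implicits._

    // Stand-in for the real 72-row input
    val inDataSet = (1 to 72).toDF("id")
    val weights = Seq(0.3, 0.7)

    // Seed 0 is fixed, but the per-split row counts can still vary between
    // runs if the input's partitioning or row order differs.
    val Array(training, nonTraining) = inDataSet.randomSplit(weights.toArray, 0)
    println(s"training = ${training.count()}, non-training = ${nonTraining.count()}")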

2 answers:

Answer 0 (score: 0):

I came up with one possible implementation (similar to the link in the second comment):

    import org.apache.spark.sql.{Dataset, Row}
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, rand, rank}
    import scala.collection.mutable

    def doTrainingOffer(inDataSet: Dataset[Row],
                        fieldName: String,
                        training_offer_list: List[(Long, Long, Int, String, String)]):
        (Dataset[Row], Option[Dataset[Row]]) = {
      println("Doing Training Offer!")

      // row_id is assigned by ranking on fieldName, which is deterministic,
      // so the slices below contain an exact, reproducible number of rows.
      // Note: rank() gives tied rows the same row_id; use row_number() if
      // fieldName is not unique.
      val randomDs = inDataSet
        .withColumn("row_id", rank().over(Window.partitionBy().orderBy(fieldName)))
        .orderBy(rand())

      randomDs.cache()
      val count = randomDs.count()
      println(s"The total no of rows for this use case is: ${count}")

      val trainedDatasets = new mutable.ArrayBuffer[Dataset[Row]]()
      var startPos = 0L
      var endPos = 0L
      for (trainingOffer <- training_offer_list) {
        // trainingOffer._3 holds the percentage of rows for this offer
        val noOfRows = scala.math.round(count * trainingOffer._3 / 100.0)
        endPos += noOfRows
        println(s"for training offer id: ${trainingOffer._1} and percent of ${trainingOffer._3}, the start and end are ${startPos}, ${endPos}")
        trainedDatasets += addTrainingData(randomDs.where(col("row_id") > startPos && col("row_id") <= endPos), trainingOffer)
        startPos = endPos
      }

      val combinedDs = trainedDatasets.reduce(_ union _)
      // (left over for other offers, trained offers)
      (randomDs.join(combinedDs, Seq(fieldName), "left_anti"), Option(combinedDs))
    }
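
A hypothetical invocation (the offer tuples, the key column name `customer_id`, and the values are made up for illustration; `addTrainingData` is the author's own helper and must be in scope):

    // Hypothetical offer tuples: (offerId, campaignId, percent, name, description)
    val offers = List(
      (1L, 100L, 30, "offer-a", "gets 30 percent of the rows"),
      (2L, 100L, 70, "offer-b", "gets 70 percent of the rows")
    )

    val (leftOver, trainedOpt) = doTrainingOffer(inDataSet, "customer_id", offers)
    trainedOpt.foreach(ds => println(s"trained rows: ${ds.count()}"))
    println(s"left over rows: ${leftOver.count()}")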

And another possible implementation:

    // Same idea, but each offer's share is carved off the remainder with
    // limit() instead of slicing by row_id.
    val randomDs = inDataSet.orderBy(rand())
    randomDs.cache()
    val count = randomDs.count()
    println(s"The total no of rows for this use case is: ${count}")
    val trainedDatasets = new mutable.ArrayBuffer[Dataset[Row]]()

    for (trainingOffer <- training_offer_list) {
      // Use floating-point division so the percentage is not truncated,
      // and toInt because limit() expects an Int.
      val noOfRows = scala.math.round(count * trainingOffer._3 / 100.0).toInt
      if (trainedDatasets.nonEmpty) {
        // Exclude rows already assigned to earlier offers.
        val combinedDs = trainedDatasets.reduce(_ union _)
        val remainderDs = randomDs.join(combinedDs, Seq(fieldName), "left_anti")
        trainedDatasets += addTrainingData(remainderDs.limit(noOfRows), trainingOffer)
      }
      else {
        trainedDatasets += addTrainingData(randomDs.limit(noOfRows), trainingOffer)
      }
    }

    val combinedDs = trainedDatasets.reduce(_ union _)
    // (left over for other offers, trained offers)
    (randomDs.join(combinedDs, Seq(fieldName), "left_anti"), Option(combinedDs))
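
Note that both variants rely on fieldName being a unique key: the left_anti joins remove every row whose key already appears in a selected subset, so duplicate keys would silently drop extra rows from the remainder. The randomDs.cache() calls also matter, since limit() over an orderBy(rand()) result could otherwise pick a different set of rows each time the plan is re-evaluated.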

Answer 1 (score: 0):

You can get consistent results by passing a fixed seed, e.g. seed = 1234. Calling dataframe.cache() also helps, because it keeps the input from being recomputed (and thus re-randomized) between actions.
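
A minimal sketch of that suggestion, reusing inDataSet and the 0.3/0.7 weights from the question (the seed value 1234 is just an example):

    // Cache first so the input is not recomputed, and therefore not
    // reordered, between the split and the counts that follow.
    inDataSet.cache()

    // With the same input data and partitioning, a fixed seed makes the
    // per-row random draw, and hence the split, reproducible.
    val Array(training, nonTraining) = inDataSet.randomSplit(Array(0.3, 0.7), seed = 1234)
    println(s"training = ${training.count()}, non-training = ${nonTraining.count()}")

Even with a fixed seed, randomSplit samples each row independently, so the split sizes are only approximately proportional to the weights (roughly 21 of 72 for 0.3, not exactly); the seed makes the sizes identical across runs, it does not make them exact.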