I am running a logistic regression with multiple hyperparameters (a parameter grid) in PySpark and it works fine, but when I run the same code with the same configuration in Scala, I get the error below while fitting the model.
Please suggest what might be going wrong; I am putting all the details here. I have been stuck on this since last month.
19/12/06 23:59:35 ERROR cluster.YarnScheduler: Lost executor 6 on bvpr-bdaws09.vq.internal.vodafone.com: Container killed by YARN for exceeding memory limits. 1.5 GB of 1.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
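For context, the overhead that the message refers to can also be raised at submit time. This is only a minimal sketch of such a command (the class name and jar are placeholders; the values simply mirror the config in the code below), not my exact submit command:

spark-submit \
  --master yarn \
  --driver-memory 3g \
  --executor-memory 20g \
  --executor-cores 3 \
  --conf spark.dynamicAllocation.maxExecutors=5 \
  --conf spark.yarn.executor.memoryOverhead=2g \
  --class MyApp my_app.jar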
If I fit the model directly, without the parameter grid, it works fine in Scala as well. Here is the Scala code:
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}

val spark = (SparkSession.builder
  .appName("MLib_scala")
  .master("local")
  .config("spark.driver.memory", "3g")
  .config("spark.executor.memory", "20g")
  .config("spark.dynamicAllocation.maxExecutors", "5")
  .config("spark.executor.cores", "3")
  .config("spark.yarn.executor.memoryOverhead", "2g")
  .enableHiveSupport()
  .getOrCreate())
val df1 = spark.sql("select * from Table_4kCols_10k_rows") // some 3 GB Hive table (~4k columns, ~10k rows)
val list_cols = df1.columns
val featureCols = list_cols.filter(_ != "dormant_flag") // every column except the label
val df2 = df1.withColumnRenamed("dormant_flag", "label")
val assembler = new VectorAssembler().setInputCols(featureCols).setOutputCol("features")
val df = assembler.transform(df2)
val seed = 5043
val Array(train, test) = df.randomSplit(Array(0.7, 0.3))
val lr = new LogisticRegression().setLabelCol("label").setFeaturesCol("features")
val fittedLR = lr.fit(train) // This works fine
val stages = Array(lr)
val pipeline = new Pipeline().setStages(stages)
val params = (new ParamGridBuilder()
  .addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0))
  .addGrid(lr.regParam, Array(0.1, 2.0))
  .build())
val evaluator = (new BinaryClassificationEvaluator()
  .setMetricName("areaUnderROC")
  .setRawPredictionCol("prediction")
  .setLabelCol("label"))
val tvs = (new TrainValidationSplit()
  .setTrainRatio(0.75)
  .setEstimatorParamMaps(params)
  .setEstimator(pipeline)
  .setEvaluator(evaluator))
val tvsFitted = tvs.fit(train) // this is the step that fails with the YARN error above; the same step works fine in PySpark
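For comparison, the PySpark run that completes without this error follows the same steps. This is only a minimal sketch assumed to mirror the Scala code above (table, column, and parameter values are taken from it), not my exact script:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

spark = (SparkSession.builder
         .appName("MLib_pyspark")
         .enableHiveSupport()
         .getOrCreate())

# Same Hive table and label column as in the Scala code
df1 = spark.sql("select * from Table_4kCols_10k_rows")
feature_cols = [c for c in df1.columns if c != "dormant_flag"]
df2 = df1.withColumnRenamed("dormant_flag", "label")

assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
df = assembler.transform(df2)
train, test = df.randomSplit([0.7, 0.3])

lr = LogisticRegression(labelCol="label", featuresCol="features")
pipeline = Pipeline(stages=[lr])

# Same parameter grid, evaluator, and train/validation split as above
params = (ParamGridBuilder()
          .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])
          .addGrid(lr.regParam, [0.1, 2.0])
          .build())
evaluator = BinaryClassificationEvaluator(metricName="areaUnderROC",
                                          rawPredictionCol="prediction",
                                          labelCol="label")
tvs = TrainValidationSplit(trainRatio=0.75,
                           estimatorParamMaps=params,
                           estimator=pipeline,
                           evaluator=evaluator)

tvs_fitted = tvs.fit(train)  # completes without the YARN error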