Multinomial logistic regression in spark ml vs mllib

Date: 2016-05-28 16:11:02

Tags: apache-spark machine-learning

Spark version 2.0.0 aims for feature parity between the ml package and the now-deprecated mllib package.

Currently, the ml package offers ElasticNet support, but only for binomial (binary) logistic regression. To get multinomial regression, apparently we have to settle for using the deprecated mllib?

Disadvantages of using mllib:

  • It is deprecated, so we would have to field the "why are you using the old stuff" question.
  • It does not use the ml pipeline workflow, so it cannot be fully integrated.
  • For the reasons above, we would eventually have to rewrite anyway.

Is there an available way to achieve one-vs-all multinomial classification with the ml package?

1 answer:

Answer 0 (score: 6):

This is a work-in-progress answer. There is a OneVsRest classifier in spark.ml. Apparently the approach is to feed it a LogisticRegression as the binary classifier: it will run the binary version against every class and return the class with the highest score.

Update in response to @zero323. Here is the information from Xiangrui Meng on deprecating mllib:


Switch RDD-based MLlib APIs to maintenance mode in Spark 2.0

Hi all,

More than a year ago, in Spark 1.2 we introduced the ML pipeline API built on top of Spark SQL’s DataFrames. Since then the new DataFrame-based API has been developed under the spark.ml package, while the old RDD-based API has been developed in parallel under the spark.mllib package. While it was easier to implement and experiment with new APIs under a new package, it became harder and harder to maintain as both packages grew bigger and bigger. And new users are often confused by having two sets of APIs with overlapped functions.

We started to recommend the DataFrame-based API over the RDD-based API in Spark 1.5 for its versatility and flexibility, and we saw the development and the usage gradually shifting to the DataFrame-based API. Just counting the lines of Scala code, from 1.5 to the current master we added ~10000 lines to the DataFrame-based API while ~700 to the RDD-based API. So, to gather more resources on the development of the DataFrame-based API and to help users migrate over sooner, I want to propose switching RDD-based MLlib APIs to maintenance mode in Spark 2.0. What does it mean exactly?

* We do not accept new features in the RDD-based spark.mllib package, unless they block implementing new features in the DataFrame-based spark.ml package.
* We still accept bug fixes in the RDD-based API.
* We will add more features to the DataFrame-based API in the 2.x series to reach feature parity with the RDD-based API.
* Once we reach feature parity (possibly in Spark 2.2), we will deprecate the RDD-based API.
* We will remove the RDD-based API from the main Spark repo in Spark 3.0.

Though the RDD-based API is already in de facto maintenance mode, this announcement will make it clear and hence important to both MLlib developers and users. So we’d greatly appreciate your feedback!

(As a side note, people sometimes use “Spark ML” to refer to the DataFrame-based API or even the entire MLlib component. This also causes confusion. To be clear, “Spark ML” is not an official name and there are no plans to rename MLlib to “Spark ML” at this time.)

Best,
Xiangrui

Another update: there is a JIRA, and as of May 2016 the work is close to complete: Support multiclass logistic regression in spark.ml