PySpark linear regression model gives error: column label must be of numeric type but was actually string type

Asked: 2018-03-08 09:36:17

Tags: python apache-spark pyspark

I built a multinomial regression model in pyspark. After running my linear regression model, it gave me this error: `IllegalArgumentException: u'requirement failed: Column label must be of type NumericType but was actually of type StringType.'`

Please help me; I have spent a lot of time on this problem but have not been able to solve it.

    lr_data=   loan_data.select('int_rate','loan_amnt','term','grade','sub_grade','emp_length','verification_status','home_ownership','annual_inc','purpose','addr_state','open_acc') 
    lr_data.printSchema()

    root
    |-- int_rate: string (nullable = true)
    |-- loan_amnt: integer (nullable = true)
    |-- term: string (nullable = true)
    |-- grade: string (nullable = true)
    |-- sub_grade: string (nullable = true)
    |-- emp_length: string (nullable = true)
    |-- verification_status: string (nullable = true)
    |-- home_ownership: string (nullable = true)
    |-- annual_inc: double (nullable = true)
    |-- purpose: string (nullable = true)
    |-- addr_state: string (nullable = true)
    |-- open_acc: string (nullable = true)

Here, in the multinomial regression model, my target variable is supposed to be int_rate (which is of string type, and is probably why I get this error at runtime).
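Since Spark ML requires a numeric label column, int_rate has to be cleaned and cast before fitting. A minimal pure-Python sketch of that cleaning logic, assuming int_rate values look like `'10.65%'` (in Spark the same thing would be done with `F.regexp_replace(...).cast('float')`):

```python
def parse_int_rate(raw):
    """Strip whitespace and a possible '%' suffix, then convert to float.

    Assumes values look like '10.65%' or '10.65'.
    """
    return float(raw.strip().rstrip('%'))

print(parse_int_rate('10.65%'))  # → 10.65
print(parse_int_rate(' 7.9 '))   # → 7.9
```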

But initially I tried using only two columns in the regression model: int_rate and loan_amnt.

Here is the code:

    input_data=lr_data.rdd.map(lambda x:(x[0], DenseVector(x[1:2])))
    data3= spark.createDataFrame(input_data,["label","features",])
    data3.printSchema()

    root
    |-- label: string (nullable = true)
    |-- features: vector (nullable = true)

IMP: note that I tried putting the other variables into the DenseVector array, but it left me with a long error like `invalid literal for float(): 36 months`.
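That `invalid literal for float(): 36 months` error comes from the `term` column, whose values are strings like `'36 months'`; the leading number has to be extracted before it can go into a DenseVector. A pure-Python sketch of that parsing step (in Spark you would use `F.regexp_extract` instead):

```python
import re

def parse_term(term):
    """Extract the leading integer from strings like ' 36 months'."""
    match = re.match(r'\s*(\d+)', term)
    if match is None:
        raise ValueError("cannot parse term: %r" % term)
    return float(match.group(1))

print(parse_term(' 36 months'))  # → 36.0
print(parse_term('60 months'))   # → 60.0
```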

    /usr/local/spark/python/pyspark/sql/session.pyc in createDataFrame(self, data, schema, samplingRatio, verifySchema)
    580 
    581         if isinstance(data, RDD):
--> 582             rdd, schema = self._createFromRDD(data.map(prepare), schema, samplingRatio)
    583         else:
    584             rdd, schema = self._createFromLocal(map(prepare, data), schema)

    /usr/local/spark/python/pyspark/sql/session.pyc in _createFromRDD(...)
                if schema is None or isinstance(schema, (list, tuple)):
    380             struct = self._inferSchema(rdd, samplingRatio)
    381             converter = _create_converter(struct)
    382             rdd = rdd.map(converter)

    /usr/local/spark/python/pyspark/sql/session.pyc in _inferSchema(self, rdd, samplingRatio)
    349         :return: :class:`pyspark.sql.types.StructType`
    350         """
    351         first = rdd.first()
    352         if not first:
    353             raise ValueError("The first row in RDD is empty, "
Please tell me how to select more than 2 variables in this regression model. I think I have to typecast each variable in the dataset.

   # split into two partitions
   train_data, test_data = data3.randomSplit([.7,.3], seed = 1)
   lr = LinearRegression(labelCol="label", maxIter=100, regParam= 0.3, elasticNetParam = 0.8)
   linearModel = lr.fit(train_data)

Now, when I run this linearModel fit, I get the following error.

    IllegalArgumentException                  Traceback (most recent call last)
    <ipython-input-20-5f84d575334f> in <module>()
----> 1 linearModel = lr.fit(train_data)

     /usr/local/spark/python/pyspark/ml/base.pyc in fit(self,dataset,params) 
      62                 return self.copy(params)._fit(dataset)
      63             else:
      64                 return self._fit(dataset)
      65         else:
      66             raise ValueError("Params must be either a param map  or a list/tuple of param maps, "

      /usr/local/spark/python/pyspark/ml/wrapper.pyc in _fit(self, dataset)
      263 
      264     def _fit(self, dataset):
      265         java_model = self._fit_java(dataset)
      266         return self._create_model(java_model)
      267 

      /usr/local/spark/python/pyspark/ml/wrapper.pyc in _fit_java(self, dataset)
        260         """
        261         self._transfer_params_to_java()
        262         return self._java_obj.fit(dataset._jdf)
        263 
        264     def _fit(self, dataset):

       /usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
        1131         answer = self.gateway_client.send_command(command)
        1132         return_value = get_return_value(
        1133             answer, self.gateway_client, self.target_id, self.name)

        1134 
        1135         for temp_arg in temp_args:

       /usr/local/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
        77                 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
        78             if  s.startswith('java.lang.IllegalArgumentException: '):

---> 79                 raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
        80             raise
        81     return deco

        IllegalArgumentException: u'requirement failed: Column label must be of type NumericType but was actually of type StringType.'

Please help me. I have tried every method I know of for converting the string values to numeric, but nothing makes a difference. My target variable int_rate is of string type by default, but it holds numeric values. One more thing: I have to use the whole lr_data dataset in my regression model. How can I do that? Thanks in advance :)

1 Answer:

Answer 0 (score: 0)

Try this:

    from pyspark.ml.linalg import Vectors
    from pyspark.ml.regression import LinearRegression
    from pyspark.sql.types import *
    import pyspark.sql.functions as F

    cols = lr_data.columns
    input_data = lr_data.rdd.map(lambda x: (x['int_rate'], Vectors.dense([x[col] for col in cols if col != 'int_rate'])))\
                            .toDF(["label", "features"])\
                            .select([F.col('label').cast(FloatType()).alias('label'), 'features'])

    train_data, test_data = input_data.randomSplit([.7, .3], seed=1)

    lr = LinearRegression(labelCol="label", maxIter=100, regParam=0.3, elasticNetParam=0.8)
    linearModel = lr.fit(train_data)

You can use this if all of the columns can be cast to a numeric type.
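Once `linearModel` is fit, you would normally check it against `test_data`; in Spark that is done with `RegressionEvaluator(metricName='rmse')` on the predictions. As a pure-Python sketch of what that metric computes (the numbers below are hypothetical):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error, the default metric Spark's
    RegressionEvaluator reports for regression models."""
    assert len(actual) == len(predicted)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

print(rmse([10.5, 13.2, 7.9], [10.0, 13.0, 8.5]))
```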