PySpark type conversion problem: converting strings to integers

Asked: 2018-08-28 07:20:35

Tags: string pyspark type-conversion int

I want to use an ML algorithm in PySpark.

Problem statement: use an ML algorithm together with PySpark.

My dataset consists of strings.

However, the algorithm requires type conversion: int for the label and float for the feature values.

Here is the NaiveBayes sample code.
Reference: https://spark.apache.org/docs/latest/mllib-data-types.html

from pyspark.mllib.linalg import Vectors
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import NaiveBayes

data = sc.textFile('/home/kiwoong/spark/bin/data-new/data3')  # load the raw text file
rdd1 = data.map(lambda x: x.split('\t'))                      # split each line on tab characters
rdd2 = rdd1.map(lambda x: (int(x[14]), Vectors.dense([float(v) for v in x[:14]])))  # column 14 is the label, columns 0-13 are the features
rdd3 = rdd2.map(lambda x: LabeledPoint(x[0], x[1]))           # wrap each (label, features) pair as a LabeledPoint
model = NaiveBayes.train(rdd3)
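
To sanity-check the parsing before training, the first record can be inspected; a minimal sketch (the value shown in the comment is hypothetical and depends on the data):

rdd3.take(1)  # e.g. [LabeledPoint(1.0, [0.5, 0.3, ...])] -- one label plus a 14-dimensional dense vector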

Link to my dataset: https://data.boston.gov/dataset/crime-incident-reports-august-2015-to-date-source-new-system/resource/12cb3883-56f5-47de-afa5-3b1cf61b257b

I wrote the following code to convert the types.

rdd  = sc.textFile('crimeincidentreports.csv')  # attached file
rdd2 = rdd.map(lambda x: x.split(','))

def gotest(dic, arr):
    # look up the integer id assigned to the value `arr`
    return dic[arr]

def make_rdd(inrdd):
    test1 = inrdd
    test2 = set(test1.collect())                     # distinct values of the column, gathered on the driver
    test3 = {x: i + 1 for i, x in enumerate(test2)}  # assign each distinct value an integer id starting at 1
    go1 = test1.map(lambda x: gotest(test3, x))      # replace every value with its id

    return go1

I tested the following:

tt = make_rdd(rdd2.map(lambda x:x[4]))
tt.take(5)

Output: [11, 17, 21, 17, 2]
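
Note that make_rdd assigns the integer ids by enumerating a Python set, so the id given to each value is arbitrary and can differ between runs. A small local illustration (plain Python, with hypothetical values):

# enumerating a set yields an arbitrary but fixed order within one run
values = {'Larceny', 'Vandalism', 'Assault'}
mapping = {x: i + 1 for i, x in enumerate(values)}
print(mapping)  # e.g. {'Assault': 1, 'Larceny': 2, 'Vandalism': 3}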

Then I tried the following:

rdd3 = rdd2.map(lambda x: (make_rdd(x[i]) for i in range(17)))
rdd3.take(10)

which produced the following error:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-21-c242ce9ecc10> in <module>()
----> 1 te.take(5)

C:\spark-2.3.1-bin-hadoop2.7\python\pyspark\rdd.py in take(self, num)
   1373 
   1374             p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1375             res = self.context.runJob(self, takeUpToNumLeft, p)
   1376 
   1377             items += res

C:\spark-2.3.1-bin-hadoop2.7\python\pyspark\context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
   1011         # SparkContext#runJob.
   1012         mappedRDD = rdd.mapPartitions(partitionFunc)
-> 1013         sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
   1014         return list(_load_from_socket(sock_info, mappedRDD._jrdd_deserializer))
   1015 

C:\spark-2.3.1-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

C:\spark-2.3.1-bin-hadoop2.7\python\pyspark\sql\utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

C:\spark-2.3.1-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 15.0 failed 1 times, most recent failure: Lost task 0.0 in stage 15.0 (TID 21, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
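
For context: inside rdd2.map, the argument x[i] is a plain string, but make_rdd immediately calls .collect() and .map() on it as if it were an RDD; moreover, Spark does not allow RDD operations inside a task running on an executor, so make_rdd cannot be used from within another map in any case. A sketch that avoids both problems (assuming 17 columns, as above) builds one dictionary per column on the driver and then converts each row with plain Python dictionaries:

# build one value -> id dictionary per column on the driver
dicts = []
for i in range(17):
    distinct_values = rdd2.map(lambda x, i=i: x[i]).distinct().collect()
    dicts.append({v: j + 1 for j, v in enumerate(distinct_values)})

# rows can now be converted with ordinary dictionary lookups,
# which are safe to ship to the executors
rdd3 = rdd2.map(lambda x: [dicts[i][x[i]] for i in range(17)])
rdd3.take(10)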

How can I solve this?

Thank you.

0 Answers:

No answers yet.