I have been using the Google Cloud Dataproc service to do machine learning modeling on a Spark cluster. I have successfully loaded the data from a Google Storage bucket. However, I am not sure how to write a pandas DataFrame or a Spark DataFrame back to the Cloud Storage bucket as a CSV.
When I use the following command, it gives me an error:
df.to_csv("gs://mybucket/")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py", line 1745, in to_csv
    formatter.save()
  File "/opt/conda/lib/python3.6/site-packages/pandas/io/formats/csvs.py", line 156, in save
    compression=self.compression)
  File "/opt/conda/lib/python3.6/site-packages/pandas/io/common.py", line 400, in _get_handle
    f = open(path_or_buf, mode, encoding=encoding)
FileNotFoundError: [Errno 2] No such file or directory: 'gs://mybucket/'
The following command does work, but I am not sure where it saves the file:
df.to_csv("data.csv")
I also followed the post Write a Pandas DataFrame to Google Cloud Storage or BigQuery, and it gave the following error:
import google.datalab.storage as storage
ModuleNotFoundError: No module named 'google.datalab'
I am fairly new to Google Cloud Dataproc and Spark, and I am hoping someone can help me understand how to save my output pandas DataFrame to a gcloud bucket.
Thanks in advance!!
######## In response to Igor's request
from pyspark.ml.classification import RandomForestClassifier as RF
# Train a random forest on the training split and score the test split
rf = RF(labelCol='label', featuresCol='features', numTrees=200)
fit = rf.fit(trainingData)
transformed = fit.transform(testData)
from pyspark.mllib.evaluation import BinaryClassificationMetrics as metric
results = transformed.select(['probability', 'label'])
# Decile creation for the output
test = results.toPandas()
test['X0'] = test.probability.str[0]       # P(label = 0)
test['X1'] = test.probability.str[1]       # P(label = 1)
test = test.drop(columns=['probability'])
test = test.sort_values(by='X1', ascending=False)
test['rownum'] = test.reset_index().index  # 0-based row number after sorting
x = round(test['rownum'].count() / 10)     # rows per decile
test['rank'] = test.rownum // x + 1        # decile 1..10 (rownum is 0-based)
Answer (score: 2)
The simplest way is to convert the pandas DataFrame to a Spark DataFrame and write that to GCS.
See https://stackoverflow.com/a/45495969/3227693 for instructions on how to do this.