I'm using a Python 3.5 + Spark notebook in Watson Studio.
I'm trying to export a Spark DataFrame to Cloud Object Storage, but the export keeps failing.
The notebook shows no error, and I have managed to export smaller DataFrames without any problem.
When I check the object store afterwards, only part of the DataFrame is there.
This is the code I use to export:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

from ingest.Connectors import Connectors

S3saveoptions = {
    Connectors.BluemixCloudObjectStorage.URL: paid_credentials['endpoint'],
    Connectors.BluemixCloudObjectStorage.IAM_URL: paid_credentials['iam_url'],
    Connectors.BluemixCloudObjectStorage.RESOURCE_INSTANCE_ID: paid_credentials['resource_instance_id'],
    Connectors.BluemixCloudObjectStorage.API_KEY: paid_credentials['api_key'],
    Connectors.BluemixCloudObjectStorage.TARGET_BUCKET: paid_bucket,
    Connectors.BluemixCloudObjectStorage.TARGET_FILE_NAME: "name.csv",
    Connectors.BluemixCloudObjectStorage.TARGET_WRITE_MODE: "write",
    Connectors.BluemixCloudObjectStorage.TARGET_FILE_FORMAT: "csv",
    Connectors.BluemixCloudObjectStorage.TARGET_FIRST_LINE_HEADER: "true"
}

name = df.write.format('com.ibm.spark.discover').options(**S3saveoptions).save()
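For context, the `.options(**S3saveoptions)` call relies on Python keyword-argument unpacking, so every `Connectors.BluemixCloudObjectStorage.*` constant in the dict is assumed to resolve to a plain string key. A minimal sketch of that mechanism, with placeholder key names standing in for the connector constants (not the real values):

```python
def record_options(**kwargs):
    # Stand-in for DataFrameWriter.options: it just returns the keyword
    # arguments it receives, so we can inspect what ** unpacking produces.
    return kwargs

# Placeholder keys mirroring the question's options dict; the actual
# Connectors.BluemixCloudObjectStorage.* constants may differ.
s3_save_options = {
    "TargetFileName": "name.csv",     # stand-in for TARGET_FILE_NAME
    "TargetWriteMode": "write",       # stand-in for TARGET_WRITE_MODE
    "TargetFileFormat": "csv",        # stand-in for TARGET_FILE_FORMAT
    "TargetFirstLineHeader": "true",  # stand-in for TARGET_FIRST_LINE_HEADER
}

received = record_options(**s3_save_options)
print(received["TargetFileName"])  # → name.csv
```

If any key in the dict were not a string, the `**` unpacking would raise a `TypeError` before the write even starts, which is one quick thing to rule out.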