SparkR error when writing a data frame to CSV and Parquet

Asked: 2017-09-22 19:36:11

Tags: rstudio sparkr

I'm running into an error when writing a Spark data frame to CSV and Parquet. I've already tried installing winutils, but the error persists.
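
For context, winutils was set up roughly as follows before starting the session (D:/hadoop is a placeholder, not the actual layout on my machine):

    # Placeholder path: HADOOP_HOME must point at the directory whose
    # bin/ subfolder contains winutils.exe.
    Sys.setenv(HADOOP_HOME = "D:/hadoop")
    Sys.setenv(PATH = paste(Sys.getenv("PATH"), "D:/hadoop/bin", sep = ";"))

    library(SparkR)
    sparkR.session(master = "local[*]")  # env vars must be set before this call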

My code:

    # IMEIs that should be dropped from the data
    INVALID_IMEI <- c("012345678901230", "000000000000000")
    setwd("D:/Revas/Jatim Old")
    fileList <- list.files()

    # Schema for the pipe-delimited CDR files
    cdrSchema <- structType(structField("date", "string"),
                            structField("time", "string"),
                            structField("a_number", "string"),
                            structField("b_number", "string"),
                            structField("duration", "integer"),
                            structField("lac_cid", "string"),
                            structField("imei", "string"))

    file <- fileList[1]
    filePath <- paste0("D:/Revas/Jatim Old/", file)
    dataset <- read.df(filePath, source = "csv", header = "false",
                       delimiter = "|", schema = cdrSchema)

    # Drop rows whose IMEI is invalid, NaN, or null
    dataset <- filter(dataset, ifelse(dataset$imei %in% INVALID_IMEI, FALSE, TRUE))
    dataset <- filter(dataset, ifelse(isnan(dataset$imei), FALSE, TRUE))
    dataset <- filter(dataset, ifelse(isNull(dataset$imei), FALSE, TRUE))
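
As an aside, the three filter passes can be collapsed into one using the same ifelse-as-negation idiom (a sketch; the result should be identical):

    # One pass: drop rows whose imei is null, NaN, or in the invalid list.
    dataset <- filter(dataset, ifelse(isNull(dataset$imei) |
                                      isnan(dataset$imei) |
                                      dataset$imei %in% INVALID_IMEI,
                                      FALSE, TRUE))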

To export the data frame, I tried the following code:

    # write.df with no "source" argument uses the session's default data
    # source, which is parquet unless spark.sql.sources.default is changed.
    write.df(dataset, "D:/spark/dataset", mode = "overwrite")
    write.parquet(dataset, "D:/spark/dataset", mode = "overwrite")
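
For an actual CSV on disk, the source has to be spelled out explicitly; something like this should work once the underlying write problem is fixed (a sketch, with a placeholder output path):

    # Explicit CSV write; "delimiter" mirrors the option used in read.df above.
    write.df(dataset, path = "D:/spark/dataset_csv", source = "csv",
             mode = "overwrite", delimiter = "|", header = "true")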

Either way, I get the following error:

    Error: Error in save : org.apache.spark.SparkException: Job aborted.
      at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:215)
      at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:173)
      at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:173)
      at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
      at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:173)
      at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:145)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
      at org.apache.spark.sql.execution.comma

1 Answer:

Answer 0 (score: 0):

I've found the likely cause. The problem appears to be the winutils version: I was previously using the 2.6 build. Changing it to 2.8 seems to have solved the problem.
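
For anyone hitting the same error, a quick way to confirm which binary Spark will pick up, sketched with placeholder paths:

    # Re-point HADOOP_HOME at the folder whose bin/ holds the 2.8 winutils.exe
    # (D:/hadoop-2.8 is a placeholder for the actual install location).
    Sys.setenv(HADOOP_HOME = "D:/hadoop-2.8")
    Sys.setenv(PATH = paste(Sys.getenv("PATH"), "D:/hadoop-2.8/bin", sep = ";"))
    # "systeminfo" is a built-in winutils subcommand; if it prints machine
    # details instead of failing, the binary is found and runs correctly.
    system("winutils.exe systeminfo")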