Spark PSV file to DataFrame conversion error

Time: 2017-03-23 21:18:37

Tags: scala apache-spark spark-dataframe

The Spark version I am using is 2.0+. All I want to do is read a pipe (|) delimited values file into a DataFrame and then run SQL queries against it. I have also tried a comma-separated file. I am interacting with Spark through spark-shell. I downloaded the spark-csv jar and launched spark-shell with the --packages option to pull it into my session, and it imported successfully.
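The launch command described above was not shown; it presumably looked something like the following (the package coordinates and version are an assumption, for a Scala 2.11 build of spark-csv):

```shell
# Hypothetical spark-shell invocation pulling in the spark-csv package.
# The exact version was not stated in the question; 1.5.0 is assumed here.
spark-shell --packages com.databricks:spark-csv_2.11:1.5.0
```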

import spark.implicits._
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql._
val session = SparkSession.builder().appName("test").master("local").getOrCreate()
val df = session.read.format("com.databricks.spark.csv").option("header", "true").option("mode", "DROPMALFORMED").load("testdata.txt");

WARN Hive: Failed to access metastore. This class should not accessed in runtime.
apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hi
 at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236)
 at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)
 at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)
 at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
 at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:171)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
 at java.lang.reflect.Constructor.newInstance(Unknown Source)
 at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
 at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
 at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
 at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)

1 Answer:

Answer 0 (score: 0)

You can load the PSV file directly into an RDD, split each line according to your requirements, and then apply a schema on top of it. Here is an example in Java.

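The approach the answer describes (read as text, split on the delimiter, then attach a schema) can be sketched in Scala instead of Java, matching the question's spark-shell session. The file name, the column names, and the three-column layout are all assumptions for illustration:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Read the pipe-delimited file as plain text and split each line.
// "|" is a regex metacharacter, so it must be escaped in split().
// If the file has a header row, filter it out before applying the schema.
val rowRdd = session.sparkContext
  .textFile("testdata.txt")
  .map(_.split("\\|"))
  .map(fields => Row(fields: _*))

// Hypothetical three-column, all-string schema; adjust names and
// types to match the actual file.
val schema = StructType(Seq(
  StructField("col1", StringType, nullable = true),
  StructField("col2", StringType, nullable = true),
  StructField("col3", StringType, nullable = true)
))

val df = session.createDataFrame(rowRdd, schema)
df.createOrReplaceTempView("testdata")
session.sql("SELECT col1 FROM testdata").show()
```

Note that in Spark 2.0+ the CSV reader is also built in, so `session.read.option("delimiter", "|").option("header", "true").csv("testdata.txt")` is an alternative that avoids the external spark-csv package entirely.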

感谢。