In Spark 1.6.2, I was able to read a local Parquet file in a very simple way:
SQLContext sqlContext = new SQLContext(new SparkContext("local[*]", "Java Spark SQL Example"));
DataFrame parquet = sqlContext.read().parquet("file:///C:/files/myfile.csv.parquet");
parquet.show(20);
I tried upgrading to Spark 2.0.0 and achieving the same thing by running:
SparkSession spark = SparkSession.builder().appName("Java Spark SQL Example").master("local[*]").getOrCreate();
Dataset<Row> parquet = spark.read().parquet("file:///C:/files/myfile.csv.parquet");
parquet.show(20);
This is running on Windows, from IntelliJ (a Java project). I'm not using a Hadoop cluster at the moment (that will come later; for now I'm just trying to get the processing logic right and get familiar with the API).
Unfortunately, when run under Spark 2.0, the code throws an exception:
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/[my intellij project path]/spark-warehouse
at org.apache.hadoop.fs.Path.initialize(Path.java:206)
at org.apache.hadoop.fs.Path.<init>(Path.java:172)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:427)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:411)
at lili.spark.ParquetTest.main(ParquetTest.java:15)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:C:/[my intellij project path]/spark-warehouse
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.<init>(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:203)
... 21 more
I don't know why it is trying to touch anything in my project directory at all. Am I missing some configuration that was a sensible default in Spark 1.6.2 but no longer is in 2.0? In other words, what is the simplest way to read a local Parquet file with Spark 2.0 on Windows?
Answer 0 (score: 2)
It looks like you have run into SPARK-15893. The Spark developers changed how file paths are resolved between 1.6.2 and 2.0.0. According to the comments on that JIRA ticket, you should go to your conf\spark-defaults.conf file and add:
"spark.sql.warehouse.dir=file:///C:/Experiment/spark-2.0.0-bin-without-hadoop/spark-warehouse"
Then you should be able to load the Parquet file like this:
Dataset<Row> parquet = spark.read().parquet("C:/files/myfile.csv.parquet");