Trying to read data from a URL with Spark on the Databricks Community Edition platform. I have tried spark.read.csv together with SparkFiles, but I am still missing some simple point.
url = "https://raw.githubusercontent.com/thomaspernet/data_csv_r/master/data/adult.csv"
from pyspark import SparkFiles
spark.sparkContext.addFile(url)
# sc.addFile(url)
# sqlContext = SQLContext(sc)
# df = sqlContext.read.csv(SparkFiles.get("adult.csv"), header=True, inferSchema= True)
df = spark.read.csv(SparkFiles.get("adult.csv"), header=True, inferSchema=True)
It fails with a path-related error:
Path does not exist: dbfs:/local_disk0/spark-9f23ed57-133e-41d5-91b2-12555d641961/userFiles-d252b3ba-499c-42c9-be48-96358357fb75/adult.csv;'
I also tried another approach:
val content = scala.io.Source.fromURL("https://raw.githubusercontent.com/thomaspernet/data_csv_r/master/data/adult.csv").mkString
# val list = content.split("\n").filter(_ != "")
val rdd = sc.parallelize(content)
val df = rdd.toDF
SyntaxError: invalid syntax
File "<command-332010883169993>", line 16
val content = scala.io.Source.fromURL("https://raw.githubusercontent.com/thomaspernet/data_csv_r/master/data/adult.csv").mkString
^
SyntaxError: invalid syntax
Should the data be loaded directly into a Databricks folder, or should I be able to load it directly from the URL with spark.read? Any suggestions?
Answer 0 (score: 1)
Try this:
url = "https://raw.githubusercontent.com/thomaspernet/data_csv_r/master/data/adult.csv"
from pyspark import SparkFiles
spark.sparkContext.addFile(url)
df = spark.read.csv("file://" + SparkFiles.get("adult.csv"), header=True, inferSchema=True)
Just fetch a few columns of the csv from the URL:
df.select("age","workclass","fnlwgt","education").show(10)
+---+----------------+------+---------+
|age| workclass|fnlwgt|education|
+---+----------------+------+---------+
| 39| State-gov| 77516|Bachelors|
| 50|Self-emp-not-inc| 83311|Bachelors|
| 38| Private|215646| HS-grad|
| 53| Private|234721| 11th|
| 28| Private|338409|Bachelors|
| 37| Private|284582| Masters|
| 49| Private|160187| 9th|
| 52|Self-emp-not-inc|209642| HS-grad|
| 31| Private| 45781| Masters|
| 42| Private|159449|Bachelors|
+---+----------------+------+---------+
SparkFiles.get returns the absolute path of the file on the driver's or worker's local filesystem; that is why Spark could not find it. Without the file:// prefix, Databricks resolves the bare path against dbfs:/, which is exactly what the error message shows.
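A minimal PySpark illustration of this point (a sketch, assuming spark.sparkContext.addFile(url) from the question has already been run in the same notebook):
from pyspark import SparkFiles

local_path = SparkFiles.get("adult.csv")
print(local_path)
# e.g. /local_disk0/spark-.../userFiles-.../adult.csv -- a path on the driver's local disk
# A bare path like this is resolved against dbfs:/ on Databricks, hence the
# "Path does not exist: dbfs:/..." error; prefixing it with "file://" points
# spark.read at the local filesystem instead, as in the answer above.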
Answer 1 (score: 1)
The above answer works, but it can sometimes be error-prone: SparkFiles.get may return null.
Option #1 below is a better way of getting a file from any URL or public S3 location. IOUtils.toString will do the trick; see the Apache Commons IO docs. The jar is already present on any Spark cluster, whether it is Databricks or any other Spark installation.
Below is the Scala way of doing it... I have taken a raw GitHub csv file for this example... it can be changed as per the requirements.
Option #1:
import org.apache.commons.io.IOUtils // the commons-io jar is already on the Spark cluster, no need to worry
import java.net.URL

val urlfile = new URL("https://raw.githubusercontent.com/lrjoshi/webpage/master/public/post/c159s.csv")
// .toDS() relies on spark.implicits._, which notebooks and spark-shell import automatically
val testcsvgit = IOUtils.toString(urlfile, "UTF-8").lines.toList.toDS()
val testcsv = spark
  .read.option("header", true)
  .option("inferSchema", true)
  .csv(testcsvgit)
testcsv.show
Result:
+-----------+------+----+----+---+-----+
|Experiment |Virus |Cell| MOI|hpi|Titer|
+-----------+------+----+----+---+-----+
| EXP I| C159S|OFTu| 0.1| 0| 4.75|
| EXP I| C159S|OFTu| 0.1| 6| 2.75|
| EXP I| C159S|OFTu| 0.1| 12| 2.75|
| EXP I| C159S|OFTu| 0.1| 24| 5.0|
| EXP I| C159S|OFTu| 0.1| 48| 5.5|
| EXP I| C159S|OFTu| 0.1| 72| 7.0|
| EXP I| C159S| STU| 0.1| 0| 4.75|
| EXP I| C159S| STU| 0.1| 6| 3.75|
| EXP I| C159S| STU| 0.1| 12| 4.0|
| EXP I| C159S| STU| 0.1| 24| 3.75|
| EXP I| C159S| STU| 0.1| 48| 3.25|
| EXP I| C159S| STU| 0.1| 72| 3.25|
| EXP I| C159S|OFTu|10.0| 0| 6.5|
| EXP I| C159S|OFTu|10.0| 6| 4.75|
| EXP I| C159S|OFTu|10.0| 12| 4.75|
| EXP I| C159S|OFTu|10.0| 24| 6.25|
| EXP I| C159S|OFTu|10.0| 48| 6.5|
| EXP I| C159S|OFTu|10.0| 72| 7.0|
| EXP I| C159S| STU|10.0| 0| 7.0|
| EXP I| C159S| STU|10.0| 6| 4.75|
+-----------+------+----+----+---+-----+
only showing top 20 rows
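For completeness, since the question itself was in PySpark, here is a rough Python sketch of the same idea (not part of the original answer; urllib stands in for IOUtils, and the whole file is pulled onto the driver, so it only suits small files):
import urllib.request

url = "https://raw.githubusercontent.com/lrjoshi/webpage/master/public/post/c159s.csv"
content = urllib.request.urlopen(url).read().decode("utf-8")

# spark.read.csv also accepts an RDD of CSV-formatted strings
lines = spark.sparkContext.parallelize(content.splitlines())
df = spark.read.csv(lines, header=True, inferSchema=True)
df.show(5)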
Option #2, using SparkFiles: the result will be the same as option #1, as shown below.
import java.net.URL
import org.apache.spark.SparkFiles

val urlfile = "https://raw.githubusercontent.com/lrjoshi/webpage/master/public/post/c159s.csv"
spark.sparkContext.addFile(urlfile)

val df = spark.read
  .option("inferSchema", true)
  .option("header", true)
  .csv("file://" + SparkFiles.get("c159s.csv"))
df.show