Error when converting CSV to Parquet

Date: 2017-08-11 08:22:35

Tags: python csv apache-spark pyspark

I have a problem with my Python script. The script is supposed to convert my CSV file into a Parquet file, but when I run it I get this error:

py4j.protocol.Py4JJavaError: An error occurred while calling o59.csv. : java.io.IOException: No FileSystem for scheme: null

What is o59.csv? It is not my file...

Here is my script:

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql import SparkSession
from pyspark.sql.types import *

spark = SparkSession.builder \
    .appName ("Convert into Parquet") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()


schema = StructType([
    StructField("date", DateType(),True),
    StructField("semaine",IntegerType(),True),
    StructField("annee",IntegerType(),True),
    StructField("mois",IntegerType(),True)])

# read csv
df = spark.read.csv('//home/sshuser3/calendrier.csv', header=True)

# Displays the content of the DataFrame to stdout
df = sqlContext.createDataFrame(rdd,schema)


df.write.parquet('//home/sshuser3/outputParquet/calendrier.parquet')

Do you have any suggestions for me?

The full output is:

spark-submit Convert.py
SPARK_MAJOR_VERSION is set to 2, using Spark2
17/08/11 15:01:58 INFO SparkContext: Running Spark version 2.1.0.2.6.0.10-29
17/08/11 15:01:59 INFO SecurityManager: Changing view acls to: sshuser3
17/08/11 15:01:59 INFO SecurityManager: Changing modify acls to: sshuser3
17/08/11 15:01:59 INFO SecurityManager: Changing view acls groups to: 
17/08/11 15:01:59 INFO SecurityManager: Changing modify acls groups to: 
17/08/11 15:01:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(sshuser3); groups with view permissions: Set(); users  with modify permissions: Set(sshuser3); groups with modify permissions: Set()
17/08/11 15:01:59 INFO Utils: Successfully started service 'sparkDriver' on port 44872.
17/08/11 15:01:59 INFO SparkEnv: Registering MapOutputTracker
17/08/11 15:01:59 INFO SparkEnv: Registering BlockManagerMaster
17/08/11 15:01:59 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/08/11 15:01:59 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/08/11 15:01:59 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-cf215ce8-b19e-4270-9c62-57a37dea703b
17/08/11 15:01:59 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/08/11 15:01:59 INFO SparkEnv: Registering OutputCommitCoordinator
17/08/11 15:01:59 INFO log: Logging initialized @2107ms
17/08/11 15:01:59 INFO Server: jetty-9.2.z-SNAPSHOT
17/08/11 15:01:59 INFO Server: Started @2176ms
17/08/11 15:01:59 INFO ServerConnector: Started ServerConnector@617e9f14{HTTP/1.1}{0.0.0.0:4040}
17/08/11 15:01:59 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@31c59a{/jobs,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@498e3b10{/jobs/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7836dfef{/jobs/job,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2d7e5980{/jobs/job/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@6c63d029{/stages,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@732e5b44{/stages/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2bfe44d0{/stages/stage,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@6a3c33dc{/stages/stage/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4707f607{/stages/pool,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@16843d33{/stages/pool/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@6e341ff2{/storage,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2ec88f8e{/storage/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2d5cca61{/storage/rdd,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4c3ab7d9{/storage/rdd/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@289b1fcb{/environment,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@30201b75{/environment/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5d07431a{/executors,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2b18c427{/executors/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@72eda80f{/executors/threadDump,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1891056d{/executors/threadDump/json,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3bbbcd13{/static,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@71f8537e{/,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@72382d83{/api,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2f2da5a4{/jobs/job/kill,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@27e6b3da{/stages/stage/kill,null,AVAILABLE,@Spark}
17/08/11 15:01:59 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.0.18:4040
17/08/11 15:02:00 INFO RequestHedgingRMFailoverProxyProvider: Looking for the active RM in [rm1, rm2]...
17/08/11 15:02:00 INFO RequestHedgingRMFailoverProxyProvider: Found active RM [rm2]
17/08/11 15:02:00 INFO Client: Requesting a new application from cluster with 2 NodeManagers
17/08/11 15:02:00 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (25600 MB per container)
17/08/11 15:02:00 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/08/11 15:02:00 INFO Client: Setting up container launch context for our AM
17/08/11 15:02:00 INFO Client: Setting up the launch environment for our AM container
17/08/11 15:02:00 INFO Client: Preparing resources for our AM container
17/08/11 15:02:02 INFO Client: Uploading resource file:/usr/hdp/current/spark2-client/python/lib/pyspark.zip -> adl://home/user/sshuser3/.sparkStaging/application_1501619838490_0047/pyspark.zip
17/08/11 15:02:03 INFO Client: Uploading resource file:/usr/hdp/current/spark2-client/python/lib/py4j-0.10.4-src.zip -> adl://home/user/sshuser3/.sparkStaging/application_1501619838490_0047/py4j-0.10.4-src.zip
17/08/11 15:02:03 INFO Client: Uploading resource file:/tmp/spark-813e8548-f7f4-4ff7-80f4-236e0e3b5ee2/__spark_conf__630655753844577263.zip -> adl://home/user/sshuser3/.sparkStaging/application_1501619838490_0047/__spark_conf__.zip
17/08/11 15:02:03 INFO SecurityManager: Changing view acls to: sshuser3
17/08/11 15:02:03 INFO SecurityManager: Changing modify acls to: sshuser3
17/08/11 15:02:03 INFO SecurityManager: Changing view acls groups to: 
17/08/11 15:02:03 INFO SecurityManager: Changing modify acls groups to: 
17/08/11 15:02:03 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(sshuser3); groups with view permissions: Set(); users  with modify permissions: Set(sshuser3); groups with modify permissions: Set()
17/08/11 15:02:03 INFO Client: Submitting application application_1501619838490_0047 to ResourceManager
17/08/11 15:02:03 INFO YarnClientImpl: Submitted application application_1501619838490_0047
17/08/11 15:02:03 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1501619838490_0047 and attemptId None
17/08/11 15:02:04 INFO Client: Application report for application_1501619838490_0047 (state: ACCEPTED)
17/08/11 15:02:04 INFO Client: 
     client token: N/A
     diagnostics: AM container is launched, waiting for AM container to Register with RM
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1502463723834
     final status: UNDEFINED
     tracking URL: http://hn1-pzhdla.stikrkashgqefoqg3xnv2chu0d.fx.internal.cloudapp.net:8088/proxy/application_1501619838490_0047/
     user: sshuser3
17/08/11 15:02:05 INFO Client: Application report for application_1501619838490_0047 (state: ACCEPTED)
17/08/11 15:02:06 INFO Client: Application report for application_1501619838490_0047 (state: ACCEPTED)
17/08/11 15:02:07 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
17/08/11 15:02:07 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> hn0-pzhdla.stikrkashgqefoqg3xnv2chu0d.fx.internal.cloudapp.net,hn1-pzhdla.stikrkashgqefoqg3xnv2chu0d.fx.internal.cloudapp.net, PROXY_URI_BASES -> http://hn0-pzhdla.stikrkashgqefoqg3xnv2chu0d.fx.internal.cloudapp.net:8088/proxy/application_1501619838490_0047,http://hn1-pzhdla.stikrkashgqefoqg3xnv2chu0d.fx.internal.cloudapp.net:8088/proxy/application_1501619838490_0047), /proxy/application_1501619838490_0047
17/08/11 15:02:07 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/08/11 15:02:07 INFO Client: Application report for application_1501619838490_0047 (state: RUNNING)
17/08/11 15:02:07 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 10.0.0.7
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1502463723834
     final status: UNDEFINED
     tracking URL: http://hn1-pzhdla.stikrkashgqefoqg3xnv2chu0d.fx.internal.cloudapp.net:8088/proxy/application_1501619838490_0047/
     user: sshuser3
17/08/11 15:02:07 INFO YarnClientSchedulerBackend: Application application_1501619838490_0047 has started running.
17/08/11 15:02:07 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37070.
17/08/11 15:02:07 INFO NettyBlockTransferService: Server created on 10.0.0.18:37070
17/08/11 15:02:07 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/08/11 15:02:07 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.0.18, 37070, None)
17/08/11 15:02:07 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.0.18:37070 with 366.3 MB RAM, BlockManagerId(driver, 10.0.0.18, 37070, None)
17/08/11 15:02:07 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.0.18, 37070, None)
17/08/11 15:02:07 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.0.18, 37070, None)
17/08/11 15:02:08 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1651e3d3{/metrics/json,null,AVAILABLE,@Spark}
17/08/11 15:02:08 INFO EventLoggingListener: Logging events to adl:///hdp/spark2-events/application_1501619838490_0047
17/08/11 15:02:10 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.0.0.7:51484) with ID 1
17/08/11 15:02:10 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.0.7:40909 with 3.0 GB RAM, BlockManagerId(1, 10.0.0.7, 40909, None)
17/08/11 15:02:12 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.0.0.4:42318) with ID 2
17/08/11 15:02:12 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.0.4:46697 with 3.0 GB RAM, BlockManagerId(2, 10.0.0.4, 46697, None)
17/08/11 15:02:12 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
17/08/11 15:02:12 INFO SharedState: Warehouse path is 'file:/home/sshuser3/spark-warehouse/'.
17/08/11 15:02:12 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@66612a38{/SQL,null,AVAILABLE,@Spark}
17/08/11 15:02:12 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5b8829c6{/SQL/json,null,AVAILABLE,@Spark}
17/08/11 15:02:12 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@367f0e38{/SQL/execution,null,AVAILABLE,@Spark}
17/08/11 15:02:12 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4f95bd8d{/SQL/execution/json,null,AVAILABLE,@Spark}
17/08/11 15:02:12 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@468f4cc5{/static/sql,null,AVAILABLE,@Spark}
17/08/11 15:02:12 WARN DataSource: Error while looking for metadata directory.
Traceback (most recent call last):
  File "/home/sshuser3/Convert.py", line 22, in <module>
    df = spark.read.csv('//home/sshuser3/calendrier.csv',header = True)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 380, in csv
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o59.csv.
: java.io.IOException: No FileSystem for scheme: null
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2786)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:372)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:370)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:344)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:415)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

After that there are more log lines, but they are not important.

FaigB suggested that the problem might be the call to the createDataFrame function, but I think the problem occurs before that function. Here is the log, where we can see the error comes right after read.csv:

Traceback (most recent call last):
  File "/home/sshuser3/Convert.py", line 21, in <module>
    df = spark.read.csv('//home/sshuser3/calendrier.csv', header=True)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 380, in csv
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o59.csv.
: java.io.IOException: No FileSystem for scheme: null
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2786)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:372)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:370)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:344)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:415)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

2 Answers:

Answer 0 (score: 0):

I would suggest double-checking this snippet of your code:

# Displays the content of the DataFrame to stdout
df = sqlContext.createDataFrame(rdd,schema)

It looks like you should first convert your DataFrame into an RDD and then map it onto the constructed schema using

rdd = df.rdd

I did a small experiment:

# read the csv file
df = spark.read.csv('/<path_to_csv>', header=True)

# cast types for specific columns, because the loaded data comes in as strings
df = df.select(df.<column_name>.cast('timestamp'), df.<column_name>.cast('int'), df.<column_name>.cast('int'), df.<column_name>.cast('int'))

# create a dataframe from the RDD using the schema
dt = spark.createDataFrame(df.rdd, schema)

# write it out as parquet
dt.write.parquet('/path_to_parquet_file')
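
As a side note, a simpler route may be to pass the schema straight to the CSV reader and skip the RDD round-trip entirely. This is only a sketch, assuming Spark 2.x (where DataFrameReader.csv accepts a schema parameter) and reusing the placeholder paths from above:

# minimal sketch: apply the schema at read time instead of via createDataFrame
df = spark.read.csv('/<path_to_csv>', header=True, schema=schema)
df.write.parquet('/path_to_parquet_file')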

Answer 1 (score: 0):

After some research, I found my problem: my path was not written correctly. I had to use file:/ in front of my path instead of just //.
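
For reference, here is a sketch of the corrected calls with the paths from my script, assuming the standard file:// URI form. Without a scheme, the leading // apparently makes Hadoop parse "home" as a URI authority with a null scheme, which seems to be why it reports "No FileSystem for scheme: null":

# read from the local filesystem with an explicit scheme
df = spark.read.csv('file:///home/sshuser3/calendrier.csv', header=True)

# write the Parquet output, again with an explicit scheme
df.write.parquet('file:///home/sshuser3/outputParquet/calendrier.parquet')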

So this thread can be closed. Thank you for your answers.