Corrupted Parquet files

Time: 2018-01-22 12:39:08

Tags: hadoop apache-spark hdfs sqoop parquet

I am running into a problem when trying to read Parquet files with Spark. The Parquet files were created by Sqoop:

sqoop import \
  --connect jdbc:teradata://<ip>/Database=<DB> \
  --connection-manager org.apache.sqoop.teradata.TeradataConnManager \
  --username <user> \
  --password <pass> \
  --table OFFERING \
  --target-dir /DWH/OFFERING \
  --as-parquetfile \
  --compress \
  --compression-codec org.apache.hadoop.io.compress.SnappyCodec \
  -m 8 

The files in the HDFS directory look correct:

[root@omm102 ~]# hdfs dfs -ls /DWH/OFFERING
Found 9 items
-rw-r--r--   3 hdfs hdfs          0 2018-01-19 20:44 /DWH/OFFERING/_SUCCESS
-rw-r--r--   3 hdfs hdfs       3630 2018-01-19 20:44 /DWH/OFFERING/part-m-00000
-rw-r--r--   3 hdfs hdfs       4046 2018-01-19 20:44 /DWH/OFFERING/part-m-00001
-rw-r--r--   3 hdfs hdfs       3146 2018-01-19 20:44 /DWH/OFFERING/part-m-00002
-rw-r--r--   3 hdfs hdfs       3703 2018-01-19 20:44 /DWH/OFFERING/part-m-00003
-rw-r--r--   3 hdfs hdfs       3065 2018-01-19 20:44 /DWH/OFFERING/part-m-00004
-rw-r--r--   3 hdfs hdfs       2972 2018-01-19 20:44 /DWH/OFFERING/part-m-00005
-rw-r--r--   3 hdfs hdfs       3405 2018-01-19 20:44 /DWH/OFFERING/part-m-00006
-rw-r--r--   3 hdfs hdfs       3091 2018-01-19 20:44 /DWH/OFFERING/part-m-00007

I have also verified them with fsck:

[root@omm101 ~]# hdfs fsck /DWH/OFFERING -files
Connecting to namenode via http://omm101.xxx:<port>/fsck?ugi=root&files=1&path=%2FDWH%2FOFFERING
FSCK started by root (auth:SIMPLE) from /<ip> for path /DWH/OFFERING at Mon Jan 22 15:40:35 GST 2018
/DWH/OFFERING <dir>
/DWH/OFFERING/_SUCCESS 0 bytes, 0 block(s):  OK
/DWH/OFFERING/part-m-00000 3630 bytes, 1 block(s):  OK
/DWH/OFFERING/part-m-00001 4046 bytes, 1 block(s):  OK
/DWH/OFFERING/part-m-00002 3146 bytes, 1 block(s):  OK
/DWH/OFFERING/part-m-00003 3703 bytes, 1 block(s):  OK
/DWH/OFFERING/part-m-00004 3065 bytes, 1 block(s):  OK
/DWH/OFFERING/part-m-00005 2972 bytes, 1 block(s):  OK
/DWH/OFFERING/part-m-00006 3405 bytes, 1 block(s):  OK
/DWH/OFFERING/part-m-00007 3091 bytes, 1 block(s):  OK
Status: HEALTHY
 Total size:    27058 B
 Total dirs:    1
 Total files:   9
 Total symlinks:                0
 Total blocks (validated):      8 (avg. block size 3382 B)
 Minimally replicated blocks:   8 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          5
 Number of racks:               1
FSCK ended at Mon Jan 22 15:40:35 GST 2018 in 2 milliseconds

But when I start spark-shell (2.2.1) and try to read them:

val offering = spark.read.parquet("/DWH/OFFERING/")

I get the following error:

java.io.IOException: Could not read footer for file: FileStatus{path=hdfs://mycluster/DWH/ACCOUNT_PARTY/part-m-00000; isDirectory=false; length=60604112; replication=0; blocksize=0; modification_time=0; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false}
        at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$readParquetFootersInParallel$1.apply(ParquetFileFormat.scala:506)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$readParquetFootersInParallel$1.apply(ParquetFileFormat.scala:493)
        at scala.collection.parallel.AugmentedIterableIterator$class.flatmap2combiner(RemainsIterator.scala:132)
        at scala.collection.parallel.immutable.ParVector$ParVectorIterator.flatmap2combiner(ParVector.scala:62)
        at scala.collection.parallel.ParIterableLike$FlatMap.leaf(ParIterableLike.scala:1072)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
        at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
        at scala.collection.parallel.ParIterableLike$FlatMap.tryLeaf(ParIterableLike.scala:1068)
        at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
        at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
        at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinTask.doJoin(ForkJoinTask.java:341)
        at scala.concurrent.forkjoin.ForkJoinTask.join(ForkJoinTask.java:673)
        at scala.collection.parallel.ForkJoinTasks$WrappedTask$class.sync(Tasks.scala:378)
        at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:443)
        at scala.collection.parallel.ForkJoinTasks$class.executeAndWaitResult(Tasks.scala:426)
        at scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:56)
        at scala.collection.parallel.ParIterableLike$ResultMapping.leaf(ParIterableLike.scala:958)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
        at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
        at scala.collection.parallel.ParIterableLike$ResultMapping.tryLeaf(ParIterableLike.scala:953)
        at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
        at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
        at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: hdfs://mycluster/DWH/ACCOUNT_PARTY/part-m-00000 is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [48, 52, 53, 10]
        at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:476)
        at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:445)
        at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:421)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$readParquetFootersInParallel$1.apply(ParquetFileFormat.scala:499)
        ... 32 more

I am not sure how to proceed.

Any help would be appreciated.

EDIT1:

To verify that this is not related to the compression codec, I re-ran the Sqoop import without compression:

sqoop import \
  --connect jdbc:teradata://<ip>/Database=<db> \
  --connection-manager org.apache.sqoop.teradata.TeradataConnManager \
  --username <usr> \
  --password <paswd> \
  --table OFFERING \
  --target-dir /DWH/TST/OFFERING \
  --as-parquetfile \
  -m 8 

and then tried to read it again:

val offering = spark.read.parquet("/DWH/TST/OFFERING/")

but the result was the same.

1 Answer:

Answer 0 (score: 1)

The answer here turned out to be very simple. The following error is 100% accurate:

Caused by: java.lang.RuntimeException: hdfs://mycluster/DWH/ACCOUNT_PARTY/part-m-00000 is not a Parquet file.

The files generated by Sqoop are not Parquet at all; they are just CSV files. Even though I passed the --as-parquetfile flag, it had no effect, because the Hortonworks Connector for Teradata does not support Parquet. Interestingly, Sqoop does not emit any WARN during an import that uses the unsupported flag.
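
You can verify this yourself by looking at the last four bytes of one of the part files: every Parquet file ends with the ASCII magic PAR1 (bytes 80, 65, 82, 49), while the bytes Spark reported, [48, 52, 53, 10], decode to "045" followed by a newline, i.e. plain text. Below is a minimal spark-shell sketch of such a check (the path is only an example; point it at one of your own part files):

import org.apache.hadoop.fs.{FileSystem, Path}

// example path - substitute one of the files Sqoop produced
val path = new Path("/DWH/OFFERING/part-m-00000")
val fs   = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val len  = fs.getFileStatus(path).getLen

// read the last 4 bytes of the file
val in   = fs.open(path)
val tail = new Array[Byte](4)
in.readFully(len - 4, tail)
in.close()

println(tail.map(_.toInt).mkString("[", ", ", "]"))   // a real Parquet file prints [80, 65, 82, 49]
println(new String(tail, "US-ASCII"))                 // and decodes to "PAR1"

If the tail is anything other than PAR1, Spark will refuse to read the file no matter how it was produced.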