Python Spark query of Hive returns only the schema

Date: 2018-04-19 09:06:18

Tags: python apache-spark hadoop hive pyspark

When I select data from Hive, it returns a DataFrame, but I cannot access anything other than the schema.

from pyspark.sql import HiveContext, SQLContext

hive_context = HiveContext(sc)
hive_context.sql("USE myDatabase")
data = hive_context.sql("SELECT * FROM myTable")
data.show()

When I check the type of "data", it returns:

<class 'pyspark.sql.dataframe.DataFrame'>

If I try to show the DataFrame, it returns an error that references dataframe.py, but "data.printSchema()" works and shows the correct data types.

18/04/20 10:12:50 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 793.1 KB, free 509.5 MB)
18/04/20 10:12:50 INFO BlockManagerInfo: Removed broadcast_2_piece0 on localhost:37574 in memory (size: 1198.0 B, free: 511.1 MB)
18/04/20 10:12:50 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 56.5 KB, free 509.5 MB)
18/04/20 10:12:50 INFO ContextCleaner: Cleaned accumulator 2
18/04/20 10:12:50 INFO ContextCleaner: Cleaned accumulator 3
18/04/20 10:12:50 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on localhost:37574 (size: 56.5 KB, free: 511.0 MB)
18/04/20 10:12:50 INFO BlockManagerInfo: Removed broadcast_4_piece0 on localhost:37574 in memory (size: 1198.0 B, free: 511.0 MB)
18/04/20 10:12:50 INFO SparkContext: Created broadcast 6 from showString at NativeMethodAccessorImpl.java:-2
18/04/20 10:12:50 INFO ContextCleaner: Cleaned accumulator 4
18/04/20 10:12:50 INFO ContextCleaner: Cleaned accumulator 6
18/04/20 10:12:50 INFO BlockManagerInfo: Removed broadcast_1_piece0 on localhost:37574 in memory (size: 1198.0 B, free: 511.0 MB)
18/04/20 10:12:50 INFO BlockManagerInfo: Removed broadcast_3_piece0 on localhost:37574 in memory (size: 1198.0 B, free: 511.0 MB)
18/04/20 10:12:50 INFO BlockManagerInfo: Removed broadcast_0_piece0 on localhost:37574 in memory (size: 1197.0 B, free: 511.0 MB)
18/04/20 10:12:50 INFO ContextCleaner: Cleaned accumulator 5
18/04/20 10:12:50 INFO PerfLogger: <PERFLOG method=OrcGetSplits from=org.apache.hadoop.hive.ql.io.orc.ReaderImpl>
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/hdp/current/spark-client/python/pyspark/sql/dataframe.py", line 257, in show
    print(self._jdf.showString(n, truncate))
  File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/usr/hdp/current/spark-client/python/pyspark/sql/utils.py", line 45, in deco
    return f(*a, **kw)
  File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o134.showString.
: java.lang.RuntimeException: serious problem
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:190)
        at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
        at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
        at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1500)
        at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1500)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
        at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2087)
        at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1499)
        at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1506)
        at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1376)
        at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
        at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2100)
        at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1375)
        at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1457)
        at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:209)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.NumberFormatException: For input string: "0006244_0000"
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:998)
        ... 47 more
Caused by: java.lang.NumberFormatException: For input string: "0006244_0000"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Long.parseLong(Long.java:589)
        at java.lang.Long.parseLong(Long.java:631)
        at org.apache.hadoop.hive.ql.io.AcidUtils.parseDelta(AcidUtils.java:310)
        at org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:379)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:634)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:620)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        ... 1 more

When I inspect the table through Aginity, it has more than 1000 rows.

2 answers:

Answer 0 (score: 0)

This works for me (in the pyspark shell):

from pyspark.sql import HiveContext, SQLContext
hive_context = HiveContext(sc)
hive_context.sql("USE myDatabase")
data = hive_context.sql("SELECT * FROM myTable")
data.show()
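
For reference, here is a minimal standalone-script sketch of the same thing (the app name is an arbitrary example, and the Spark 1.x-style APIs are assumed from the HDP paths in the traceback). In the pyspark shell, sc is already created for you, which is what the snippet above relies on:

# Hypothetical script version of the snippet above; in the pyspark shell, sc already exists.
from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

conf = SparkConf().setAppName("hive-query-example")  # example app name, not from the question
sc = SparkContext(conf=conf)

hive_context = HiveContext(sc)
hive_context.sql("USE myDatabase")
data = hive_context.sql("SELECT * FROM myTable")
data.show()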

Answer 1 (score: 0)

Based on the log, the problem is the following:

Caused by: java.util.concurrent.ExecutionException: java.lang.NumberFormatException: For input string: "0006244_0000"

A java.lang.NumberFormatException occurs when something tries to parse input that is not a numeric string. In "0006244_0000" I can see the '_' in the value, so it is not an integer.

You may want to take a look at the data in your Hive table.
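
As a minimal illustration (plain Python standing in for the Java Long.parseLong call shown in the stack trace, not the actual Hive code), the value simply is not a parsable integer. This also explains why printSchema() works while show() fails: printSchema() only reads metadata, whereas show() actually scans the table's ORC files, which is where split generation (the OrcGetSplits / AcidUtils.parseDelta frames in the trace) blows up:

# Sketch of the failure mode, not the real Hive parsing code.
value = "0006244_0000"   # the string reported in the NumberFormatException

print("0006244".isdigit())   # True  -- a plain numeric id would parse fine
print(value.isdigit())       # False -- the '_' makes it non-numeric, so parsing as a long fails

Since the offending value is extracted from a directory name by AcidUtils.parseDelta, the table's underlying files on HDFS, and not only the row values, may be worth checking as well.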