Resolving the "Bad status for request..." error in Hive

Posted: 2019-08-23 19:42:55

Tags: hive hiveql

Setup

I am trying to run a query in Hive on AWS EMR through Hue. The cluster has one m4.xlarge master node, 2 core nodes, and 25 task nodes (all m4.xlarge).

The following query returns the expected result in 0.002 seconds:

SELECT *
FROM table
LIMIT 100

However, when I try to return a specific subset of the data, I get an error. The modified query is:

SELECT *
FROM table
WHERE var1 = 'abcd'
LIMIT 100

The error

Just as the query is about to finish, I receive the following error:

Bad status for request TFetchResultsReq(fetchType=0, operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None, operationType=0, 
operationId=THandleIdentifier(secret='\xa2a\x96\x82b\xf6L\x87\xa9\xdd\x0f\x92\x1e\x95\xf2\xc2', guid='\x07a\x06\x03\xfc\x9cC&\xb1\x90\xf1U(2\x81\xa4')), orientation=4, maxRows=100): 
TFetchResultsResp(status=TStatus(errorCode=0, errorMessage='java.io.IOException: java.io.EOFException: Premature EOF from inputStream', sqlState=None, infoMessages=
['*org.apache.hive.service.cli.HiveSQLException:java.io.IOException: java.io.EOFException: Premature EOF from inputStream:25:24',
 'org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:508', 
'org.apache.hive.service.cli.operation.OperationManager:getOperationNextRowSet:OperationManager.java:307', 
'org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:878', 'sun.reflect.GeneratedMethodAccessor21:invoke::-1', 
'sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43', 'java.lang.reflect.Method:invoke:Method.java:498',
 'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78', 'org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36', 
'org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63', 'java.security.AccessController:doPrivileged:AccessController.java:-2', 
'javax.security.auth.Subject:doAs:Subject.java:422', 'org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1844', 
'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59', 'com.sun.proxy.$Proxy41:fetchResults::-1', 
'org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:559', 'org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:751',
 'org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1717', 
'org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1702', 'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39',
 'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39', 'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56', 
'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:286', 'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1149', 
'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:624', 'java.lang.Thread:run:Thread.java:748', '*java.io.IOException:java.io.EOFException: Premature EOF from 
inputStream:29:4', 'org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:521', 'org.apache.hadoop.hive.ql.exec.FetchOperator:pushRow:FetchOperator.java:428', 
'org.apache.hadoop.hive.ql.exec.FetchTask:fetch:FetchTask.java:147', 'org.apache.hadoop.hive.ql.Driver:getResults:Driver.java:2208', 
'org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:503', '*java.io.EOFException:Premature EOF from inputStream:41:12',
 'com.hadoop.compression.lzo.LzopInputStream:readFully:LzopInputStream.java:75', 'com.hadoop.compression.lzo.LzopInputStream:readHeader:LzopInputStream.java:114',
 'com.hadoop.compression.lzo.LzopInputStream:<init>:LzopInputStream.java:55', 'com.hadoop.compression.lzo.LzopCodec:createInputStream:LzopCodec.java:106',
 'org.apache.hadoop.io.SequenceFile$Reader:init:SequenceFile.java:2017', 'org.apache.hadoop.io.SequenceFile$Reader:initialize:SequenceFile.java:1902',
 'org.apache.hadoop.io.SequenceFile$Reader:<init>:SequenceFile.java:1851', 'org.apache.hadoop.io.SequenceFile$Reader:<init>:SequenceFile.java:1865',
 'org.apache.hadoop.mapred.SequenceFileRecordReader:<init>:SequenceFileRecordReader.java:49', 
'org.apache.hadoop.mapred.SequenceFileInputFormat:getRecordReader:SequenceFileInputFormat.java:64', 
'org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit:getRecordReader:FetchOperator.java:695', 
'org.apache.hadoop.hive.ql.exec.FetchOperator:getRecordReader:FetchOperator.java:333', 'org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:459'],
 statusCode=3), results=None, hasMoreRows=None)

What I have tried

I have spent some time Googling this problem. Several sources (https://mapr.com/community/s/question/0D50L00006BItYpSAL/hue-error-bad-status-for-request-tfetchresultsreq) say the solution is to change the use_get_log_api setting to true. I have already made that change, but I still get the error.
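For reference, that flag is a Hue configuration rather than a Hive one; in the Hue versions where this advice applies, it lives in hue.ini (a sketch only — the option's availability varies by Hue release):

```ini
[beeswax]
  # Fetch HiveServer2 operation logs via the GetLog() Thrift call
  # (only present in older Hue releases)
  use_get_log_api=true
```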

I also found suggestions to change the idle timeouts, since those can cause problems. So I set the following in Hive as well:

set use_get_log_api=true;
set hive.server2.idle.operation.timeout=0;
set hive.server2.idle.session.timeout=0;
set hive.server2.session.check.interval=600000;
set tez.session.am.dag.submit.timeout.secs=10000;
set hive.server2.parallel.ops.in.session=true;
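Session-level set commands only last for the current session; on EMR, the same Hive properties can be persisted cluster-wide through a configuration classification supplied at cluster creation. A sketch (the values shown simply mirror the session settings above, not a recommendation):

```json
[
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.server2.idle.operation.timeout": "0",
      "hive.server2.idle.session.timeout": "0",
      "hive.server2.session.check.interval": "600000"
    }
  }
]
```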

I also pulled the detailed logs for the query to try to find the problem there. Instead, I see lines suggesting that everything actually completed correctly:

|impl.VertexImpl|: Task Completion: vertex_1566582919520_0002_3_00 [Map 1], tasks=406, failed=0, killed=0, success=406, completed=406, commits=0, err=null

and...

|app.DAGAppMaster|: DAG completed, dagId=dag_1566582919520_0002_3, dagState=SUCCEEDED

followed by...

2019-08-23 19:17:19,504 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_1566582919520_0002_01_000124, containerExpiryTime=1566587839259, idleTimeout=5000, taskRequestsCount=0, heldContainers=9, delayedContainers=8, isNew=false
2019-08-23 19:17:19,504 [INFO] [Dispatcher thread {Central}] |HistoryEventHandler.criticalEvents|: [HISTORY][DAG:dag_1566582919520_0002_3][Event:CONTAINER_STOPPED]: containerId=container_1566582919520_0002_01_000124, stoppedTime=1566587839504, exitStatus=0
2019-08-23 19:17:19,504 [INFO] [ContainerLauncher #20] |launcher.TezContainerLauncherImpl|: Stopping container_1566582919520_0002_01_000124
2019-08-23 19:17:19,663 [INFO] [Dispatcher thread {Central}] |container.AMContainerImpl|: Container container_1566582919520_0002_01_000133 exited with diagnostics set to Container failed, exitCode=-105. Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

The same appears for the other containers.
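One detail worth noting from the stack trace above: the innermost frames fail in LzopInputStream.readHeader with "Premature EOF from inputStream", which typically means the fetch task opened a zero-byte or truncated .lzo file. A minimal sketch of how one might look for such files (the directory here is a throwaway local stand-in; the real check would run against the table's actual warehouse or S3 location):

```shell
# Sketch: list zero-byte .lzo part files, which make LzopInputStream.readHeader
# fail with "Premature EOF from inputStream" as soon as the fetch task opens them.
# DEMO_DIR is a throwaway local directory standing in for the table's real
# location; on the cluster you would list that location instead, e.g.
#   hdfs dfs -ls -R /user/hive/warehouse/mytable
#   aws s3 ls --recursive s3://mybucket/mytable/      (hypothetical paths)
DEMO_DIR=$(mktemp -d)
: > "$DEMO_DIR/part-00000.lzo"              # zero-byte file: would trigger the EOF
printf 'dummy' > "$DEMO_DIR/part-00001.lzo" # non-empty file: header bytes exist
EMPTY_FILES=$(find "$DEMO_DIR" -name '*.lzo' -size 0)
echo "$EMPTY_FILES"
```

Any file such a check turns up would need to be removed or regenerated before the fetch can read past it.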

I would greatly appreciate any help and insight into why this is happening. The same thing happens when I try to run similar code against other tables, so it is not limited to this query on this table.

I have tried searching the web and changing settings, but if there is anything I have missed or should try, please let me know!

Thanks!

0 Answers

There are no answers yet.