java.lang.OutOfMemoryError: Java heap space with Hive

Date: 2015-04-16 11:15:33

Tags: java sql hadoop netbeans hive

I am using Hadoop/Hive 0.9.0 and 1.1.2 with NetBeans, but I get the error below and cannot resolve it. Please help me. Code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class Hive_test {

    // HiveServer1 JDBC driver, used with the jdbc:hive:// URL scheme
    private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

    @SuppressWarnings("CallToThreadDumpStack")
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        System.out.println("commencer la connexion");
        // Connect to the Hive server on the default port 10000
        Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", " ");
        Statement stmt = con.createStatement();
        ResultSet res = stmt.executeQuery("select * from STATE");
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
            System.out.println("sql terminer");
        }
    }
}

The error is shown below:

error :
commencer la connexion
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:353)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.hadoop.hive.service.ThriftHive$Client.recv_execute(ThriftHive.java:116)
    at org.apache.hadoop.hive.service.ThriftHive$Client.execute(ThriftHive.java:103)
    at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:192)
    at org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:132)
    at org.apache.hadoop.hive.jdbc.HiveConnection.configureConnection(HiveConnection.java:132)
    at org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:122)
    at org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:106)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at hive.Hive_test.main(Hive_test.java:22)

3 Answers:

Answer 0 (score: 19)

You can set the container heap size in Hive to resolve this error:

Most tools that run on top of the Hadoop MapReduce framework provide a way to tune these Hadoop-level settings for their jobs. There are several ways to do this in Hive; three of them are shown here:

1) Pass the settings directly on the Hive command line:

hive -hiveconf mapreduce.map.memory.mb=4096 -hiveconf mapreduce.reduce.memory.mb=5120 -e "select count(*) from test_table;"

2) Set an environment variable before invoking Hive:

export HIVE_OPTS="-hiveconf mapreduce.map.memory.mb=4096 -hiveconf mapreduce.reduce.memory.mb=5120"
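
With the variable exported, any hive invocation from that shell picks up these settings. A small usage example, reusing the sample query from option 1:

hive -e "select count(*) from test_table;"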

3) Use the "set" command inside the Hive CLI:

hive> set mapreduce.map.memory.mb=4096;
hive> set mapreduce.reduce.memory.mb=5120;
hive> select count(*) from test_table;

Answer 1 (score: 1)

In my case, I also needed to set the memory in java.opts:

set mapreduce.map.memory.mb=4096;
set mapreduce.map.java.opts=-Xmx3686m;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.reduce.java.opts=-Xmx3686m;
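
Note that the -Xmx values in java.opts are deliberately smaller than the matching container sizes (mapreduce.*.memory.mb): the JVM heap has to fit inside the container along with non-heap overhead, and keeping the heap at roughly 75-90% of the container size is the usual rule of thumb (here 3686m is about 90% of 4096m).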

Answer 2 (score: 0)

For me, the following solution worked:
before starting the Hive CLI, run export HADOOP_CLIENT_OPTS=" -Xmx8192m", then launch the CLI.
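
A minimal sketch of that approach (assuming the hive CLI is on the PATH; the 8 GB figure comes from this answer and should be sized to the client machine's available memory):

# raise the heap of the local Hive client JVM before starting the CLI
export HADOOP_CLIENT_OPTS="-Xmx8192m"
hive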