Problem inserting data into a Hive partitioned table with more than 100k partitions

Date: 2017-08-30 13:12:47

Tags: hadoop hive hiveql hadoop-partitioning

I created a staging table with 20 million records and only two fields, viewerid and viewedid. From it I am trying to build a dynamically partitioned ORC table, partitioned on the "viewerid" column, but the map phase never completes, as shown in the attached image.
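For reference, this is roughly what I am running (a minimal sketch: the target table name bmviews_part and the exact DDL are approximations, not my actual statements):

-- target ORC table, dynamically partitioned on viewerid
CREATE TABLE bmviews_part (viewedid INT)
PARTITIONED BY (viewerid INT)
STORED AS ORC;

-- dynamic partitioning has to be enabled for this insert
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- the dynamic partition column must be the last column in the SELECT
INSERT OVERWRITE TABLE bmviews_part PARTITION (viewerid)
SELECT viewedid, viewerid
FROM bmviews;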

mapred-site.xml

<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>localhost:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>localhost:19888</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6144m</value>
</property>

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>
</configuration>


**yarn-site.xml**

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>hadoop-master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>hadoop-master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>hadoop-master:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>hadoop-master:8031</value>
</property>

Job status:

[screenshot: job status]

My staging table:

hive> desc formatted bmviews;
OK
# col_name              data_type               comment             

viewerid                int                                         
viewedid                int                                         

# Detailed Table Information         
Database:               bm                       
Owner:                  sudheer                  
CreateTime:             Tue Aug 29 18:22:34 IST 2017     
LastAccessTime:         UNKNOWN                  
Retention:              0                        
Location:               hdfs://hadoop-master:54311/user/hive/warehouse/bm.db/bmviews     
Table Type:             MANAGED_TABLE            
Table Parameters:        
    numFiles                9                   
    numRows                 0                   
    rawDataSize             0                   
    totalSize               539543256           
    transient_lastDdlTime   1504070146          

# Storage Information        
SerDe Library:          org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe   
InputFormat:            org.apache.hadoop.mapred.TextInputFormat     
OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat  

My partitioned table description:

[screenshot: description of the partitioned table]

I have already raised the maximum dynamic partitions per node to 200k, but I still face this issue. I have two data nodes (8 GB and 6 GB of RAM), and the namenode has 16 GB of memory.
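For completeness, these are the session-level settings involved, as far as I understand them (the 200k value is the per-node limit mentioned above; the other limit values are only illustrative, and hive.optimize.sort.dynamic.partition is included as an assumption about what might reduce mapper memory pressure):

-- allow dynamic partitioning on the insert
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- raise the dynamic-partition and file-creation limits above the
-- ~100k+ partitions expected (defaults are far lower)
SET hive.exec.max.dynamic.partitions = 500000;
SET hive.exec.max.dynamic.partitions.pernode = 200000;
SET hive.exec.max.created.files = 500000;

-- with this enabled, rows are sorted by the partition column so each
-- task keeps only one ORC writer open at a time instead of one per partition
SET hive.optimize.sort.dynamic.partition = true;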

How can I insert the data into the partitioned table?

0 Answers:

No answers yet.