I created a table in Hive:
create table HiveMB
(EmployeeID Int, FirstName String, Designation String, Salary Int, Department String)
clustered by (Department) into 3 buckets
stored as orc TBLPROPERTIES ('transactional'='true');
My data file looks like this:
1,Anne,Admin,50000,A
2,Gokul,Admin,50000,B
3,Janet,Sales,60000,A
4,Hari,Admin,50000,C
5,Sanker,Admin,50000,C
and the data falls into three departments.
When I checked the warehouse, there were 3 buckets:
Found 3 items
-rwxr-xr-x 3 aibladmin hadoop 252330 2014-11-28 14:46 /user/hive/warehouse/hivemb/delta_0000012_0000012/bucket_00000
-rwxr-xr-x 3 aibladmin hadoop 100421 2014-11-28 14:45 /user/hive/warehouse/hivemb/delta_0000012_0000012/bucket_00001
-rwxr-xr-x 3 aibladmin hadoop 313047 2014-11-28 14:46 /user/hive/warehouse/hivemb/delta_0000012_0000012/bucket_00002
How can I retrieve the contents of one such bucket? When I -cat one of them, the output is not in a human-readable format. It shows something like:
`J�lj�(��rwNj��[��Y���gR�� \�B�Q_Js)�6 �st�A�6�ixt� R �
ޜ�KT� e����IL Iԋ� ł2�2���I�Y��FC8 /2�g� ����� > ������q�D � b�` `�`���89$ $$ ����I��y|@
%\���� �&�ɢ`a~ � S �$�l�:y���K $�$����X�X��)Ě���U*��
6. �� �cJnf� KHjr�ć����� ��(p` ��˻_1s �5ps1: 1:I4L\��u
How can I see the data stored in each bucket?
My file is in CSV format, not ORC, so I tried this workaround, but I still cannot view the data in the buckets. It is not in a human-readable format.
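For reference, Hive can read a single bucket through the query engine itself, which avoids cat-ing the raw ORC files. A minimal sketch, assuming the HiveMB table above:

-- reads only the first of the 3 buckets; Hive handles the ORC decoding
select * from HiveMB tablesample(bucket 1 out of 3 on Department);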
Answer 0 (score: 1)
I am uploading a screenshot of the ORC output, which was produced by this Hive query:
create table stackOverFlow
(EmployeeID Int, FirstName String, Designation String, Salary Int, Department String)
row format delimited
fields terminated by ',';
load data local inpath '/home/ravi/stack_file.txt'
overwrite into table stackOverFlow;
and
create table stackOverFlow6
(EmployeeID Int, FirstName String, Designation String, Salary Int, Department String)
clustered by (Department) into 3 buckets
row format delimited
fields terminated by ','
stored as orc tblproperties ("orc.compress"="ZLIB");
insert overwrite table stackOverFlow6 select * from stackOverFlow;
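To check where the bucketed copy landed, the result files can be listed and the table queried back through Hive (a sketch; the path assumes the default warehouse location):

hadoop dfs -ls /user/hive/warehouse/stackoverflow6
select * from stackOverFlow6;

The listing should show three bucket files (000000_0, 000001_0, 000002_0), while the select returns readable rows because Hive decodes the ORC data itself.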
And the ORC result file generated for the above Hive query:
Answer 1 (score: 0)
create table HiveMB1
(EmployeeID Int, FirstName String, Designation String, Salary Int, Department String)
row format delimited
fields terminated by ',';
load data local inpath '/home/user17/Data/hive.txt'
overwrite into table HiveMB1;
create table HiveMB2
(EmployeeID Int, FirstName String, Designation String, Salary Int, Department String)
clustered by (Department) into 3 buckets
row format delimited
fields terminated by ',';
insert overwrite table HiveMB2 select * from HiveMB1 ;
user17@BG17:~$ hadoop dfs -ls /user/hive/warehouse/hivemb2
Found 3 items
-rw-r--r-- 1 user17 supergroup 22 2014-12-01 15:52 /user/hive/warehouse/hivemb2/000000_0
-rw-r--r-- 1 user17 supergroup 44 2014-12-01 15:53 /user/hive/warehouse/hivemb2/000001_0
-rw-r--r-- 1 user17 supergroup 43 2014-12-01 15:53 /user/hive/warehouse/hivemb2/000002_0
user17@BG17:~$ hadoop dfs -cat /user/hive/warehouse/hivemb2/000000_0
2,Gokul,Admin,50000,B
user17@BG17:~$ hadoop dfs -cat /user/hive/warehouse/hivemb2/000001_0
4,Hari,Admin,50000,C
5,Sanker,Admin,50000,C
user17@BG17:~$ hadoop dfs -cat /user/hive/warehouse/hivemb2/000002_0
1,Anne,Admin,50000,A
3,Janet,Sales,60000,A
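This distribution is not arbitrary: Hive assigns each row to bucket pmod(hash(Department), 3), which is why all rows for one department share a file. A quick way to see the mapping, a sketch using Hive's built-in hash and pmod functions and assuming the default string hash:

-- 'B' maps to bucket 0, 'C' to 1, 'A' to 2, matching the files above
select Department, pmod(hash(Department), 3) as bucket from HiveMB1;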
Answer 2 (score: 0)
Your table:
create table HiveMB
(EmployeeID Int, FirstName String, Designation String, Salary Int, Department String)
clustered by (Department) into 3 buckets
stored as orc TBLPROPERTIES ('transactional'='true');
You chose to store the table in ORC format, which means Hive compresses the actual data and writes it in a binary, compressed form, so it cannot be read directly with -cat.
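Since the stored bytes are compressed binary, they should be read back through Hive rather than with hadoop dfs -cat, for example:

select * from HiveMB where Department = 'A';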
Answer 3 (score: 0)
You can inspect the ORC format of a bucket with the following command:
hive --orcfiledump [path-to-the-bucket]
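For example, with one of the bucket files from the question:

hive --orcfiledump /user/hive/warehouse/hivemb/delta_0000012_0000012/bucket_00000

This prints ORC metadata (schema, stripe statistics, compression) rather than rows; newer Hive releases also take a -d option to dump the rows as JSON, though availability depends on the version.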