This might be a very stupid question, but I am having a hard time finding a way to copy the data in a Big SQL table to a .txt file on the local file system.
Answer 0 (score: 1)
Depending on the size of the resulting data file, you can use the export command to dump the data into a single text file. The resulting file ends up on a single node.
I used the following script as an example:
\connect bigsql
drop table if exists stack.issue2;
create hadoop table if not exists stack.issue2 (
f1 integer,
f2 integer,
f3 varchar(200),
f4 integer
)
stored as parquetfile;
insert into stack.issue2 (f1,f2,f3,f4) values (0,0,'Detroit',0);
insert into stack.issue2 (f1,f2,f3,f4) values (1,1,'Mt. Pleasant',1);
insert into stack.issue2 (f1,f2,f3,f4) values (2,2,'Marysville',2);
insert into stack.issue2 (f1,f2,f3,f4) values (3,3,'St. Clair',3);
insert into stack.issue2 (f1,f2,f3,f4) values (4,4,'Port Huron',4);
select * from stack.issue2;
-- admin_cmd runs EXPORT server-side; the DEL file lands on the node that executes it
{ call sysproc.admin_cmd('export to /tmp/t1.unl of del select * from stack.issue2') };
\quit
Running the script:
jsqsh --autoconnect --input-file=./t1.sql --output-file=t1.out
yields:
cat t1.out
+----+----+--------------+----+
| F1 | F2 | F3           | F4 |
+----+----+--------------+----+
|  0 |  0 | Detroit      |  0 |
|  2 |  2 | Marysville   |  2 |
|  3 |  3 | St. Clair    |  3 |
|  1 |  1 | Mt. Pleasant |  1 |
|  4 |  4 | Port Huron   |  4 |
+----+----+--------------+----+
+---------------+---------------+-------------+
| ROWS_EXPORTED | MSG_RETRIEVAL | MSG_REMOVAL |
+---------------+---------------+-------------+
| 5             | [NULL]        | [NULL]      |
+---------------+---------------+-------------+
And the exported file:
ls -la /tmp/t1.unl
-rw-r--r-- 1 bigsql hadoop 93 Mar 3 16:05 /tmp/t1.unl
cat /tmp/t1.unl
0,0,"Detroit",0
3,3,"St. Clair",3
2,2,"Marysville",2
1,1,"Mt. Pleasant",1
4,4,"Port Huron",4
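The DEL output shown above is ordinary comma-separated text with character columns wrapped in double quotes, so standard CSV tooling can read it directly. A minimal sketch in Python (the sample rows are embedded inline here instead of being read from /tmp/t1.unl):

```python
import csv
import io

# A sample of the DEL export shown above; in practice you would use
# open("/tmp/t1.unl") instead of an inline string.
del_data = '0,0,"Detroit",0\n3,3,"St. Clair",3\n2,2,"Marysville",2\n'

rows = [
    # DEL character fields are double-quoted; csv.reader handles the quoting.
    (int(f1), int(f2), f3, int(f4))
    for f1, f2, f3, f4 in csv.reader(io.StringIO(del_data))
]
print(rows)  # [(0, 0, 'Detroit', 0), (3, 3, 'St. Clair', 3), (2, 2, 'Marysville', 2)]
```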
Answer 1 (score: 0)
The beauty of Big SQL is that you can connect to it and call export just as you would with a regular DB2 database.
[bigsql@myhost ~]$ db2 "create hadoop table test1 ( i int, i2 int , i3 int)"
DB20000I The SQL command completed successfully.
[bigsql@myhost ~]$ db2 "insert into test1 values (1,2,3), (4,5,6),(7,8,9),(0,1,2)"
DB20000I The SQL command completed successfully.
[bigsql@myhost ~]$ db2 "export to output.del of del select * from test1"
SQL3104N The Export utility is beginning to export data to file "output.del".
SQL3105N The Export utility has finished exporting "4" rows.
Number of rows exported: 4
[bigsql@myhost ~]$ cat output.del
1,2,3
4,5,6
7,8,9
0,1,2
Answer 2 (score: 0)
Another way to extract the data via SQL (to CSV in this case) is the following:
create hadoop table csv_tableName
row format delimited fields terminated by ','
location '/tmp/csv_tableName'
as select * from tableName
You can then retrieve the file(s) from HDFS, for example with hdfs dfs -get /tmp/csv_tableName or hdfs dfs -getmerge /tmp/csv_tableName csv_tableName.csv.
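Note that the CTAS above typically writes several part files under /tmp/csv_tableName rather than one CSV. hdfs dfs -getmerge concatenates them for you; if you copy the directory down with hdfs dfs -get instead, you can do the merge yourself. A hypothetical sketch of that merge step in Python (directory and file names are illustrative stand-ins, created locally here for demonstration):

```python
import glob
import os
import tempfile

def merge_parts(directory, output_path):
    """Concatenate Hadoop-style part-* files into a single local file."""
    with open(output_path, "w") as out:
        # Sort so rows appear in part-file order (part-00000, part-00001, ...).
        for part in sorted(glob.glob(os.path.join(directory, "part-*"))):
            with open(part) as f:
                out.write(f.read())

# Demo with stand-in part files; in practice these would come from
# `hdfs dfs -get /tmp/csv_tableName`.
d = tempfile.mkdtemp()
with open(os.path.join(d, "part-00000"), "w") as f:
    f.write("1,2,3\n")
with open(os.path.join(d, "part-00001"), "w") as f:
    f.write("4,5,6\n")

merge_parts(d, os.path.join(d, "merged.csv"))
with open(os.path.join(d, "merged.csv")) as f:
    print(f.read())  # 1,2,3  then  4,5,6
```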