My Cassandra version info:
[cqlsh 5.0.1 | Cassandra 2.2.5 | CQL spec 3.3.1 | Native protocol v4]
I am trying to move some huge tables (several million rows) to another keyspace. Is there a better solution than COPY TO csv and COPY FROM csv?
Answer (score: 3)
Ok, I managed to get this working on a single-node cluster running 2.2.8.
I experimented by moving the holidays table from my presentation keyspace to my stackoverflow keyspace.
Here are the steps I took:
Create the table inside the new keyspace.
This step is important, because each table has a UUID as its unique identifier, stored in the cf_id column of the system.schema_columnfamilies table. That ID is appended to the name of the directory that holds the table's data on disk. By copy/pasting the schema from the old keyspace into the new one, you ensure that the same column names are used, but a new unique identifier is generated; a quick way to verify this is sketched after the cqlsh output below.
Note: in 3.x, the identifier is stored in the system_schema.tables table.
aploetz@cqlsh:stackoverflow> CREATE TABLE holidays (
type text,
eventtime timestamp,
beginend text,
name text,
PRIMARY KEY (type, eventtime, beginend)
) WITH CLUSTERING ORDER BY (eventtime DESC, beginend DESC);
aploetz@cqlsh:stackoverflow> SELECT * FROM stackoverflow.holidays ;
type | eventtime | beginend | name
------+-----------+----------+------
(0 rows)
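If you want to confirm that the new table really did get its own identifier, you can compare the cf_id values for both keyspaces. This is a minimal sketch for 2.2.x; on 3.x you would query system_schema.tables instead:

# Compare the table UUIDs in the old and new keyspaces; they should differ
$ cqlsh -e "SELECT keyspace_name, columnfamily_name, cf_id FROM system.schema_columnfamilies WHERE keyspace_name = 'presentation' AND columnfamily_name = 'holidays';"
$ cqlsh -e "SELECT keyspace_name, columnfamily_name, cf_id FROM system.schema_columnfamilies WHERE keyspace_name = 'stackoverflow' AND columnfamily_name = 'holidays';"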
Stop your node properly (DISABLEGOSSIP, DRAIN, kill/stop, etc.).
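On a package install, that shutdown sequence might look something like this (a sketch only; the exact service command depends on how Cassandra was installed):

# Stop gossiping with other nodes, flush memtables to disk, then stop the process
$ nodetool disablegossip
$ nodetool drain
$ sudo service cassandra stop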
Now, find the locations of the old and new tables on disk, and copy/move the files from the old location into the new one (a sketch for locating these directories follows the listing):
$ ls -al /var/lib/cassandra/data22/stackoverflow/holidays-77a767e0a5f111e6a2bebd9d201c4c8f/
total 12
drwxrwxr-x 3 aploetz aploetz 4096 Nov 8 14:25 .
drwxrwxr-x 17 aploetz aploetz 4096 Nov 8 14:25 ..
drwxrwxr-x 2 aploetz aploetz 4096 Nov 8 14:25 backups
$ cp /var/lib/cassandra/data22/presentation/holidays-74bcfde0139011e6a67c2575e6398503/la* /var/lib/cassandra/data22/stackoverflow/holidays-77a767e0a5f111e6a2bebd9d201c4c8f/
$ ls -al /var/lib/cassandra/data22/stackoverflow/holidays-77a767e0a5f111e6a2bebd9d201c4c8f/
drwxrwxr-x 3 aploetz aploetz 4096 Nov 8 14:26 .
drwxrwxr-x 17 aploetz aploetz 4096 Nov 8 14:25 ..
drwxrwxr-x 2 aploetz aploetz 4096 Nov 8 14:25 backups
-rw-rw-r-- 1 aploetz aploetz 43 Nov 8 14:26 la-1-big-CompressionInfo.db
-rw-rw-r-- 1 aploetz aploetz 628 Nov 8 14:26 la-1-big-Data.db
-rw-rw-r-- 1 aploetz aploetz 9 Nov 8 14:26 la-1-big-Digest.adler32
-rw-rw-r-- 1 aploetz aploetz 16 Nov 8 14:26 la-1-big-Filter.db
-rw-rw-r-- 1 aploetz aploetz 57 Nov 8 14:26 la-1-big-Index.db
-rw-rw-r-- 1 aploetz aploetz 4468 Nov 8 14:26 la-1-big-Statistics.db
-rw-rw-r-- 1 aploetz aploetz 94 Nov 8 14:26 la-1-big-Summary.db
-rw-rw-r-- 1 aploetz aploetz 94 Nov 8 14:26 la-1-big-TOC.txt
-rw-rw-r-- 1 aploetz aploetz 43 Nov 8 14:26 la-2-big-CompressionInfo.db
-rw-rw-r-- 1 aploetz aploetz 164 Nov 8 14:26 la-2-big-Data.db
-rw-rw-r-- 1 aploetz aploetz 10 Nov 8 14:26 la-2-big-Digest.adler32
-rw-rw-r-- 1 aploetz aploetz 16 Nov 8 14:26 la-2-big-Filter.db
-rw-rw-r-- 1 aploetz aploetz 26 Nov 8 14:26 la-2-big-Index.db
-rw-rw-r-- 1 aploetz aploetz 4460 Nov 8 14:26 la-2-big-Statistics.db
-rw-rw-r-- 1 aploetz aploetz 108 Nov 8 14:26 la-2-big-Summary.db
-rw-rw-r-- 1 aploetz aploetz 94 Nov 8 14:26 la-2-big-TOC.txt
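If you don't know those directory names off-hand, a find over the data directory will list them. This sketch assumes the data directory used in the listings above; on a default install it is usually /var/lib/cassandra/data:

# List every table directory named holidays-<uuid> under the data directory
$ find /var/lib/cassandra/data22 -maxdepth 2 -type d -name 'holidays-*'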
Restart your node.
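Bringing the node back up is just the reverse of the shutdown step (again assuming a package install; adjust for your service manager):

# Start the Cassandra service again
$ sudo service cassandra start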
Then query it via cqlsh:
Connected to SnakesAndArrows at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.8 | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
aploetz@cqlsh> SELECT * FROM stackoverflow.holidays ;
type | eventtime | beginend | name
--------------+--------------------------+----------+------------------------
Religious | 2016-12-26 05:59:59+0000 | E | Christmas
Religious | 2016-12-25 06:00:00+0000 | B | Christmas
Religious | 2016-03-28 04:59:59+0000 | E | Easter
Religious | 2016-03-27 05:00:00+0000 | B | Easter
presentation | 2016-05-06 20:40:08+0000 | B | my presentation
presentation | 2016-05-06 20:40:03+0000 | B | my presentation
presentation | 2016-05-06 20:39:15+0000 | B | my presentation
presentation | 2016-05-06 20:38:10+0000 | B | my presentation
US | 2016-07-05 04:59:59+0000 | E | 4th of July
US | 2016-07-04 05:00:00+0000 | B | 4th of July
US | 2016-05-09 04:59:59+0000 | E | Mothers Day
US | 2016-05-08 05:00:00+0000 | B | Mothers Day
Nerd | 2016-12-22 05:59:59+0000 | E | 2112 Day
Nerd | 2016-12-21 06:00:00+0000 | B | 2112 Day
Nerd | 2016-09-26 04:59:59+0000 | E | Hobbit Day
Nerd | 2016-09-25 05:00:00+0000 | B | Hobbit Day
Nerd | 2016-09-20 04:59:59+0000 | E | Talk Like a Pirate Day
Nerd | 2016-09-19 05:00:00+0000 | B | Talk Like a Pirate Day
Nerd | 2016-05-07 04:59:59+0000 | E | Star Wars Week
Nerd | 2016-05-04 05:00:00+0000 | B | Star Wars Week
Nerd | 2016-03-14 05:00:00+0000 | E | Pi Day
Nerd | 2016-03-14 05:00:00+0000 | B | Pi Day
(22 rows)
The problem with this approach is that you need to stop the cluster and move the files on every node, whereas cqlsh COPY lets you export and import from a single node while the cluster is still running.
I know COPY has a reputation of being limited to smaller datasets. But 2.2.x has options that help throttle COPY to keep it from timing out on large datasets. I recently got it to export/import 370 million rows without timing out.
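For reference, a throttled export/import with cqlsh might look something like the following. This is only a sketch: the option values are arbitrary starting points rather than the ones used for that 370-million-row run, the available options differ between COPY TO and COPY FROM, and the newer throttling options only appear in later 2.2.x minor releases:

# Export from the old keyspace with smaller pages and more retries
$ cqlsh -e "COPY presentation.holidays TO 'holidays.csv' WITH PAGESIZE = 1000 AND PAGETIMEOUT = 30 AND MAXATTEMPTS = 10;"
# Import into the new keyspace with a capped ingest rate
$ cqlsh -e "COPY stackoverflow.holidays FROM 'holidays.csv' WITH CHUNKSIZE = 500 AND INGESTRATE = 50000 AND MAXATTEMPTS = 10;"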