ClickHouse 1.1.54343: data ingestion error on a distributed ReplicatedMergeTree table

Date: 2018-03-10 14:30:49

Tags: clickhouse

I am having problems with data loading and merges on a table in ClickHouse 1.1.54343, and I am no longer able to insert any data into ClickHouse.

We have a 3-node cluster. During ingestion we add 300 columns to the table dynamically, extracting the data from JSON files.

We were able to save data in the table.

Creating the tables

-- On each node
CREATE TABLE IF NOT EXISTS AudiencePlanner.reached_pod
(
    date Date,
    p_id String,
    language String, 
    city String, 
    state String, 
    platform String, 
    manufacturer String,
    model String, 
    content_id String
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/shard1/reached_pod', 'clickhouse1')
PARTITION BY date 
ORDER BY (date, platform, language, city, state, manufacturer, model, content_id, p_id);

CREATE TABLE IF NOT EXISTS AudiencePlanner2.reached_pod
(
    date Date,
    p_id String,
    language String, 
    city String, 
    state String, 
    platform String, 
    manufacturer String,
    model String, 
    content_id String
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/shard3/reached_pod', 'clickhouse1')
PARTITION BY date 
ORDER BY (date, platform, language, city, state, manufacturer, model, content_id, p_id);

-- On all nodes

CREATE TABLE AudiencePlanner.reached_pod_all AS AudiencePlanner.reached_pod ENGINE = Distributed(test, '', reached_pod, rand());
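
Rows are inserted through the Distributed table so they are spread across the shards defined in remote_servers. The statement below is only an illustrative sketch; the JSON values are placeholders, not data from the actual feed:

-- Illustrative insert through the Distributed table (placeholder values only).
INSERT INTO AudiencePlanner.reached_pod_all
    (date, p_id, language, city, state, platform, manufacturer, model, content_id)
FORMAT JSONEachRow
{"date":"2018-02-03","p_id":"p123","language":"en","city":"Pune","state":"MH","platform":"android","manufacturer":"acme","model":"x1","content_id":"c42"}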

Config.xml

<remote_servers>
        <test>
            <shard>
                <weight>1</weight>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>ip1</host>
                    <port>9000</port>
                    <default_database>AudiencePlanner</default_database>
                    <user>test</user>
                    <password>testpass</password>
                </replica>
                <replica>
                    <host>ip2</host>
                    <port>9000</port>
                    <default_database>AudiencePlanner2</default_database>
                    <user>test</user>
                    <password>testpass</password>
                </replica>
            </shard>
            <shard>
                <weight>1</weight>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>ip2</host>
                    <port>9000</port>
                    <default_database>AudiencePlanner</default_database>
                    <user>test</user>
                    <password>testpass</password>
                </replica>
                <replica>
                    <host>ip3</host>
                    <port>9000</port>
                    <default_database>AudiencePlanner2</default_database>
                    <user>test</user>
                    <password>testpass</password>
                </replica>
            </shard>
            <shard>
                <weight>1</weight>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>ip3</host>
                    <port>9000</port>
                    <default_database>AudiencePlanner</default_database>
                    <user>test</user>
                    <password>testpass</password>
                </replica>
                <replica>
                    <host>ip1</host>
                    <port>9000</port>
                    <default_database>AudiencePlanner2</default_database>
                    <user>test</user>
                    <password>testpass</password>
                </replica>
            </shard>
        </test>
    </remote_servers>
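
The cluster definition can also be checked from SQL; below is a minimal sketch of a query against system.clusters, using the cluster name referenced by the Distributed engine above:

-- Confirm that all three shards and their replicas are visible under cluster 'test'.
SELECT cluster, shard_num, replica_num, host_name, port
FROM system.clusters
WHERE cluster = 'test';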

Error log

2018.03.10 07:50:59.990953 [ 31 ] <Trace> AudiencePlanner.reached_pod (StorageReplicatedMergeTree): Executing log entry to merge parts 20180203_111_111_0, 20180203_112_112_0, 20180203_113_113_0 to 20180203_111_113_1
2018.03.10 07:50:59.991204 [ 31 ] <Debug> AudiencePlanner.reached_pod (Merger): Merging 3 parts: from 20180203_111_111_0 to 20180203_113_113_0 into tmp_merge_20180203_111_113_1
2018.03.10 07:50:59.996659 [ 31 ] <Debug> AudiencePlanner.reached_pod (Merger): Selected MergeAlgorithm: Horizontal
2018.03.10 07:50:59.997347 [ 31 ] <Trace> MergeTreeBlockInputStream: Reading 1 ranges from part 20180203_111_111_0, approx. 24576 rows starting from 0
2018.03.10 07:50:59.997417 [ 31 ] <Trace> MergeTreeBlockInputStream: Reading 1 ranges from part 20180203_112_112_0, approx. 8192 rows starting from 0
2018.03.10 07:50:59.997476 [ 31 ] <Trace> MergeTreeBlockInputStream: Reading 1 ranges from part 20180203_113_113_0, approx. 8192 rows starting from 0
2018.03.10 07:51:00.016479 [ 31 ] <Debug> MemoryTracker: Peak memory usage: 1.16 GiB.
2018.03.10 07:51:00.044547 [ 31 ] <Error> DB::StorageReplicatedMergeTree::queueTask()::<lambda(DB::StorageReplicatedMergeTree::LogEntryPtr&)>: Code: 76, e.displayText() = 

DB::Exception: Cannot open file /data/clickhouse//data/AudiencePlanner/reached_pod/tmp_merge_20180203_111_113_1/%1F%EF%BF%BD%08%00%00%00%00%00%00%00%EF%BF%BDVrs%EF%BF%BDws%EF%BF%BDu%EF%BF%BDq%EF%BF%BD%0F%09%EF%BF%BD76%EF%BF%BD51P%EF%BF%BDQ%0A%0A%EF%BF%BDw%EF%BF%BDu%EF%BF%BD%EF%BF%BD%EF%BF%BD%EF%BF%BDw%042%0D%EF%BF%BD%0D%0C%0D%2D%EF%BF%BD%EF%BF%BD%08%EF%BF%BD%C6%A6%EF%BF%BD%26%26J%EF%BF%BD%00%EF%BF%BD%1C%EF%BF%BD%1E%3F%00%00%00.bin, errno: 36, strerror: File name too long, e.what() = DB::Exception, Stack trace:

0. /usr/bin/clickhouse-server(StackTrace::StackTrace()+0x15) [0x7317e35]
1. /usr/bin/clickhouse-server(DB::Exception::Exception(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)+0x1e) [0x19caa8e]
2. /usr/bin/clickhouse-server(DB::throwFromErrno(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int)+0x1a4) [0x72ffea4]
3. /usr/bin/clickhouse-server(DB::WriteBufferFromFile::WriteBufferFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, int, unsigned int, char*, unsigned long)+0x1c5) [0x733d4a5]
4. /usr/bin/clickhouse-server(DB::createWriteBufferFromFileBase(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, unsigned long, unsigned long, int, unsigned int, char*, unsigned long)+0xac) [0x7345c9c]
5. /usr/bin/clickhouse-server(DB::IMergedBlockOutputStream::ColumnStream::ColumnStream(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, DB::CompressionSettings, unsigned long, unsigned long)+0x

I tried detaching the partition, but I am still not able to insert any other data. After detaching, the error looks like the following (an illustrative detach statement is sketched after this log):
2018.03.10 09:00:27.299291 [ 9 ] <Error> reached_pod_all.Distributed.DirectoryMonitor: Code: 76, e.displayText() = 

DB::Exception: Received from ip1:9000. DB::Exception: Cannot open file /data/clickhouse//data/AudiencePlanner/reached_pod/tmp_insert_20180207_21_21_0/%1F%EF%BF%BD%08%00%00%00%00%00%00%00%EF%BF%BDVrs%EF%BF%BDws%EF%BF%BDu%EF%BF%BDq%EF%BF%BD%0F%09%EF%BF%BD76%EF%BF%BD51P%EF%BF%BDQ%0A%0A%EF%BF%BDw%EF%BF%BDu%EF%BF%BD%EF%BF%BD%EF%BF%BD%EF%BF%BDw%042%0D%EF%BF%BD%0D%0C%0D%2D%EF%BF%BD%EF%BF%BD%08%EF%BF%BD%C6%A6%EF%BF%BD%26%26J%EF%BF%BD%00%EF%BF%BD%1C%EF%BF%BD%1E%3F%00%00%00.bin, errno: 36, strerror: File name too long. Stack trace:

0. /usr/bin/clickhouse-server(StackTrace::StackTrace()+0x15) [0x7317e35]
1. /usr/bin/clickhouse-server(DB::Exception::Exception(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)+0x1e) [0x19caa8e]
2. /usr/bin/clickhouse-server(DB::throwFromErrno(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, int)+0x1a4) [0x72ffea4]
3. /usr/bin/clickhouse-server(DB::WriteBufferFromFile::WriteBufferFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, int, unsigned int, char*, unsigned long)+0x1c5) [0x733d4a5]
4. /usr/bin/clickhouse-server(DB::createWriteBufferFromFileBase(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, unsigned long, unsigned long, int, unsigned int, char*, unsigned long)+0xac) [0x7345c9c]
5. /usr/bin/clickhouse-server(DB::IMergedBlockOutputStream::ColumnStream::ColumnStream(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, DB::CompressionSettings, unsigned long, unsigned long)+0xcc) [0x68cd7ac]
6. /usr/bin/clickhouse-server() [0x68ce674]
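
For reference, the detach attempt mentioned above would look roughly like the statement below. The partition value '2018-02-03' is only assumed from the part names (20180203_...) in the merge log and the PARTITION BY date clause; it is not quoted from the commands that were actually run:

-- Sketch of the detach attempt (partition value assumed from the log above).
ALTER TABLE AudiencePlanner.reached_pod DETACH PARTITION '2018-02-03';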

Please help me identify and resolve this issue.

1 answer:

Answer 0 (score: 0)

I found the cause of the problem: we were adding columns dynamically, and the table had grown to around 1000 columns.

Because of the number of columns, the file names generated for the data became very long strings, and writing those files failed with the "File name too long" error.
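
Most Linux filesystems limit a single file name to 255 bytes, which is why the write fails with errno 36 (File name too long). Since every column gets its own .bin and .mrk files inside a part directory, very long column names translate directly into very long file names. The query below is only a sketch for spotting the worst offenders, with the database and table names taken from the question:

-- List the longest column names, which produce the longest data file names.
SELECT name, length(name) AS name_length
FROM system.columns
WHERE database = 'AudiencePlanner' AND table = 'reached_pod'
ORDER BY name_length DESC
LIMIT 20;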
