# Distributed Tables: Replicated Cluster
### Introduction

A single shard with multiple replicas. When queries on a single node become too slow, upgrade to a cluster so the query load can be spread across replicas.

#### docker compose

```
# Main node
services:
  my-keeper-tron-main:
    image: clickhouse/clickhouse-server:25.4
    container_name: my-keeper-tron-main
    hostname: keeper1
    environment:
      KEEPER_ID: 1
    volumes:
      - ./keeper_single.xml:/etc/clickhouse-keeper/keeper_config.xml:ro
      - ./keeper-data:/var/lib/clickhouse-keeper
      - ./keeper-data:/var/lib/clickhouse
      - ./keeper-logs:/var/log/clickhouse-keeper
    command: clickhouse-keeper --config-file /etc/clickhouse-keeper/keeper_config.xml
    networks:
      - tron_main_network

  my-ch-tron-main:
    image: clickhouse/clickhouse-server:25.4
    container_name: my-ch-tron-main
    hostname: clickhouse1
    restart: always
    depends_on:
      - my-keeper-tron-main
    ports:
      - 9003:9000
      - 8123:8123
    environment:
      REPLICA_ID: node1
      CLICKHOUSE_PASSWORD: 123456
    volumes:
      - ./zookeeper.xml:/etc/clickhouse-server/conf.d/zookeeper.xml:ro
      - ./ch-data:/var/lib/clickhouse
      - ./ch-logs:/var/log/clickhouse-server
    networks:
      - tron_main_network

networks:
  tron_main_network:
    external: true
```

```
# Node 2
services:
  my-ch-tron-main:
    image: clickhouse/clickhouse-server:25.4
    container_name: my-ch-tron-main
    hostname: clickhouse2
    restart: always
    ports:
      - 9003:9000
      - 8123:8123
    environment:
      REPLICA_ID: node2
      CLICKHOUSE_PASSWORD: 123456
    volumes:
      - ./zookeeper.xml:/etc/clickhouse-server/conf.d/zookeeper.xml:ro
      - ./ch-data:/var/lib/clickhouse
      - ./ch-logs:/var/log/clickhouse-server
    networks:
      - tron_main_network
    deploy:
      resources:
        limits:
          cpus: '8'
          memory: 20G

networks:
  tron_main_network:
    external: true
```

```
# Node 3
services:
  my-ch-tron-main:
    image: clickhouse/clickhouse-server:25.4
    container_name: my-ch-tron-main
    hostname: clickhouse3
    restart: always
    ports:
      - 9003:9000
      - 8123:8123
    environment:
      REPLICA_ID: node3
      CLICKHOUSE_PASSWORD: 123456
    volumes:
      - ./zookeeper.xml:/etc/clickhouse-server/conf.d/zookeeper.xml:ro
      - ./ch-data:/var/lib/clickhouse
      - ./ch-logs:/var/log/clickhouse-server
    networks:
      - tron_main_network

networks:
  tron_main_network:
    external: true
```
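Assuming each of the three compose files above is saved as `docker-compose.yml` on its respective host (the file layout and startup order here are my assumption, not stated in the original), bringing the cluster up might look like the following. Note that `tron_main_network` is declared `external`, so it must exist on every host before `docker compose up`; with plain bridge networks this only connects containers on the same host, so a multi-host deployment would additionally need an overlay network or equivalent routing.

```shell
# On the main host: create the shared network (fails harmlessly if it
# already exists), then start Keeper and the first replica together.
docker network create tron_main_network
docker compose up -d

# On hosts 2 and 3: same network name, then start that host's replica.
docker network create tron_main_network
docker compose up -d

# Optional health check: ClickHouse Keeper answers four-letter-word
# commands on its TCP port; "ruok" should return "imok" when healthy.
echo ruok | nc keeper1 9181
```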
#### keeper_single.xml

```
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id replace="1" from_env="KEEPER_ID">1</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <enable_reconfiguration>true</enable_reconfiguration>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>keeper1</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
```

#### zookeeper.xml

```
<clickhouse>
    <zookeeper>
        <node><host>keeper1</host><port>9181</port></node>
    </zookeeper>
    <allow_experimental_cluster_discovery>1</allow_experimental_cluster_discovery>
    <remote_servers>
        <my_cluster>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>clickhouse1</host><port>9000</port></replica>
                <replica><host>clickhouse2</host><port>9000</port></replica>
                <replica><host>clickhouse3</host><port>9000</port></replica>
            </shard>
        </my_cluster>
    </remote_servers>
    <macros>
        <cluster>my_cluster</cluster>
        <replica from_env="REPLICA_ID"/>
    </macros>
</clickhouse>
```

#### Create the table

```
CREATE TABLE tron ON CLUSTER 'my_cluster'
(
    txType UInt8,
    fromOrTo UInt8,
    blockNum UInt32,
    blockIndex UInt32,
    contract String,
    address String,
    address2 String,
    value Decimal128(0),
    txIndex UInt32,
    note String,
    txTime DateTime('Asia/Shanghai'),
    txid FixedString(64),
    txState Int8,
    balance Decimal128(0)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{cluster}/tron', '{replica}')
PARTITION BY intDiv(blockNum, 1000000)
ORDER BY (address, blockNum, txType, fromOrTo, contract);
```
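Once the table exists, replication can be sanity-checked from any node with queries against ClickHouse's system tables. This is a sketch against the cluster and table names used above; it requires the running cluster.

```sql
-- Confirm all three replicas are visible in the cluster definition
-- from zookeeper.xml (one shard, three replicas expected).
SELECT host_name, shard_num, replica_num
FROM system.clusters
WHERE cluster = 'my_cluster';

-- Check replication health for the tron table: each node should show
-- its own replica_name (from the REPLICA_ID macro), and absolute_delay
-- should stay near zero when replicas are keeping up.
SELECT database, table, replica_name, is_leader, absolute_delay
FROM system.replicas
WHERE table = 'tron';
```

Because `internal_replication` is `true`, an `INSERT` executed on any one replica is replicated to the others through Keeper; reading the same row back from a different node is the simplest end-to-end test.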
潘孝钦
February 3, 2026, 16:20