In a ClickHouse replication cluster, adding a new node is a common operation to scale the cluster and improve fault tolerance. This knowledge base article provides a step-by-step guide on how to add a new node (in this case, clickhouse03) to an existing ClickHouse replication cluster named prod_cluster that uses ClickHouse Keeper for cluster coordination. Below is the existing cluster configuration:
clickhouse01 :) SELECT cluster, replica_num, host_name, host_address, port, is_local FROM system.clusters

Query id: 4ca6f391-c896-49bf-a902-4cd8fa67f619

┌─cluster──────┬─replica_num─┬─host_name─────────────┬─host_address─┬─port─┬─is_local─┐
│ prod_cluster │           1 │ clickhouse01-prod.xyz │ 10.70.121.54 │ 9000 │        1 │
│ prod_cluster │           2 │ clickhouse02-prod.xyz │ 10.70.121.61 │ 9000 │        0 │
└──────────────┴─────────────┴───────────────────────┴──────────────┴──────┴──────────┘
Follow these steps to add a new node to the ClickHouse replication cluster:
1. Edit config.xml on the New Node

Edit the config.xml file on the new ClickHouse node (clickhouse03) to set its display name. Note that override files under config.d need a <clickhouse> root element:

vim /etc/clickhouse-server/config.d/config.xml

<clickhouse>
    <display_name>clickhouse03</display_name>
</clickhouse>
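Configuration override files must be valid XML, so a quick well-formedness check can save a failed restart later. A minimal check, assuming xmllint (from libxml2) is installed on the host:

xmllint --noout /etc/clickhouse-server/config.d/config.xml

No output means the file parsed cleanly; a syntax error is reported with the offending line.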
2. Create macros.xml on the New Node

Create a macros.xml file on the new ClickHouse node (clickhouse03) to define the cluster-related macros:

vim /etc/clickhouse-server/config.d/macros.xml

<clickhouse>
    <macros>
        <shard>01</shard>
        <replica>03</replica>
        <cluster>prod_cluster</cluster>
    </macros>
</clickhouse>
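Once the node is running (after the restart in step 6), you can confirm the macros were picked up by querying the system.macros table:

clickhouse-client --query "SELECT macro, substitution FROM system.macros"

This should return the shard, replica, and cluster values defined above.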
3. Create remote-servers.xml on the New Node

Create a remote-servers.xml file on the new ClickHouse node (clickhouse03) to define the cluster's remote servers and replication settings:

vim /etc/clickhouse-server/config.d/remote-servers.xml

<clickhouse>
    <remote_servers replace="true">
        <prod_cluster>
            <secret>secret</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>clickhouse01-prod.xyz</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse02-prod.xyz</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse03-prod.xyz</host>
                    <port>9000</port>
                </replica>
            </shard>
        </prod_cluster>
    </remote_servers>
</clickhouse>
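Before going further, it is worth checking that the other replicas' hostnames resolve and that their native TCP port is reachable from clickhouse03. A quick sketch, assuming netcat (nc) is installed on the host:

for h in clickhouse01-prod.xyz clickhouse02-prod.xyz; do
    nc -zv "$h" 9000
done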
4. Update Existing Nodes

On the existing nodes (clickhouse01 and clickhouse02), update the remote-servers.xml configuration to include the new node (clickhouse03). Add the following <replica> block inside the existing <shard> section:

vim /etc/clickhouse-server/config.d/remote-servers.xml

<replica>
    <host>clickhouse03-prod.xyz</host>
    <port>9000</port>
</replica>
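ClickHouse is generally able to pick up changes to the remote_servers section at runtime, so the existing nodes should see the new replica without waiting for the restart in step 6. You can confirm from clickhouse01 or clickhouse02:

clickhouse-client --query "SELECT host_name, replica_num FROM system.clusters WHERE cluster = 'prod_cluster'"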
5. Configure use-keeper.xml on the New Node

Edit the use-keeper.xml file on the new ClickHouse node (clickhouse03) with the ClickHouse Keeper connection details:

vim /etc/clickhouse-server/config.d/use-keeper.xml

<clickhouse>
    <zookeeper>
        <node>
            <host>clickhouse-keeper1-prod.xyz</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper2-prod.xyz</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper3-prod.xyz</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>
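Before restarting, you can verify that the Keeper ensemble is reachable from clickhouse03. ClickHouse Keeper answers ZooKeeper-style four-letter commands on its client port, so, assuming ruok is in the server's allowed command list (it is in the default configuration):

echo ruok | nc clickhouse-keeper1-prod.xyz 9181

A healthy Keeper node replies with imok.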
6. Restart the ClickHouse Service

Restart the ClickHouse service on each node in the cluster, one at a time, then reconnect with the client:

systemctl restart clickhouse-server.service
clickhouse-client -u default --password='*****'
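After each restart, confirm the node came back healthy before moving on to the next one:

systemctl status clickhouse-server.service
clickhouse-client --query "SELECT version()"

On clickhouse03 the interactive client prompt should now read clickhouse03 :), reflecting the display_name set in step 1.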
7. Verify the Cluster Status

Check the cluster status to confirm the new node has been added:

clickhouse03 :) SELECT cluster, replica_num, host_name, host_address, port, is_local FROM system.clusters

Query id: 3afb660c-a38b-444c-8066-5a0d638db9ef

┌─cluster──────┬─replica_num─┬─host_name─────────────┬─host_address─┬─port─┬─is_local─┐
│ prod_cluster │           1 │ clickhouse01-prod.xyz │ 10.70.121.54 │ 9000 │        0 │
│ prod_cluster │           2 │ clickhouse02-prod.xyz │ 10.70.121.61 │ 9000 │        0 │
│ prod_cluster │           3 │ clickhouse03-prod.xyz │ 10.70.121.71 │ 9000 │        1 │
└──────────────┴─────────────┴───────────────────────┴──────────────┴──────┴──────────┘
The output should show all nodes in the cluster, including the new node, with their respective replica numbers and statuses.
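As a final end-to-end check, you can create a small replicated table and confirm that a row written on one node shows up on the new one. This is a minimal sketch with a hypothetical table name (replication_smoke_test); the {shard}, {cluster}, and {replica} placeholders are expanded from the macros defined in step 2:

# on clickhouse01: create the table on every replica, then insert one row
clickhouse-client --query "CREATE TABLE default.replication_smoke_test ON CLUSTER prod_cluster (id UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/{cluster}/replication_smoke_test', '{replica}') ORDER BY id"
clickhouse-client --query "INSERT INTO default.replication_smoke_test VALUES (1)"

# on clickhouse03: the row should appear once replication catches up
clickhouse-client --query "SELECT * FROM default.replication_smoke_test"

# clean up from any node when done
clickhouse-client --query "DROP TABLE default.replication_smoke_test ON CLUSTER prod_cluster SYNC"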
Done! We have successfully added a new node to the ClickHouse replication cluster managed with ClickHouse Keeper. The cluster is now more resilient and better able to handle increased data loads.