Tuning index_granularity for ClickHouse Performance


Understanding index granularity is crucial when troubleshooting performance issues in ClickHouse, particularly when optimizing how ClickHouse uses indexes during query execution.

In the context of ClickHouse indexes, granularity refers to how much data is grouped under a single index entry. Here’s a deeper look at how index granularity impacts performance and how it can be fine-tuned for better query efficiency.

Index Granularity in ClickHouse

  1. What is Index Granularity?
    • Index granularity in ClickHouse determines how many rows each index entry points to. For the primary index, the table-level index_granularity setting (8192 rows by default) defines the size of each granule. For skip indexes, such as minmax, bloom_filter, or set indexes, the GRANULARITY clause defines how many of those granules are covered by each index mark.
  2. Why is it Important?
    • The chosen granularity impacts how effectively the index can narrow down the data that needs to be scanned for a query. Finer granularity means more precise filtering, potentially reducing the amount of data to be read. However, it also means a larger index size, which can impact memory usage and potentially slow down index creation and maintenance.
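As a concrete sketch, the statements below show where the two granularity knobs live: the table-level index_granularity setting and a skip index’s GRANULARITY clause. The table and column names (events, user_id) are hypothetical, and the SQL is held in Python string constants purely for illustration rather than executed against a server.

```python
# Hypothetical DDL illustrating the two granularity settings in ClickHouse.
# Table/column names (events, user_id) are placeholders, not from a real schema.

GRANULARITY = 4  # granules covered by each skip-index mark

# Table-level index_granularity: rows per granule for the primary index.
CREATE_TABLE = """
CREATE TABLE events
(
    event_date Date,
    user_id    UInt64,
    payload    String
)
ENGINE = MergeTree
ORDER BY (event_date, user_id)
SETTINGS index_granularity = 8192;
"""

# Skip-index GRANULARITY: how many granules each index mark spans.
ADD_INDEX = f"""
ALTER TABLE events
    ADD INDEX user_id_bf user_id TYPE bloom_filter(0.01) GRANULARITY {GRANULARITY};
"""

# Existing data is not covered until the index is materialized.
MATERIALIZE = "ALTER TABLE events MATERIALIZE INDEX user_id_bf;"

# EXPLAIN indexes = 1 reports which indexes pruned parts and granules.
EXPLAIN = "EXPLAIN indexes = 1 SELECT count() FROM events WHERE user_id = 42;"

for stmt in (CREATE_TABLE, ADD_INDEX, MATERIALIZE, EXPLAIN):
    print(stmt.strip())
```

A lower GRANULARITY value here means more index marks and finer-grained skipping, at the cost of a larger index.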

Troubleshooting Performance with Granularity

  1. Full Table Scans Despite Indexes
    • If queries are resulting in full table scans despite having indexes, it might be due to too coarse a granularity setting. In such cases, the index might not be efficient enough in filtering out irrelevant data blocks.
  2. Balancing Granularity and Performance
    • A lower granularity setting (covering fewer rows per index mark) can lead to more effective data skipping, but at the cost of increased index size and potentially slower insert performance.
    • Conversely, higher granularity reduces the index size and can speed up insert operations but may lead to less effective filtering and more data scanning during queries.
  3. Adjusting Granularity
    • Granularity can be adjusted when creating a skip index. Carefully choose the granularity value based on your data and query patterns.
    • Consider experimenting with different granularity settings to find the optimal balance for your specific use case.
  4. Monitoring Impact
    • Monitor the impact of granularity changes on query performance. Use ClickHouse’s EXPLAIN indexes = 1 to see whether the index is actually pruning granules, and check read_rows in system.query_log to confirm that queries are reading less data.
  5. Consider Data Characteristics
    • The effectiveness of a particular granularity setting can also depend on the characteristics of your data. For instance, data with high cardinality or uneven distribution might benefit from different granularity settings compared to more uniformly distributed data.
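The trade-off described in the points above can be illustrated with a small, self-contained simulation: a toy minmax “index” over a sorted column, checked at two different granularities. This models only the skipping idea and is not ClickHouse’s actual implementation.

```python
# Toy model of minmax skip-index pruning: scan only the granules whose
# [min, max] range could contain the target value. Illustrative only.

def rows_scanned(values, granule_rows, target):
    """Return how many rows a minmax check would actually read."""
    scanned = 0
    for start in range(0, len(values), granule_rows):
        granule = values[start:start + granule_rows]
        if min(granule) <= target <= max(granule):  # minmax mark check
            scanned += len(granule)
    return scanned

# 10,000 sorted rows; look up a single value at two granularities.
data = list(range(10_000))
coarse = rows_scanned(data, granule_rows=8192, target=5)
fine = rows_scanned(data, granule_rows=1024, target=5)
print(coarse, fine)  # → 8192 1024
```

Finer granularity reads far fewer rows here, but a real index would also need one mark per granule, so halving the granule size roughly doubles the index size.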


Index granularity is a key factor in the performance of indexes in ClickHouse. It requires careful tuning to balance the size of the index with the effectiveness of data filtering during query execution. Understanding your dataset and regularly monitoring query performance are essential steps in optimizing index granularity for better overall performance in ClickHouse.
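The balance described above can be made concrete with a rough back-of-the-envelope sketch: for a hypothetical 1M-row column, compare the number of index marks (a proxy for index size) against the rows read for a point lookup, assuming an ideally selective index over sorted data where exactly one granule matches.

```python
# Rough trade-off sketch for a hypothetical 1M-row column: smaller granules
# mean fewer rows read per lookup but more index marks to store and check.
TOTAL_ROWS = 1_000_000

for granule_rows in (1024, 8192, 65536):
    marks = -(-TOTAL_ROWS // granule_rows)  # ceiling division
    rows_read = granule_rows                # one matching granule is scanned
    print(f"granule_rows={granule_rows:6d}  marks={marks:5d}  rows_read={rows_read}")
```

The right point on this curve depends on your data distribution and query patterns, which is why the article recommends experimenting and monitoring rather than picking a value once.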

About Shiv Iyer
Open Source Database Systems Engineer with a deep understanding of Optimizer Internals, Performance Engineering, Scalability and Data SRE. Shiv currently is the Founder, Investor, Board Member and CEO of multiple Database Systems Infrastructure Operations companies in the Transaction Processing Computing and ColumnStores ecosystem. He is also a frequent speaker in open source software conferences globally.