RDBMS Archival for Leaner & Faster OLTP Operations with ChistaDATA Fabric


It is a little-known fact that ChistaDATA Inc. is a spinoff from MinervaDB Inc. MinervaDB is one of the leading providers of enterprise-class consultative support and managed services for open source transactional databases such as PostgreSQL, MySQL, MariaDB, and MyRocks. Over the last 10 years it has served hundreds of enterprises, including Fox Media, PayPal, Nike, National Geographic, GE, Caterpillar, and Unilever, helping them make RDBMS operations lean, fast, secure, and fault-tolerant.

Across this wide-ranging experience with the RDBMS stack, we noticed an interesting pattern in the performance issues that arise with PostgreSQL and MySQL at scale.

Problem statement

1. RDBMS isn’t built for scale

Neither of these databases was built for web scale as we know it today. Transactional data is inherently small (usually terabytes or tens of terabytes, versus the petabytes that characterize analytical datasets). However, as the data within an RDBMS cluster approaches hundreds of terabytes (which is becoming commonplace with the growing pace of data generation), RDBMS performance degrades significantly (and often suddenly) due to bloat. At these sizes even basic transactional queries take tens of seconds (if not minutes), severely increasing the cost per query, jeopardizing ACID database operations, and in some cases even affecting business continuity.

The typical solution to this conundrum in most enterprises is partitioning & horizontal scaling, i.e., throwing the kitchen sink (more resources) at the problem. This is an extremely cost-intensive approach that addresses the symptom but not the root cause. Note that an RDBMS tends to have a very low data compression factor (a consequence of its row-based storage paradigm, where heterogeneous data types are stored together), which multiplies storage costs as the number of shards and replicas in a system grows. Cost per query also rises as the price of better latency & throughput, and unfortunately those gains are squandered as data scale grows further. In other words, this is a temporary fix, not a permanent resolution.

2. RDBMS performs poorly for analytical reads

It is fairly interesting to note that a number of enterprises still use the RDBMS for a portion (or more) of their analytical read queries. The RDBMS is simply not architected for most kinds of analytical reads, which typically seek a small number of columns from a wide table. As a row-based store, the RDBMS reads every column of the selected rows, loading a great deal of query-irrelevant data from disk to RAM and severely slowing down query execution.
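This read amplification is easy to see in a toy model. The sketch below (plain Python, with made-up table dimensions) contrasts how many bytes a row store versus a column store must touch to aggregate a single column of a 100-column table:

```python
import random

# Hypothetical 100-column "wide" table with 10,000 rows, stored two ways.
NUM_ROWS, NUM_COLS = 10_000, 100
row_store = [[random.random() for _ in range(NUM_COLS)] for _ in range(NUM_ROWS)]
column_store = [list(col) for col in zip(*row_store)]

# Analytical query: SUM over a single column (here, column 42).
# Row store: every row (all 100 columns) must be materialized to reach it.
bytes_touched_row = NUM_ROWS * NUM_COLS * 8      # 8 bytes per float value
total_row = sum(row[42] for row in row_store)

# Column store: only the one queried column is read from storage.
bytes_touched_col = NUM_ROWS * 8
total_col = sum(column_store[42])

print(f"row store touches {bytes_touched_row // bytes_touched_col}x more data")
# -> row store touches 100x more data
```

The 100x gap here is exactly the column count; in practice the gap is narrowed somewhat by page-level caching and covering indexes, but the underlying asymmetry remains.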

The execution of analytical read queries in an RDBMS, especially “bad” or resource-intensive SQL, not only consumes critical resources that are absolutely essential for the timely execution of transactional WRITE queries, but can also bring down the cluster, causing unexpected downtime that again affects business continuity.

Potential solutions

In our experience, there are 2 ways to solve this problem:

1. Dataset-specific database tuning & configuration

This is precisely what we do at MinervaDB, for instance. We tune MySQL, PostgreSQL, MariaDB, et al. all the way from the Linux kernel to the various configuration parameters, SQL engineering (index & query optimization), and efficient horizontal scale-out & capacity planning, to make sure the database runs smoothly at any scale. We’ve supported hundreds of terabytes of data within RDBMS clusters for many of our customers. You can read our blogs about tuning PostgreSQL, MySQL, and MariaDB internals here.

2. A leaner & more performant RDBMS via column-store archival

This solution stems from our observation that historical data in an RDBMS (however an enterprise may define it, with any time-based cutoff) is most often touched only by analytical read queries. Its presence in an RDBMS therefore only causes bloat and adversely impacts WRITE query performance by interspersing analytical reads with transactional work.

A solution would be to remove this historical data from the RDBMS entirely and archive it into an OLAP DBMS that is ideally suited for cost-efficient storage and analytical read queries. This is precisely what we propose in this post.

Archival in ColumnStore for leaner & faster RDBMS operations

Our RDBMS archival strategy leverages the following technologies:

1/ ClickHouse, a truly columnar store designed from the ground up for extreme data compaction, high ingestion rates, and blazing-fast analytical read query execution.

2/ ChistaDATA Fabric, a veritable “database gateway” above the database layer in the form of a lightweight & secure reverse proxy server with capabilities such as read-write splitting, multi-DB wire protocol awareness, SQL blacklisting, load balancing, query logging, etc.

And the architecture diagram is as follows:

Fig 1: ChistaDATA Fabric and ClickHouse for RDBMS Archival

The process (and associated benefits) of RDBMS archival with ChistaDATA Fabric and ClickHouse is as follows:

01: Historical datasets archived & compressed in ClickHouse

Historical datasets that are accessed only rarely in transactions and are mostly relevant to analytical queries are identified for archival in ClickHouse; the archival itself can be commenced with ChistaDATA’s oltp-archiver service. The service is designed for continuous data syncing from an OLTP database using CDC (change data capture); it currently supports MySQL and PostgreSQL through Debezium-Kafka for ingress and ClickHouse as egress. It can also create tables and map columns automatically, and its modular design allows for extensible ingress & egress connectors.
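As a rough illustration of the CDC transform step, the sketch below maps a Debezium-style change event (the standard `op`/`before`/`after` envelope Debezium produces to Kafka) to a ClickHouse `INSERT` statement. The table name, column names, and the append-only handling of deletes are illustrative assumptions, not the actual behavior of the oltp-archiver service:

```python
import json
from typing import Optional

def debezium_to_clickhouse_insert(event: str, target_table: str) -> Optional[str]:
    """Turn one Debezium envelope into a ClickHouse INSERT statement.

    Only inserts, snapshot reads, and updates ('c', 'r', 'u') are archived;
    deletes are skipped, since archived history is append-only in this sketch.
    """
    payload = json.loads(event)["payload"]
    if payload["op"] not in ("c", "r", "u"):
        return None
    row = payload["after"]
    cols = ", ".join(row)
    vals = ", ".join(repr(v) for v in row.values())
    return f"INSERT INTO {target_table} ({cols}) VALUES ({vals})"

# Example: an order row captured from the OLTP database.
event = json.dumps({"payload": {
    "op": "c", "before": None,
    "after": {"order_id": 42, "amount": 99.5, "status": "shipped"},
}})
print(debezium_to_clickhouse_insert(event, "archive.orders"))
# -> INSERT INTO archive.orders (order_id, amount, status) VALUES (42, 99.5, 'shipped')
```

A production pipeline would of course batch rows and use parameterized inserts rather than string interpolation; the point here is only the shape of the envelope-to-column mapping.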

Why archive to ClickHouse? As a truly columnar store that supports tens of compression algorithms (a different codec can be applied to each column of a table), ClickHouse enables extreme data compression. As this ClickBench benchmark indicates, ClickHouse’s truly columnar design compresses data 5-10x more efficiently than OLTP DBMS such as MySQL, PostgreSQL, MongoDB, and MariaDB.

Fig 2: ClickBench comparison of compaction factor of ClickHouse and RDBMS
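A toy experiment hints at why the columnar layout compresses so much better: low-cardinality values become long repetitive runs when each column is stored contiguously. The sketch below uses an illustrative three-column schema and generic zlib standing in for ClickHouse's per-column codecs:

```python
import random
import struct
import zlib

random.seed(0)
# 50,000 rows of (user_id, country_code, status): a toy heterogeneous table.
rows = [(random.randrange(10**6),
         random.choice(["US", "IN", "DE"]),
         random.choice(["ok", "ok", "ok", "err"])) for _ in range(50_000)]

# Row layout: values of different types interleaved, as on an RDBMS page.
row_bytes = b"".join(struct.pack("i", u) + c.encode() + s.encode().ljust(3)
                     for u, c, s in rows)

# Column layout: each column stored contiguously, as in a column store.
col_bytes = (b"".join(struct.pack("i", u) for u, _, _ in rows)
             + b"".join(c.encode() for _, c, _ in rows)
             + b"".join(s.encode().ljust(3) for _, _, s in rows))

ratio = len(zlib.compress(row_bytes)) / len(zlib.compress(col_bytes))
print(f"columnar layout compresses ~{ratio:.1f}x better on this toy table")
```

Both byte strings hold identical data; only the layout differs. ClickHouse goes much further than this, with specialized codecs (delta, double-delta, LZ4, ZSTD, etc.) chosen per column type.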

This unlocks 2 benefits:

  • Cost of storage in ClickHouse is a small fraction of that in the RDBMS.
  • The RDBMS, bereft of historical data, is much leaner, less expensive, and more performant at its core capability: transactions.

Do note that this also means the data in ClickHouse is no longer covered by ACID guarantees, which is the key tradeoff of archival in a column store and must be kept in mind. Hence, we always recommend archiving only data that is no longer relevant to transactional workloads and is relatively infrequently accessed.

02: Fabric splits analytical reads to ClickHouse

Fabric is multi-database wire protocol aware and can easily connect to ClickHouse, MySQL, PostgreSQL, and a growing list of such databases. With its read-write splitting capability, ChistaDATA Fabric can identify all incoming queries which are analytical reads and direct them towards ClickHouse. As a truly columnar store with extreme compression, primary & secondary indexing, partitioning, vectorized query execution, massively parallel processing, materialized views, and specialized storage engines, ClickHouse is the world’s fastest OLAP DBMS, designed from the ground up for analytical query processing. Read more about why ClickHouse is so fast in the linked article.
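To make the read-write splitting idea concrete, here is a deliberately simplified routing sketch. Fabric's real wire-protocol-aware classification is far more sophisticated than keyword matching; the heuristics and backend names below are purely illustrative:

```python
import re

# Toy read/write splitter: classify each incoming SQL statement and pick a
# backend. Statements with aggregation keywords are treated as analytical.
ANALYTICAL_HINTS = re.compile(
    r"\b(GROUP BY|SUM|AVG|COUNT|WINDOW|HAVING)\b", re.IGNORECASE)

def route(sql: str) -> str:
    stmt = sql.strip()
    if not stmt.upper().startswith("SELECT"):
        return "rdbms"          # writes and DDL stay on the OLTP cluster
    if ANALYTICAL_HINTS.search(stmt):
        return "clickhouse"     # aggregate-heavy reads go to the column store
    return "rdbms"              # point lookups stay transactional

print(route("INSERT INTO orders VALUES (1, 'new')"))         # -> rdbms
print(route("SELECT COUNT(*) FROM orders GROUP BY status"))  # -> clickhouse
print(route("SELECT * FROM orders WHERE order_id = 42"))     # -> rdbms
```

A real splitter also has to consider which tables have been archived, session state, and transaction boundaries, which is why doing this at a protocol-aware proxy rather than in application code matters.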

This ClickBench Benchmark indicates that ClickHouse can be many orders of magnitude (100x – 10,000x) faster than an OLTP DBMS such as MySQL, PostgreSQL, MariaDB, MongoDB, etc. for analytical reads.

Fig 3: ClickBench comparison of hot-run query performance of ClickHouse and RDBMS

This unlocks 2 benefits:

  • Query latency and cost per read query are extremely low (and throughput extremely high) in ClickHouse.
  • The RDBMS no longer serves analytical read queries, and its resources are preserved for precisely what it does best: transactions.

03: Fabric splits transactional writes to RDBMS

This step is self-evident: all transactional queries, i.e., WRITEs, are routed to the RDBMS. With a leaner cluster bereft of historical data, and with resources used almost exclusively for WRITEs, RDBMS query throughput & performance are much higher than before.

04: Fabric’s unified database monitoring & observability plane

Fabric also enables a unified plane of monitoring, observability & control. Metrics from RDBMS and ClickHouse, as well as query logs aggregated in the proxy servers are collected in ClickHouse and visualized in Grafana, enabling a single pane of glass to view the performance of the system as a whole as well as at a per-query level. This dashboard can be customized to a company’s preferences.

Fig 4: Sample custom observability dashboard enabled by ChistaDATA Fabric
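The per-query view such a dashboard exposes boils down to aggregating proxy query logs by backend and query fingerprint. The sketch below does this in plain Python over hand-written log records; the field names are illustrative, not Fabric's actual log schema:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical proxy query-log records (in practice these land in ClickHouse
# and are visualized in Grafana).
logs = [
    {"backend": "clickhouse", "fingerprint": "SELECT sum(amount) ...", "ms": 120},
    {"backend": "rdbms",      "fingerprint": "INSERT INTO orders ...", "ms": 4},
    {"backend": "rdbms",      "fingerprint": "INSERT INTO orders ...", "ms": 6},
]

# Group latencies by (backend, query fingerprint).
stats = defaultdict(list)
for entry in logs:
    stats[(entry["backend"], entry["fingerprint"])].append(entry["ms"])

# Emit one summary line per query shape per backend.
for (backend, fp), times in sorted(stats.items()):
    print(f"{backend:10s} {fp:26s} n={len(times)} avg={mean(times):.1f}ms")
```

Because both the RDBMS and ClickHouse traffic flow through the same proxy, the same aggregation covers the whole system in one pane of glass.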

The proxy additionally enables SQL blacklisting, which stops both resource-intensive and malicious SQL at the proxy itself, before it reaches the database servers. Database credentials also never need to be shared externally, since Fabric acts as a secure HTTP endpoint with encrypted database communications. This, along with its load-balancing capabilities, helps improve both the security and resilience of the database system as a whole.


In this article, we’ve explored a key use case of ChistaDATA Fabric: RDBMS archival in ClickHouse to achieve a leaner and faster RDBMS instance. We have implemented this functionality in ChistaDATA Cloud, and it is available for anyone to try to improve the performance of their RDBMS in real time.

Fig 5: OLTP archival service live on ChistaDATA Cloud

With Fabric and ClickHouse, ChistaDATA is committed to helping you make your database stack leaner, faster, more secure & fault-tolerant. And we do this wherever your data is housed: on-premises, in a VPC, or in the public cloud. Reach out to us to help you implement Fabric and make your database investment more successful, and stay tuned to explore more use cases of the ChistaDATA Fabric.