When running databases in production, you do not, as a general rule, run only one server. A minimum of three is most common, and it goes up from there. Two of the primary reasons we do this are performance and resiliency. In this series of articles, we'll talk about the current clustering design of FeatureBase as well as the next iteration we're working on.
Each database (Postgres, MySQL, MongoDB, Cassandra, etc.) has a slightly different set of features when it comes to clusters, but there are some commonalities; let's take a look at a few of them!
At its simplest, replication is a copy of the data on another machine. This can be used to provide a few things.
Behold my amazing art skills! Which are totally not aided by LucidChart!
In this example, all reads/writes from users go to one database, and in the background, all the changes are replicated to another database. The result is that we have a complete second copy of the database that lags behind the primary database by a small amount of time (hopefully on the order of milliseconds, or seconds at worst).
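To make that lag concrete, here's a minimal, self-contained sketch of the idea. All names here are invented for illustration; a real database ships a write-ahead log over the network rather than calling a method, but the shape is the same: the primary applies a write immediately, and the replica only sees it once it has applied the change log.

```python
class Primary:
    """Toy primary: applies writes and appends them to a change log."""
    def __init__(self):
        self.data = {}
        self.log = []  # change log shipped to the replica

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))


class Replica:
    """Toy replica: applies the primary's change log after the fact."""
    def __init__(self):
        self.data = {}
        self.applied = 0  # how far into the log we've gotten

    def catch_up(self, primary):
        # Apply any log entries we haven't seen yet.
        for key, value in primary.log[self.applied:]:
            self.data[key] = value
        self.applied = len(primary.log)


primary = Primary()
replica = Replica()

primary.write("user:1", "alice")
print(replica.data)   # {} -- the replica lags until it catches up

replica.catch_up(primary)
print(replica.data)   # {'user:1': 'alice'}
```

The window between `primary.write` and `replica.catch_up` is the replication lag: in a healthy cluster it's milliseconds, but it is never zero.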
One of the nice things about this is that if the primary database fails (HDD fails, network dies, whatever), we can redirect all the traffic to the cold stand-by, and it becomes the new primary. In the best case, this will seem like a brief latency spike to connected users. In the worst case, existing connections are dropped and the client has to re-establish a connection to resume querying.
Looking at that diagram, you might think to yourself:
"Self, we have an entire second database that isn't doing anything! Let's put it to work!"
And you can!
We don't want to redirect writes to the stand-by, because then they would get out of sync. But what if we redirect reads? Those don't change the data (usually, heh), right?
So what if we did:
We can still do failover with this! If/when the primary node fails, we just redirect the write traffic to the read replica. If the read replica fails, we redirect the reads back to the primary database.
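The routing logic described above fits in a few lines. Here's a hedged sketch (the `Router` class and its statement-prefix check are invented for illustration; real proxies like PgBouncer or ProxySQL do this far more carefully): writes always go to the primary, reads prefer the replica, and either role falls over to the surviving node.

```python
class Router:
    """Toy read/write splitter: writes go to the primary, reads to the
    replica, and traffic is redirected when a node is marked down."""
    def __init__(self):
        self.up = {"primary": True, "replica": True}

    def route(self, query):
        # Crude write detection by statement prefix -- illustration only.
        is_write = query.strip().upper().startswith(
            ("INSERT", "UPDATE", "DELETE"))
        if is_write:
            # Writes must go to the primary; promote the replica if it's down.
            return "primary" if self.up["primary"] else "replica"
        # Reads prefer the replica, falling back to the primary.
        return "replica" if self.up["replica"] else "primary"


router = Router()
print(router.route("SELECT * FROM users"))    # replica
print(router.route("INSERT INTO users ..."))  # primary

router.up["replica"] = False                  # replica fails
print(router.route("SELECT * FROM users"))    # primary
```

Note what the failover case implies: the surviving node now carries both read and write traffic, which matters for the sizing discussion below.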
There's an important consideration here. When you split traffic like this, CPU usage on each node goes down, so it can be tempting to shrink the nodes' CPU and memory to save costs. And it would save you money! Right up until one of the nodes goes down: we redirect its traffic, the surviving node's CPU requirements roughly double, it hits 100% utilization and collapses, and now both nodes are down.
Make sure you test your failover. Another common error is failing to adjust your server sizes as traffic gradually grows: both servers may have been sized correctly a month ago, but if traffic has increased 20% over that month and then one server fails, the remaining server may no longer be able to handle the combined load.
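The sizing rule here is just arithmetic, and it's worth writing down. A rough sketch (the function name and the 100%-per-node cap are illustrative assumptions; real headroom planning should also budget for replication and background work):

```python
def survives_failover(per_node_load, nodes=2):
    """After one node fails, the survivors must absorb its traffic.
    per_node_load: steady-state CPU utilization per node (0.0 - 1.0).
    Returns True if the remaining nodes can carry the total load,
    assuming each node caps out at 100% CPU."""
    total_load = per_node_load * nodes
    remaining_capacity = (nodes - 1) * 1.0
    return total_load <= remaining_capacity


# Two nodes each at 45% CPU: 90% total fits on the one survivor.
print(survives_failover(0.45))        # True

# After 20% traffic growth, each node sits at 54%: 108% total does not.
print(survives_failover(0.45 * 1.2))  # False
```

In other words, with two nodes you need each one to run below roughly 50% utilization at steady state, or failover will push the survivor past its limit.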
In the next part, we'll talk about multi-producer scenarios. If you have any feedback or corrections on anything in this article, please send an e-mail to email@example.com.