We are excited to publish the preliminary results of a NoSQL database benchmarking study we are currently conducting.
For this particular study we measured how much load we could drive through each database while keeping most of the working set in RAM. This test design is meant to establish an upper bound on how an application might perform when the load is demanding but there is tolerance for losing a small window of recent updates in a total node failure. Such requirements are common for pre-warmed caches, distributed web session stores, user profile matching, and real-time bidding.
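To make the trade-off concrete, here is a toy sketch (not any vendor's actual implementation) of the write-behind pattern these workloads rely on: writes are acknowledged as soon as they land in RAM, and a background thread persists them in batches, so a total node failure loses only the updates written since the last flush. The class name, flush interval, and "disk" dictionary are all hypothetical illustration.

```python
import threading

class WriteBehindStore:
    """Toy key-value store: writes are acknowledged from RAM immediately,
    and a background thread persists them in batches. Anything written
    after the last flush is lost if the node dies."""

    def __init__(self, flush_interval=0.05):
        self.ram = {}            # working set, held entirely in memory
        self.disk = {}           # stand-in for durable storage
        self.dirty = set()       # keys written since the last flush
        self.lock = threading.Lock()
        self._stop = threading.Event()
        self._flusher = threading.Thread(
            target=self._flush_loop, args=(flush_interval,), daemon=True)
        self._flusher.start()

    def put(self, key, value):
        # Acknowledge as soon as the value is in RAM -- this is what makes
        # the write path fast, and what creates the small loss window.
        with self.lock:
            self.ram[key] = value
            self.dirty.add(key)

    def get(self, key):
        with self.lock:
            return self.ram.get(key)

    def _flush_loop(self, interval):
        while not self._stop.wait(interval):
            self.flush()

    def flush(self):
        # Persist everything written since the last flush.
        with self.lock:
            for key in self.dirty:
                self.disk[key] = self.ram[key]
            self.dirty.clear()

    def crash(self):
        # Simulate total node failure: RAM is gone, only flushed data survives.
        self._stop.set()
        return dict(self.disk)
```

With a long flush interval, a write made just before a crash never reaches "disk": `put("a", 1)`, `flush()`, `put("b", 2)`, `crash()` returns only `{"a": 1}`. The workloads in this study accept that window in exchange for memory-speed writes.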
Couchbase*, with its roots in memcached, is far and away the front runner in this test, while Cassandra and MongoDB sustained load levels on par with one another (see the report preview for more on the test design).
It is always part of our process to invite vendors to suggest configurations before testing and to share our methodology and preliminary results with each of them before we write conclusions. We will post any updates here before the final report is released. (We will also be writing more about the benefits and drawbacks of producing benchmark studies over the next few days; please check back!)
Enough preamble. Here are the preliminary results:
*Couchbase sponsored this study. Sponsorship here means that the vendor offered to offset a portion of the benchmarking costs if we did a study measuring performance across popular NoSQL databases while keeping most of the working set in RAM. We considered this a useful and interesting question for ourselves and the community. It does not mean Couchbase participated in the test design, the testing itself, or the compilation of these preliminary results (although they are pretty excited about them 😉).