The repository was moved out of GitHub due to illegal discriminatory restrictions against Russian Crimea and sovereign Crimeans.
# libmdbx

Revised and extended descendant of Symas LMDB.
## Project Status

- The stable versions (`stable/0.0` and `stable/0.1` branches) of MDBX are frozen: no new features or API changes, only bug fixes.

- The next version (`devel` branch) is under active non-public development; the current API and feature set are extremely volatile.

- The immediate development goal is settling a stable API and a stable internal database format, which will allow implementing all of the planned features:
- Integrity check by Merkle tree;
- Support for raw block devices;
- Separate place (HDD) for large data items;
- Using "Roaring bitmaps" inside garbage collector;
- Non-sequential reclaiming, like PostgreSQL's Vacuum;
- Asynchronous lazy data flushing to disk(s);
- etc...
Don't miss libmdbx ports for other runtimes:

| Runtime | GitHub | Author |
|---|---|---|
| JVM | mdbxjni | Castor Technologies |
| .NET | mdbx.NET | Jerry Wang |
Nowadays MDBX works on Linux and other OSes compliant with POSIX.1-2008, and also supports Windows (since Windows XP) as a complementary platform. Support for other OSes could be implemented on a commercial basis. However, such enhancements (i.e. pull requests) can be accepted into the mainstream only when a corresponding public and free Continuous Integration service becomes available.
## Overview

libmdbx is an embedded lightweight key-value database engine, oriented for performance, for Linux and Windows.

libmdbx allows multiple processes to read and update several key-value tables concurrently, while being ACID-compliant, with minimal overhead and O(log N) operation cost.

libmdbx enforces serializability for writers with a single mutex, while parallel readers are wait-free and use no atomic/interlocked operations; write and read transactions do not block each other.

libmdbx can guarantee consistency after a crash, depending on the operation mode.

libmdbx uses B+trees and memory-mapping, and doesn't use a WAL, which might be a caveat for some workloads.
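For orientation, a minimal usage sketch of the LMDB-style API that libmdbx inherits might look as follows. This is an illustration only: it requires the libmdbx headers and library to build, error handling is elided, and exact signatures may differ between libmdbx versions.

```c
/* Sketch only: needs libmdbx to build; error handling elided for brevity.
 * Function names follow the mdbx_-prefixed API referenced in this README. */
#include "mdbx.h"

int main(void) {
  MDBX_env *env = NULL;
  MDBX_txn *txn = NULL;
  MDBX_dbi dbi;

  mdbx_env_create(&env);
  mdbx_env_open(env, "./example-db", 0 /* flags */, 0664);

  /* only one writer at a time: beginning a write txn takes the writer mutex */
  mdbx_txn_begin(env, NULL, 0, &txn);
  mdbx_dbi_open(txn, NULL, 0, &dbi);

  /* MDBX_val is iovec-like: { pointer, length } */
  MDBX_val key = {"hello", 5}, data = {"world", 5};
  mdbx_put(txn, dbi, &key, &data, 0);
  mdbx_txn_commit(txn); /* durability depends on the chosen sync mode */

  mdbx_env_close(env);
  return 0;
}
```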
## Comparison with other DBs

For now, please refer to the chapter "BoltDB comparison with other databases", which is also (mostly) applicable to MDBX.
## History

The libmdbx design is based on Lightning Memory-Mapped Database (LMDB). Initial development took place within the ReOpenLDAP project; about a year later libmdbx was spun off into a separate project, which was presented at the Highload++ 2015 conference.

Since early 2017 libmdbx has been used in Fast Positive Tables, and its development is funded by Positive Technologies.
## Acknowledgments

- Howard Chu (Symas Corporation) - the author of LMDB, from which MDBX originated in 2015.
- Martin Hedenfalk <martin@bzero.se> - the author of the btree.c code, which was used to begin development of LMDB.
## Main features

libmdbx inherits all features and characteristics from LMDB:

- Key-value pairs are stored in ordered map(s); keys are always sorted, and range lookups are supported.

- Data is memory-mapped into each worker DB process and can be accessed zero-copy from within transactions.

- Transactions are ACID-compliant, thanks to MVCC and CoW. Writes are strictly serialized and aren't blocked by reads; transactions can't conflict with each other. Reads are guaranteed to get only committed data (relaxing serializability).

- Read transactions are non-blocking and use no atomic operations. Readers don't block each other and aren't blocked by writers. Read performance scales linearly with CPU core count.

  Nonetheless, "connecting to the DB" (starting the first read transaction in a thread) and "disconnecting from the DB" (closing the DB or thread termination) require a lock acquisition to register/unregister at the "readers table".

- Keys with multiple values are stored efficiently, without key duplication, sorted by value, including integers (valuable for secondary indexes).

- Efficient operation on short fixed-length keys, including 32/64-bit integer types.

- WAF (Write Amplification Factor) and RAF (Read Amplification Factor) are O(log N).

- No WAL or transaction journal. In case of a crash, no recovery is needed. No regular maintenance is required. Backups can be made on the fly from a working DB without freezing writers.

- No additional memory management; everything is done by basic OS services.
## Improvements over LMDB

- Automatic dynamic DB size management according to the parameters specified by the `mdbx_env_set_geometry()` function, including the growth step and truncation threshold, as well as the choice of page size.

- Automatic returning of freed pages into unallocated space at the end of the database file, with optional automatic shrinking of it. This reduces the number of pages resident in RAM and circulating in disk I/O. In fact, libmdbx constantly performs DB compactification, without spending additional resources on it.

- `LIFO RECLAIM` mode: the newest pages are picked for reuse instead of the oldest, which minimizes the reclaim loop and makes its execution time independent of the total page count. As a result, OS kernel cache mechanisms work with maximum efficiency. With disk controllers or storage with BBWC this can greatly improve write performance.

- Fast estimation of range-query result size via the `mdbx_estimate_range()`, `mdbx_estimate_move()` and `mdbx_estimate_distance()` functions, e.g. for choosing the optimal query execution plan.

- The `mdbx_chk` tool for DB integrity checking.

- Support for keys and values of zero length, including multi-values (aka sorted duplicates).

- Ability to assign up to 3 persistent 64-bit markers to a committing transaction with `mdbx_canary_put()` and then read them in a read transaction with `mdbx_canary_get()`.

- Ability to update or delete a record and get its previous value via `mdbx_replace()`, including updating a specific item of a multi-value with the same key.

- Sequence generation via `mdbx_dbi_sequence()`.

- `OOM-KICK` callback. `mdbx_env_set_oomfunc()` sets a callback which will be invoked when DB space is exhausted during a long-running read transaction in parallel with extensive updating. The callback is invoked with the PID and pthread_id of the offending thread as parameters and can do any of the following to remedy the problem:
  - wait for the read transaction to finish normally;
  - kill the offending process (signal 9), if a separate process is doing the long-running read;
  - abort or restart the offending read transaction if it's running in a sibling thread;
  - abort the current write transaction, returning an error code.

- Ability to open the DB in exclusive mode with the `MDBX_EXCLUSIVE` flag.

- Ability to get how far the current read-transaction snapshot lags behind the latest version of the DB via `mdbx_txn_straggler()`.

- Ability to explicitly update an existing record rather than insert a new one, implemented as the `MDBX_CURRENT` flag for `mdbx_put()`.

- Fixed `mdbx_cursor_count()`, which returns the correct count of duplicates (aka multi-values) in all cases and for any cursor position.

- `mdbx_env_info()` for getting additional info, including the number of the oldest DB snapshot which is in use by one of the readers.

- `mdbx_del()` doesn't ignore the additional `data` argument (specifier) for tables without duplicates (without the `MDBX_DUPSORT` flag); if `data` is not null, it is always used to verify the record being deleted.

- Ability to open a dbi-table with simultaneous race-free setup of comparators for keys and values, via `mdbx_dbi_open_ex()`.

- `mdbx_is_dirty()` to find out whether a given key or value is on a dirty page, which is useful to avoid copy-out before updates.

- Correct update of the current record in the `MDBX_CURRENT` mode of `mdbx_cursor_put()`, including sorted duplicates.

- Check whether there is a row with data after the current cursor position via `mdbx_cursor_eof()`.

- Additional error code `MDBX_EMULTIVAL`, returned by `mdbx_put()` and `mdbx_replace()` in case of an ambiguous update or delete.

- Ability to get a value by key together with its duplicates count via `mdbx_get_ex()`.

- Functions `mdbx_cursor_on_first()` and `mdbx_cursor_on_last()`, which check whether the cursor is currently on the first or last position, respectively.

- Automatic creation of steady commit points (flushing data to disk) when the volume of changes reaches a threshold, which can be set by `mdbx_env_set_syncbytes()`.

- Control over debugging and receiving of debug messages via `mdbx_setup_debug()`.

- Function `mdbx_env_pgwalk()` for walking the pages of the DB.

- Three meta-pages instead of two, which guarantees consistency of data when updating weak commit points without the risk of damaging the last steady commit point.

- Guarantee of DB integrity in `WRITEMAP+MAPASYNC` mode: current libmdbx gives a choice between the safe async-write mode (default) and `UTTERLY_NOSYNC` mode, which may result in full DB corruption during a system crash, as with LMDB. For details see the "Durability in asynchronous writing mode" section below.

- Ability to close the DB in a "dirty" state (without flushing data and creating a steady synchronization point) via `mdbx_env_close_ex()`.

- If a read transaction is aborted via `mdbx_txn_abort()` or `mdbx_txn_reset()`, DBI-handles which were opened inside it will not be closed or deleted. In several cases this helps avoid hard-to-debug errors.

- All cursors in all read and write transactions can be reused via `mdbx_cursor_renew()` and MUST be freed explicitly.

  **Caution, please pay attention!** This is the only API change which alters the semantics of cursor management and can lead to memory leaks on misuse. It is a necessary change, as it eliminates ambiguity and thereby helps avoid errors such as:
  - use-after-free;
  - double-free;
  - memory corruption and segfaults.
## Gotchas

- There cannot be more than one writer at a time. This serializes updates and eliminates any possibility of conflicts, deadlocks or logical errors.

- No WAL means a relatively large WAF (Write Amplification Factor). Because of this, syncing data to disk might be quite resource-intensive and the main performance bottleneck during an intensive write workload. As a compromise, libmdbx offers several modes of lazy and/or periodic syncing, including `MAPASYNC` mode, which modifies data in memory and syncs it to disk asynchronously, with the moment of syncing picked by the OS.

  Although this should be used with care: synchronous transactions in a DB with a transaction journal require at least 2 IOPS (probably 3-4 in practice) because of filesystem overhead, which depends on the filesystem, not on the record count or record size. In libmdbx the IOPS count grows logarithmically with the record count in the DB (the height of the B+tree), and at least 2 IOPS per transaction are required as well.

- CoW for MVCC is done at the memory-page level with B+trees. Therefore, altering data requires copying about O(log N) memory pages, which consumes memory bandwidth and is the main performance bottleneck in `MAPASYNC` mode.

  This is unavoidable, but isn't that bad. Syncing data to disk requires many more similar operations, which will be done by the OS; therefore this is noticeable only if syncing data to persistent storage is fully disabled. libmdbx allows saving data to persistent storage safely with minimal performance overhead. If there is no need to save data to persistent storage, it's much more preferable to use `std::map`.

- LMDB has a problem with long-running readers, which degrades performance and bloats the DB. libmdbx addresses that; details below.

- LMDB is susceptible to DB corruption in `WRITEMAP+MAPASYNC` mode. libmdbx in `WRITEMAP+MAPASYNC` guarantees DB integrity and consistency of data. Additionally there is an alternative: `UTTERLY_NOSYNC` mode. Details below.
## Problem of long-time reading

The garbage collection problem exists in all databases one way or another (e.g. VACUUM in PostgreSQL). But in libmdbx and LMDB it's even more discernible because of the high transaction rate and the intentional simplification of internals in favor of performance.

Understanding the problem requires some explanation, which can be difficult to grasp quickly, so it is reasonable to simplify it as follows:
- Massive altering of data during a parallel long-running read operation may exhaust the free DB space.

- If the available space is exhausted, any attempt to update the data will cause a `MAP_FULL` error until the long-running read transaction is completed.

- Good examples of long-running readers are a hot backup, or debugging a client application while an active read transaction is retained.

- In LMDB this results in degraded performance of all operations that write data to persistent storage.

- libmdbx has the `OOM-KICK` mechanism, which allows such operations to be aborted, and the `LIFO RECLAIM` mode, which addresses the performance degradation.
## Durability in asynchronous writing mode

In `WRITEMAP+MAPASYNC` mode, updated (aka dirty) pages are written to persistent storage by the OS kernel. This means that if the application fails, the OS kernel will finish writing all updated data to disk and nothing will be lost.

However, in the case of a hardware malfunction or an OS kernel fatal error, only some of the updated data may reach the disk, and the database structure is likely to be destroyed. In such a situation the DB is completely corrupted and can't be repaired.

libmdbx addresses this by fully reimplementing the data write path:
- In `WRITEMAP+MAPASYNC` mode meta-data pages aren't updated in place; instead, their shadow copies are used, and their updates are synced after the data is flushed to disk.

- During a transaction commit, libmdbx marks it as steady or weak depending on the synchronization status between RAM and persistent storage. For instance, in `WRITEMAP+MAPASYNC` mode committed transactions are marked as weak by default, but as steady after an explicit data flush.

- libmdbx maintains three separate meta-pages instead of two. This allows committing a transaction as steady or weak without losing the two previous commit points (one of which can be steady, and the other weak). Thus, after a fatal system failure it is possible to roll back to the last steady commit point.

- On DB open, libmdbx rolls back to the last steady commit point, which guarantees database integrity after a crash. However, if the database is opened in read-only mode, such a rollback cannot be performed, in which case the `MDBX_WANNA_RECOVERY` error is returned.
For data integrity, the pages which form the database snapshot at a steady commit point must not be updated until the next steady commit point. Therefore the last steady commit point creates an effect analogous to a "long-time read". The only difference is that, in case of space exhaustion, the problem is immediately addressed by writing the changes to disk and forming a new steady commit point.

So in async-write mode libmdbx will keep using new pages until the free DB space is exhausted or `mdbx_env_sync()` is invoked, and the total write traffic to the disk will be the same as in sync-write mode.

Currently libmdbx gives a choice between the safe async-write mode (default) and `UTTERLY_NOSYNC` mode, which may lead to DB corruption after a system crash, i.e. like LMDB.

The next version of libmdbx will automatically create steady commit points in async-write mode upon completion of transferring data to the disk.
## Performance comparison

All benchmarks were done using IOArena and multiple scripts, run on a Lenovo Carbon-2 laptop: i7-4600U 2.1 GHz, 8 GB RAM, SSD SAMSUNG MZNTD512HAGL-000L1 (DXT23L0Q) 512 GB.
### Integral performance

Shown here is the sum of the performance metrics from 3 benchmarks:
- Read/Search on a 4-CPU-core machine;

- Transactions with CRUD operations in sync-write mode (fdatasync is called after each transaction);

- Transactions with CRUD operations in lazy-write mode (the moment to sync data to persistent storage is decided by the OS).
Reasons why asynchronous mode isn't benchmarked here:

- It doesn't make sense, as it would have to be compared with DB engines oriented toward keeping data in memory (e.g. Tarantool, Redis), etc.

- The performance gap is too wide to compare in any meaningful way.
### Read Scalability

Summary performance with concurrent read/search queries in 1-2-4-8 threads on a 4-CPU-core machine.
### Sync-write mode

- The linear scale on the left and the dark rectangles show the arithmetic mean of transactions per second;

- The logarithmic scale on the right is in seconds, and the yellow intervals show the execution time of transactions. Each interval shows the minimal and maximum execution time; the cross marks the standard deviation.

10,000 transactions in sync-write mode. In case of a crash all data is consistent, and the state is as right after the last successful transaction. The fdatasync syscall is used after each write transaction in this mode.

In the benchmark each transaction contains combined CRUD operations (2 inserts, 1 read, 1 update, 1 delete). The benchmark starts on an empty database, and after a full run the database contains 10,000 small key-value records.
### Lazy-write mode

- The linear scale on the left and the dark rectangles show the arithmetic mean of thousands of transactions per second;

- The logarithmic scale on the right is in seconds, and the yellow intervals show the execution time of transactions. Each interval shows the minimal and maximum execution time; the cross marks the standard deviation.

100,000 transactions in lazy-write mode. In case of a crash all data is consistent, and the state is as right after one of the last transactions, but transactions after it will be lost. Other DB engines use a WAL or transaction journal for this, which in turn depends on the order of operations in the journaled filesystem. libmdbx doesn't use a WAL and hands I/O operations over to the filesystem and OS kernel (mmap).

In the benchmark each transaction contains combined CRUD operations (2 inserts, 1 read, 1 update, 1 delete). The benchmark starts on an empty database, and after a full run the database contains 100,000 small key-value records.
### Async-write mode

- The linear scale on the left and the dark rectangles show the arithmetic mean of thousands of transactions per second;

- The logarithmic scale on the right is in seconds, and the yellow intervals show the execution time of transactions. Each interval shows the minimal and maximum execution time; the cross marks the standard deviation.

1,000,000 transactions in async-write mode. In case of a crash all data will be consistent, and the state will be as right after one of the last transactions, but the number of lost transactions is much higher than in lazy-write mode. All DB engines in this mode perform as few writes to persistent storage as possible. libmdbx uses msync(MS_ASYNC) in this mode.

In the benchmark each transaction contains combined CRUD operations (2 inserts, 1 read, 1 update, 1 delete). The benchmark starts on an empty database, and after a full run the database contains 10,000 small key-value records.
### Cost comparison

Summary of the resources used during the lazy-write mode benchmarks:

- Read and write IOPS;

- Sum of user CPU time and sys CPU time;

- Space used on persistent storage after the test with the DB closed, but without waiting for the end of all internal housekeeping operations (LSM compactification, etc).

ForestDB is excluded because the benchmark showed that its consumption of each resource (CPU, IOPS) is much higher than that of the other engines, which prevents comparing it with them meaningfully.

All benchmark data is gathered with the getrusage() syscall and by scanning the data directory.
```
$ objdump -f -h -j .text libmdbx.so

libmdbx.so:     file format elf64-x86-64
architecture: i386:x86-64, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x0000000000003870

Sections:
Idx Name          Size      VMA               LMA               File off  Algn
 11 .text         000173d4  0000000000003870  0000000000003870  00003870  2**4
                  CONTENTS, ALLOC, LOAD, READONLY, CODE
```