mirror of https://github.com/isar/libmdbx.git
synced 2025-01-31 10:58:20 +08:00

mdbx: merge branch 'master' into devel

This commit is contained in:
commit b29c15f919

ChangeLog.md (24 changed lines)
@@ -1,19 +1,19 @@
 ChangeLog
 ---------

-## v0.12.1 (scheduled to 2022-08-24)
+## v0.12.1 (Positive Proxima) scheduled to 2022-08-24

-The release with set of new features.
+The planned frontward release with new superior features on the day of 20 anniversary of [Positive Technologies](https://ptsecurty.com).

 New:

-- Added the `Big Foot` feature which significantly reduces GC overhead for processing large lists of retired pages from huge transactions.
+- The `Big Foot` feature which significantly reduces GC overhead for processing large lists of retired pages from huge transactions.
   Now _libmdbx_ avoid creating large chunks of PNLs (page number lists) which required a long sequences of free pages, aka large/overflow pages.
   Thus avoiding searching, allocating and storing such sequences inside GC.
-- Added the `gcrtime_seconds16dot16` counter to the "Page Operation Statistics" that accumulates time spent for GC searching and reclaiming.
-- Added the `MDBX_VALIDATION` environment options to extra validation of DB structure and pages content for carefully/safe handling damaged or untrusted DB.
 - Improved hot/online validation and checking of database pages both for more robustness and performance.
-- Added optionally cache for pointers to last/steady meta-pages (currently is off by default).
+- New `MDBX_VALIDATION` environment options to extra validation of DB structure and pages content for carefully/safe handling damaged or untrusted DB.
+- Optionally cache for pointers to last/steady meta-pages (currently is off by default).
+- Added the `gcrtime_seconds16dot16` counter to the "Page Operation Statistics" that accumulates time spent for GC searching and reclaiming.
 - Copy-with-compactification now clears/zeroes unused gaps inside database pages.

 ## v0.12.0 at 2022-06-19
@@ -24,9 +24,10 @@ Not a release but preparation for changing feature set and API.
 -------------------------------------------------------------------------------


-## v0.11.9 (scheduled to 2022-08-02)
+## v0.11.9 (Чирчик-1992) scheduled to 2022-08-02

 The stable bugfix release.
+It is planned that this will be the last release of the v0.11 branch.

 New:
@@ -43,12 +44,15 @@ Minors:
 - Fixed copy&paste typo inside `meta_checktxnid()`.
 - Minor fix `meta_checktxnid()` to avoid assertion in debug mode.
 - Minor fix `mdbx_env_set_geometry()` to avoid returning `EINVAL` in particular rare cases.
+- Minor refine/fix batch-get testcase for large page size.
+- Added `--pagesize NN` option to the long-stochastic test script.
+- Updated Valgrind-suppressions file for modern GCC.


 -------------------------------------------------------------------------------


-## v0.11.8 at 2022-06-12
+## v0.11.8 (Baked Apple) at 2022-06-12

 The stable release with an important fixes and workaround for the critical macOS thread-local-storage issue.

@@ -74,6 +78,7 @@ Fixes:
 - Fixed `mdbx_check_fs_local()` for CDROM case on Windows.
 - Fixed nasty typo of typename which caused false `MDBX_CORRUPTED` error in a rare execution path,
   when the size of the thread-ID type not equal to 8.
+- Fixed Elbrus/E2K LCC 1.26 compiler warnings (memory model for atomic operations, etc).
 - Fixed write-after-free memory corruption on latest `macOS` during finalization/cleanup of thread(s) that executed read transaction(s).
 > The issue was suddenly discovered by a [CI](https://en.wikipedia.org/wiki/Continuous_integration)
 > after adding an iteration with macOS 11 "Big Sur", and then reproduced on recent release of macOS 12 "Monterey".
@@ -85,7 +90,6 @@ Fixes:
 > This is unexpected crazy-like behavior since the order of resources releasing/destroying
 > is not the reverse of ones acquiring/construction order. Nonetheless such surprise
 > is now workarounded by using atomic compare-and-swap operations on a 64-bit signatures/cookies.
-- Fixed Elbrus/E2K LCC 1.26 compiler warnings (memory model for atomic operations, etc).

 Minors:

@@ -101,7 +105,7 @@ Minors:
 -------------------------------------------------------------------------------


-## v0.11.7 at 2022-04-22
+## v0.11.7 (Resurrected Sarmat) at 2022-04-22

 The stable risen release after the Github's intentional malicious disaster.


README.md (12 changed lines)
@@ -394,20 +394,20 @@ since release the version 1.0.

 _libmdbx_ provides two official ways for integration in source code form:

-1. Using the amalgamated source code.
+1. Using an amalgamated source code which available in the [releases section](https://gitflic.ru/project/erthink/libmdbx/release) on GitFlic.
-   > The amalgamated source code includes all files required to build and
+   > An amalgamated source code includes all files required to build and
    > use _libmdbx_, but not for testing _libmdbx_ itself.
+   > Beside the releases an amalgamated sources could be created any time from the original clone of git
+   > repository on Linux by executing `make dist`. As a result, the desired
+   > set of files will be formed in the `dist` subdirectory.

-2. Adding the complete original source code as a `git submodule`.
+2. Adding the complete source code as a `git submodule` from the [origin git repository](https://gitflic.ru/project/erthink/libmdbx) on GitFlic.
    > This allows you to build as _libmdbx_ and testing tool.
    > On the other hand, this way requires you to pull git tags, and use C++11 compiler for test tool.

 _**Please, avoid using any other techniques.**_ Otherwise, at least
 don't ask for support and don't name such chimeras `libmdbx`.

-The amalgamated source code could be created from the original clone of git
-repository on Linux by executing `make dist`. As a result, the desired
-set of files will be formed in the `dist` subdirectory.


 ## Building and Testing
test script (filename not shown in this view)
@@ -14,6 +14,7 @@ GEOMETRY_JITTER=yes
 BANNER="$(which banner 2>/dev/null | echo echo)"
 UNAME="$(uname -s 2>/dev/null || echo Unknown)"
 DB_UPTO_MB=17408
+PAGESIZE=min

 while [ -n "$1" ]
 do
@@ -33,6 +34,7 @@ do
 		echo "--dir PATH            Specifies directory for test DB and other files (it will be cleared)"
 		echo "--db-upto-mb NN       Limits upper size of test DB to the NN megabytes"
 		echo "--no-geometry-jitter  Disable jitter for geometry upper-size"
+		echo "--pagesize NN         Use specified page size (256 is minimal and used by default)"
 		echo "--help                Print this usage help and exit"
 		exit -2
 		;;
@@ -109,6 +111,39 @@ do
 	--no-geometry-jitter)
 		GEOMETRY_JITTER=no
 		;;
+	--pagesize)
+		case "$2" in
+		min|max|256|512|1024|2048|4096|8192|16384|32768|65536)
+			PAGESIZE=$2
+			;;
+		1|1k|1K|k|K)
+			PAGESIZE=$((1024*1))
+			;;
+		2|2k|2K)
+			PAGESIZE=$((1024*2))
+			;;
+		4|4k|4K)
+			PAGESIZE=$((1024*4))
+			;;
+		8|8k|8K)
+			PAGESIZE=$((1024*8))
+			;;
+		16|16k|16K)
+			PAGESIZE=$((1024*16))
+			;;
+		32|32k|32K)
+			PAGESIZE=$((1024*32))
+			;;
+		64|64k|64K)
+			PAGESIZE=$((1024*64))
+			;;
+		*)
+			echo "Invalid page size '$2'"
+			exit -2
+			;;
+		esac
+		shift
+		;;
 	*)
 		echo "Unknown option '$1'"
 		exit -2
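The normalization implemented by the added `case` block can be modeled standalone. A minimal Python sketch under the same mapping (the `normalize_pagesize` helper name is hypothetical, not part of the script):

```python
def normalize_pagesize(arg):
    """Map a --pagesize argument to bytes, or pass 'min'/'max' through."""
    exact = {"256", "512", "1024", "2048", "4096", "8192",
             "16384", "32768", "65536"}
    if arg in ("min", "max"):
        return arg
    if arg in exact:
        return int(arg)
    if arg in ("k", "K"):  # bare k/K means 1 KiB, as in the script
        return 1024
    # "Nk"/"NK" suffixes and bare small multipliers mean N KiB
    multiplier = arg[:-1] if arg and arg[-1] in "kK" else arg
    if multiplier in ("1", "2", "4", "8", "16", "32", "64"):
        return int(multiplier) * 1024
    raise ValueError("Invalid page size %r" % arg)
```

Note that the correct 16-KiB literal is `16384`, consistent with the `16|16k|16K` branch computing `$((1024*16))`.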
@@ -346,65 +381,65 @@ for nops in 10 33 100 333 1000 3333 10000 33333 100000 333333 1000000 3333333 10

 	split=30
 	caption="Probe #$((++count)) int-key,with-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) int-key,int-data, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.integer --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.integer --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) with-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}

 	split=24
 	caption="Probe #$((++count)) int-key,with-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) int-key,int-data, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.integer --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.integer --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) with-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}

 	split=16
 	caption="Probe #$((++count)) int-key,w/o-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,-data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=1111 \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,-data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=1111 \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) int-key,with-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) int-key,int-data, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.integer --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.integer --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) w/o-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=-data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=1111 \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=-data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=1111 \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) with-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}

 	split=4
 	caption="Probe #$((++count)) int-key,w/o-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,-data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=1111 \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,-data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=1111 \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) int-key,int-data, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.integer --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=+key.integer,+data.integer --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=max \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 	caption="Probe #$((++count)) w/o-dups, split=${split}, case $((++subcase)) of ${cases}" probe \
-		--pagesize=min --size-upper-upto=${db_size_mb}M --table=-data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=1111 \
+		--pagesize=$PAGESIZE --size-upper-upto=${db_size_mb}M --table=-data.dups --keygen.split=${split} --keylen.min=min --keylen.max=max --datalen.min=min --datalen.max=1111 \
 		--nops=$nops --batch.write=$wbatch --mode=$(bits2options $bits)${syncmodes[count%3]} \
 		--keygen.seed=${seed}
 done # options
test/test.cc (66 changed lines)
@@ -1189,28 +1189,29 @@ bool testcase::check_batch_get() {
   char dump_key[128], dump_value[128];
   char dump_key_batch[128], dump_value_batch[128];

-  MDBX_cursor *cursor;
-  int err = mdbx_cursor_open(txn_guard.get(), dbi, &cursor);
-  if (err != MDBX_SUCCESS)
-    failure_perror("mdbx_cursor_open()", err);
+  MDBX_cursor *check_cursor;
+  int check_err = mdbx_cursor_open(txn_guard.get(), dbi, &check_cursor);
+  if (check_err != MDBX_SUCCESS)
+    failure_perror("mdbx_cursor_open()", check_err);

   MDBX_cursor *batch_cursor;
-  err = mdbx_cursor_open(txn_guard.get(), dbi, &batch_cursor);
-  if (err != MDBX_SUCCESS)
-    failure_perror("mdbx_cursor_open()", err);
+  int batch_err = mdbx_cursor_open(txn_guard.get(), dbi, &batch_cursor);
+  if (batch_err != MDBX_SUCCESS)
+    failure_perror("mdbx_cursor_open()", batch_err);

+  bool rc = true;
   MDBX_val pairs[42];
   size_t count = 0xDeadBeef;
-  err = mdbx_cursor_get_batch(batch_cursor, &count, pairs, ARRAY_LENGTH(pairs),
-                              MDBX_FIRST);
-  bool rc = true;
+  MDBX_cursor_op batch_op;
+  batch_err = mdbx_cursor_get_batch(batch_cursor, &count, pairs,
+                                    ARRAY_LENGTH(pairs), batch_op = MDBX_FIRST);
   size_t i, n = 0;
-  while (err == MDBX_SUCCESS) {
+  while (batch_err == MDBX_SUCCESS || batch_err == MDBX_RESULT_TRUE) {
     for (i = 0; i < count; i += 2) {
       mdbx::slice k, v;
-      int err2 = mdbx_cursor_get(cursor, &k, &v, MDBX_NEXT);
-      if (err2 != MDBX_SUCCESS)
-        failure_perror("mdbx_cursor_open()", err2);
+      check_err = mdbx_cursor_get(check_cursor, &k, &v, MDBX_NEXT);
+      if (check_err != MDBX_SUCCESS)
+        failure_perror("batch-verify: mdbx_cursor_get(MDBX_NEXT)", check_err);
       if (k != pairs[i] || v != pairs[i + 1]) {
         log_error(
             "batch-get pair mismatch %zu/%zu: sequential{%s, %s} != "
@@ -1224,29 +1225,32 @@ bool testcase::check_batch_get() {
       }
     }
     n += i / 2;
-    err = mdbx_cursor_get_batch(batch_cursor, &count, pairs,
-                                ARRAY_LENGTH(pairs), MDBX_NEXT);
+    batch_op = (batch_err == MDBX_RESULT_TRUE) ? MDBX_GET_CURRENT : MDBX_NEXT;
+    batch_err = mdbx_cursor_get_batch(batch_cursor, &count, pairs,
+                                      ARRAY_LENGTH(pairs), batch_op);
   }
-  if (err != MDBX_NOTFOUND)
-    failure_perror("mdbx_cursor_get_batch()", err);
-
-  err = mdbx_cursor_eof(batch_cursor);
-  if (err != MDBX_RESULT_TRUE) {
-    log_error("batch-get %s cursor not-eof %d", "batch", err);
-    rc = false;
-  }
-  err = mdbx_cursor_on_last(batch_cursor);
-  if (err != MDBX_RESULT_TRUE) {
-    log_error("batch-get %s cursor not-on-last %d", "batch", err);
+  if (batch_err != MDBX_NOTFOUND) {
+    log_error("mdbx_cursor_get_batch(), op %u, err %d", batch_op, batch_err);
     rc = false;
   }

-  err = mdbx_cursor_on_last(cursor);
-  if (err != MDBX_RESULT_TRUE) {
-    log_error("batch-get %s cursor not-on-last %d", "checked", err);
+  batch_err = mdbx_cursor_eof(batch_cursor);
+  if (batch_err != MDBX_RESULT_TRUE) {
+    log_error("batch-get %s-cursor not-eof %d", "batch", batch_err);
     rc = false;
   }
-  mdbx_cursor_close(cursor);
+  batch_err = mdbx_cursor_on_last(batch_cursor);
+  if (batch_err != MDBX_RESULT_TRUE) {
+    log_error("batch-get %s-cursor not-on-last %d", "batch", batch_err);
+    rc = false;
+  }
+
+  check_err = mdbx_cursor_on_last(check_cursor);
+  if (check_err != MDBX_RESULT_TRUE) {
+    log_error("batch-get %s-cursor not-on-last %d", "checked", check_err);
+    rc = false;
+  }
+  mdbx_cursor_close(check_cursor);
   mdbx_cursor_close(batch_cursor);
   return rc;
 }
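The rewritten `check_batch_get()` above cross-checks a batched read against an item-by-item read of the same ordered data. Stripped of the MDBX API, the idea can be sketched in plain Python (`batch_reader` is an illustrative stand-in for `mdbx_cursor_get_batch`; the iterator models the check-cursor stepping `MDBX_NEXT`):

```python
def batch_reader(items, limit):
    """Yield chunks of at most `limit` pairs per call."""
    for start in range(0, len(items), limit):
        yield items[start:start + limit]

data = sorted({b"a": b"1", b"b": b"2", b"c": b"3", b"d": b"4"}.items())
sequential = iter(data)  # the item-by-item "check" cursor

ok, seen = True, 0
for chunk in batch_reader(data, limit=3):
    for pair in chunk:
        if pair != next(sequential):  # a mismatch would flag a bug
            ok = False
        seen += 1

assert ok and seen == len(data)
```

As in the testcase, the invariant is that both readers observe exactly the same pairs in the same order, and both are exhausted at the same point.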
Valgrind suppressions file (filename not shown in this view)
@@ -4,7 +4,7 @@
    msync(start)
    fun:msync
    ...
-   fun:mdbx_sync_locked
+   fun:mdbx_sync_locked*
 }
 {
    msync-whole-mmap-2
@@ -12,7 +12,7 @@
    msync(start)
    fun:msync
    ...
-   fun:mdbx_env_sync_internal
+   fun:mdbx_env_sync_internal*
 }
 {
    msync-whole-mmap-3
@@ -20,7 +20,7 @@
    msync(start)
    fun:msync
    ...
-   fun:mdbx_mapresize
+   fun:mdbx_mapresize*
 }
 {
    msync-wipe-steady
@@ -28,7 +28,7 @@
    msync(start)
    fun:msync
    ...
-   fun:mdbx_wipe_steady
+   fun:mdbx_wipe_steady*
 }

 # memcmp() inside mdbx_iov_write() as workaround for todo4recovery://erased_by_github/libmdbx/issues/269
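The trailing `*` added above turns each `fun:` frame into a glob: Valgrind suppression frames accept `*` and `?` wildcards, so optimized clones that modern GCC emits under suffixed symbol names (for example `mdbx_sync_locked.constprop.0`) still match. A minimal suppression entry of this shape, for reference (the entry name and the `Memcheck:Param` kind are assumptions inferred from the `msync(start)` parameter line shown in the hunks):

```
{
   msync-whole-mmap-1
   Memcheck:Param
   msync(start)
   fun:msync
   ...
   fun:mdbx_sync_locked*
}
```

The `...` frame is Valgrind's own wildcard for any number of intervening stack frames between `msync` and the mdbx caller.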