mdbx: use the term "table" instead of "sub-database".

Леонид Юрьев (Leonid Yuriev) 2024-08-03 13:25:44 +03:00
parent dd5329c164
commit 57e558a57d
33 changed files with 430 additions and 429 deletions


@ -138,7 +138,7 @@ if(EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/.git" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/src/sort.h" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/src/spill.c" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/src/spill.h" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/src/subdb.c" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/src/table.c" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/src/tls.c" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/src/tls.h" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/src/tools/chk.c" AND
@ -749,7 +749,7 @@ else()
"${MDBX_SOURCE_DIR}/sort.h"
"${MDBX_SOURCE_DIR}/spill.c"
"${MDBX_SOURCE_DIR}/spill.h"
"${MDBX_SOURCE_DIR}/subdb.c"
"${MDBX_SOURCE_DIR}/table.c"
"${MDBX_SOURCE_DIR}/tls.c"
"${MDBX_SOURCE_DIR}/tls.h"
"${MDBX_SOURCE_DIR}/tree.c"


@ -75,7 +75,7 @@ and [by Yandex](https://translated.turbopages.org/proxy_u/ru-en.en/https/gitflic
- The `mdbx_preopen_snapinfo()` function for getting information about a DB
without opening it.
- The `mdbx_enumerate_subdb()` function for getting information
- The `mdbx_enumerate_tables()` function for getting information
about named user tables.
- Support for callback logging functions without the functionality
@ -131,6 +131,7 @@ and [by Yandex](https://translated.turbopages.org/proxy_u/ru-en.en/https/gitflic
Compatibility breaking:
- Use of the term "table" instead of "subDb".
- The `MDBX_COALESCE` option is deprecated, since the corresponding functionality is always enabled starting from the previous version 0.12.
- The `MDBX_NOTLS` option is deprecated and replaced by `MDBX_NOSTICKYTHREADS`.
- The `MDBX_USE_VALGRIND` build option is replaced by the conventional `ENABLE_MEMCHECK`.


@ -160,7 +160,7 @@ $ cc --version
[MVCC](https://en.wikipedia.org/wiki/Multiversion_concurrency_control)
and [CoW](https://en.wikipedia.org/wiki/Copy-on-write).
- Multiple key-value sub-databases within a single datafile.
- Multiple key-value tables/sub-databases within a single datafile.
- Range lookups, including range query estimation.
@ -204,7 +204,7 @@ transaction journal. No crash recovery needed. No maintenance is required.
- **Value size**: minimum `0`, maximum `2146435072` (`0x7FF00000`) bytes for maps, ≈½ pagesize for multimaps (`2022` bytes for default 4K pagesize, `32742` bytes for 64K pagesize).
- **Write transaction size**: up to `1327217884` pages (`4.944272` TiB for default 4K pagesize, `79.108351` TiB for 64K pagesize).
- **Database size**: up to `2147483648` pages (≈`8.0` TiB for default 4K pagesize, ≈`128.0` TiB for 64K pagesize).
- **Maximum sub-databases**: `32765`.
- **Maximum tables/sub-databases**: `32765`.
## Gotchas
@ -298,7 +298,7 @@ and/or optimize query execution plans.
11. Ability to determine whether particular data is on a dirty page
or not, which allows avoiding copy-out before updates.
12. Extended information of whole-database, sub-databases, transactions, readers enumeration.
12. Extended information of whole-database, tables/sub-databases, transactions, readers enumeration.
> _libmdbx_ provides a lot of information, including dirty and leftover pages
> for a write transaction, reading lag and holdover space for read transactions.
@ -321,7 +321,7 @@ pair, to the first, to the last, or not set to anything.
## Other fixes and specifics
1. Fixed more than 10 significant errors, in particular: page leaks,
wrong sub-database statistics, segfault in several conditions,
wrong table/sub-database statistics, segfault in several conditions,
nonoptimal page merge strategy, updating an existing record with
a change in data size (including for multimap), etc.


@ -14,12 +14,12 @@ So currently most of the links are broken due to noted malicious ~~Github~~ sabo
- [Migration guide from LMDB to MDBX](https://libmdbx.dqdkfa.ru/dead-github/issues/199).
- [Support for RAW devices](https://libmdbx.dqdkfa.ru/dead-github/issues/124).
- [Support MessagePack for Keys & Values](https://libmdbx.dqdkfa.ru/dead-github/issues/115).
- [Engage new terminology](https://libmdbx.dqdkfa.ru/dead-github/issues/137).
- Packages for [Astra Linux](https://astralinux.ru/), [ALT Linux](https://www.altlinux.org/), [ROSA Linux](https://www.rosalinux.ru/), etc.
Done
----
- [Engage new terminology](https://libmdbx.dqdkfa.ru/dead-github/issues/137).
- [More flexible support of asynchronous runtime/framework(s)](https://libmdbx.dqdkfa.ru/dead-github/issues/200).
- [Move most of `mdbx_chk` functional to the library API](https://libmdbx.dqdkfa.ru/dead-github/issues/204).
- [Simple careful mode for working with corrupted DB](https://libmdbx.dqdkfa.ru/dead-github/issues/223).
@ -37,6 +37,6 @@ Canceled
the OS. To do this, the mapping must be removed, the file size changed, and
then the file mapped again. This, in turn, requires suspending the threads
working with the DB that are executing read transactions, or are ready to
do so. But in MDBX_NOSTICKYTHREADS mode there is no way to track the
threads working with the DB, and suspending all threads is unacceptable
for most applications.

mdbx.h (428 changed lines)

File diff suppressed because it is too large


@ -3537,8 +3537,8 @@ enum put_mode {
/// instances, but does not destroy the represented underlying object from its
/// own class destructor.
///
/// An environment supports multiple key-value sub-databases (aka key-value
/// spaces or tables), all residing in the same shared-memory map.
/// An environment supports multiple key-value tables (aka key-value
/// maps, spaces or sub-databases), all residing in the same shared-memory map.
class LIBMDBX_API_TYPE env {
friend class txn;
@ -4101,7 +4101,7 @@ public:
/// environment is busy by other thread or none of the thresholds are reached.
bool poll_sync_to_disk() { return sync_to_disk(false, true); }
/// \brief Close a key-value map (aka sub-database) handle. Normally
/// \brief Close a key-value map (aka table) handle. Normally
/// unnecessary.
///
/// Closing a database handle is not necessary, but lets \ref txn::open_map()
@ -4519,12 +4519,12 @@ public:
#endif /* __cpp_lib_string_view >= 201606L */
using map_stat = ::MDBX_stat;
/// \brief Returns statistics for a sub-database.
/// \brief Returns statistics for a table.
inline map_stat get_map_stat(map_handle map) const;
/// \brief Returns depth (bitmask) information of nested dupsort (multi-value)
/// B+trees for given database.
inline uint32_t get_tree_deepmask(map_handle map) const;
/// \brief Returns information about key-value map (aka sub-database) handle.
/// \brief Returns information about key-value map (aka table) handle.
inline map_handle::info get_handle_info(map_handle map) const;
using canary = ::MDBX_canary;
@ -4536,39 +4536,39 @@ public:
inline canary get_canary() const;
/// Reads sequence generator associated with a key-value map (aka
/// sub-database).
/// table).
inline uint64_t sequence(map_handle map) const;
/// \brief Reads and increment sequence generator associated with a key-value
/// map (aka sub-database).
/// map (aka table).
inline uint64_t sequence(map_handle map, uint64_t increment);
/// \brief Compare two keys according to a particular key-value map (aka
/// sub-database).
/// table).
inline int compare_keys(map_handle map, const slice &a,
const slice &b) const noexcept;
/// \brief Compare two values according to a particular key-value map (aka
/// sub-database).
/// table).
inline int compare_values(map_handle map, const slice &a,
const slice &b) const noexcept;
/// \brief Compare keys of two pairs according to a particular key-value map
/// (aka sub-database).
/// (aka table).
inline int compare_keys(map_handle map, const pair &a,
const pair &b) const noexcept;
/// \brief Compare values of two pairs according to a particular key-value map
/// (aka sub-database).
/// (aka table).
inline int compare_values(map_handle map, const pair &a,
const pair &b) const noexcept;
/// \brief Get value by key from a key-value map (aka sub-database).
/// \brief Get value by key from a key-value map (aka table).
inline slice get(map_handle map, const slice &key) const;
/// \brief Get first of multi-value and values count by key from a key-value
/// multimap (aka sub-database).
/// multimap (aka table).
inline slice get(map_handle map, slice key, size_t &values_count) const;
/// \brief Get value by key from a key-value map (aka sub-database).
/// \brief Get value by key from a key-value map (aka table).
inline slice get(map_handle map, const slice &key,
const slice &value_at_absence) const;
/// \brief Get first of multi-value and values count by key from a key-value
/// multimap (aka sub-database).
/// multimap (aka table).
inline slice get(map_handle map, slice key, size_t &values_count,
const slice &value_at_absence) const;
/// \brief Get value for equal or greater key from a database.


@ -41,7 +41,7 @@
#include "range-estimate.c"
#include "refund.c"
#include "spill.c"
#include "subdb.c"
#include "table.c"
#include "tls.c"
#include "tree.c"
#include "txl.c"


@ -599,7 +599,7 @@ int mdbx_cursor_get_batch(MDBX_cursor *mc, size_t *count, MDBX_val *pairs,
return MDBX_BAD_DBI;
if (unlikely(mc->subcur))
return MDBX_INCOMPATIBLE /* must be a non-dupsort subDB */;
return MDBX_INCOMPATIBLE /* must be a non-dupsort table */;
switch (op) {
case MDBX_NEXT:


@ -81,7 +81,7 @@ __cold static int audit_ex_locked(MDBX_txn *txn, size_t retired_stored,
ctx.used = NUM_METAS + audit_db_used(dbi_dig(txn, FREE_DBI, nullptr)) +
audit_db_used(dbi_dig(txn, MAIN_DBI, nullptr));
rc = mdbx_enumerate_subdb(txn, audit_dbi, &ctx);
rc = mdbx_enumerate_tables(txn, audit_dbi, &ctx);
tASSERT(txn, rc == MDBX_SUCCESS);
for (size_t dbi = CORE_DBS; dbi < txn->n_dbi; ++dbi) {

src/chk.c (170 changed lines)

@ -14,12 +14,12 @@ typedef struct MDBX_chk_internal {
bool write_locked;
uint8_t scope_depth;
MDBX_chk_subdb_t subdb_gc, subdb_main;
MDBX_chk_table_t table_gc, table_main;
int16_t *pagemap;
MDBX_chk_subdb_t *last_lookup;
MDBX_chk_table_t *last_lookup;
const void *last_nested;
MDBX_chk_scope_t scope_stack[12];
MDBX_chk_subdb_t *subdb[MDBX_MAX_DBI + CORE_DBS];
MDBX_chk_table_t *table[MDBX_MAX_DBI + CORE_DBS];
MDBX_envinfo envinfo;
troika_t troika;
@ -485,17 +485,17 @@ __cold static const char *chk_v2a(MDBX_chk_internal_t *chk,
}
__cold static void chk_dispose(MDBX_chk_internal_t *chk) {
assert(chk->subdb[FREE_DBI] == &chk->subdb_gc);
assert(chk->subdb[MAIN_DBI] == &chk->subdb_main);
for (size_t i = 0; i < ARRAY_LENGTH(chk->subdb); ++i) {
MDBX_chk_subdb_t *const sdb = chk->subdb[i];
assert(chk->table[FREE_DBI] == &chk->table_gc);
assert(chk->table[MAIN_DBI] == &chk->table_main);
for (size_t i = 0; i < ARRAY_LENGTH(chk->table); ++i) {
MDBX_chk_table_t *const sdb = chk->table[i];
if (sdb) {
chk->subdb[i] = nullptr;
if (chk->cb->subdb_dispose && sdb->cookie) {
chk->cb->subdb_dispose(chk->usr, sdb);
chk->table[i] = nullptr;
if (chk->cb->table_dispose && sdb->cookie) {
chk->cb->table_dispose(chk->usr, sdb);
sdb->cookie = nullptr;
}
if (sdb != &chk->subdb_gc && sdb != &chk->subdb_main) {
if (sdb != &chk->table_gc && sdb != &chk->table_main) {
osal_free(sdb);
}
}
@ -640,7 +640,7 @@ histogram_print(MDBX_chk_scope_t *scope, MDBX_chk_line_t *line,
//-----------------------------------------------------------------------------
__cold static int chk_get_sdb(MDBX_chk_scope_t *const scope,
const walk_sdb_t *in, MDBX_chk_subdb_t **out) {
const walk_sdb_t *in, MDBX_chk_table_t **out) {
MDBX_chk_internal_t *const chk = scope->internal;
if (chk->last_lookup &&
chk->last_lookup->name.iov_base == in->name.iov_base) {
@ -648,15 +648,15 @@ __cold static int chk_get_sdb(MDBX_chk_scope_t *const scope,
return MDBX_SUCCESS;
}
for (size_t i = 0; i < ARRAY_LENGTH(chk->subdb); ++i) {
MDBX_chk_subdb_t *sdb = chk->subdb[i];
for (size_t i = 0; i < ARRAY_LENGTH(chk->table); ++i) {
MDBX_chk_table_t *sdb = chk->table[i];
if (!sdb) {
sdb = osal_calloc(1, sizeof(MDBX_chk_subdb_t));
sdb = osal_calloc(1, sizeof(MDBX_chk_table_t));
if (unlikely(!sdb)) {
*out = nullptr;
return chk_error_rc(scope, MDBX_ENOMEM, "alloc_subDB");
return chk_error_rc(scope, MDBX_ENOMEM, "alloc_table");
}
chk->subdb[i] = sdb;
chk->table[i] = sdb;
sdb->flags = in->internal->flags;
sdb->id = -1;
sdb->name = in->name;
@ -665,16 +665,16 @@ __cold static int chk_get_sdb(MDBX_chk_scope_t *const scope,
if (sdb->id < 0) {
sdb->id = (int)i;
sdb->cookie =
chk->cb->subdb_filter
? chk->cb->subdb_filter(chk->usr, &sdb->name, sdb->flags)
chk->cb->table_filter
? chk->cb->table_filter(chk->usr, &sdb->name, sdb->flags)
: (void *)(intptr_t)-1;
}
*out = (chk->last_lookup = sdb);
return MDBX_SUCCESS;
}
}
chk_scope_issue(scope, "too many subDBs > %u",
(unsigned)ARRAY_LENGTH(chk->subdb) - CORE_DBS - /* meta */ 1);
chk_scope_issue(scope, "too many tables > %u",
(unsigned)ARRAY_LENGTH(chk->table) - CORE_DBS - /* meta */ 1);
*out = nullptr;
return MDBX_PROBLEM;
}
@ -751,7 +751,7 @@ chk_pgvisitor(const size_t pgno, const unsigned npages, void *const ctx,
MDBX_chk_context_t *const usr = chk->usr;
MDBX_env *const env = usr->env;
MDBX_chk_subdb_t *sdb;
MDBX_chk_table_t *sdb;
int err = chk_get_sdb(scope, sdb_info, &sdb);
if (unlikely(err))
return err;
@ -773,7 +773,7 @@ chk_pgvisitor(const size_t pgno, const unsigned npages, void *const ctx,
height -= sdb_info->internal->height;
else {
chk_object_issue(scope, "nested tree", pgno, "unexpected",
"subDb %s flags 0x%x, deep %i", chk_v2a(chk, &sdb->name),
"table %s flags 0x%x, deep %i", chk_v2a(chk, &sdb->name),
sdb->flags, deep);
nested = nullptr;
}
@ -804,7 +804,7 @@ chk_pgvisitor(const size_t pgno, const unsigned npages, void *const ctx,
histogram_acc(npages, &sdb->histogram.large_pages);
if (sdb->flags & MDBX_DUPSORT)
chk_object_issue(scope, "page", pgno, "unexpected",
"type %u, subDb %s flags 0x%x, deep %i",
"type %u, table %s flags 0x%x, deep %i",
(unsigned)pagetype, chk_v2a(chk, &sdb->name), sdb->flags,
deep);
break;
@ -821,7 +821,7 @@ chk_pgvisitor(const size_t pgno, const unsigned npages, void *const ctx,
case page_dupfix_leaf:
if (!nested)
chk_object_issue(scope, "page", pgno, "unexpected",
"type %u, subDb %s flags 0x%x, deep %i",
"type %u, table %s flags 0x%x, deep %i",
(unsigned)pagetype, chk_v2a(chk, &sdb->name), sdb->flags,
deep);
/* fall through */
@ -832,7 +832,7 @@ chk_pgvisitor(const size_t pgno, const unsigned npages, void *const ctx,
sdb->pages.leaf += 1;
if (height != sdb_info->internal->height)
chk_object_issue(scope, "page", pgno, "wrong tree height",
"actual %i != %i subDb %s", height,
"actual %i != %i table %s", height,
sdb_info->internal->height, chk_v2a(chk, &sdb->name));
} else {
pagetype_caption =
@ -855,7 +855,7 @@ chk_pgvisitor(const size_t pgno, const unsigned npages, void *const ctx,
sdb->pages.nested_subleaf += 1;
if ((sdb->flags & MDBX_DUPSORT) == 0 || nested)
chk_object_issue(scope, "page", pgno, "unexpected",
"type %u, subDb %s flags 0x%x, deep %i",
"type %u, table %s flags 0x%x, deep %i",
(unsigned)pagetype, chk_v2a(chk, &sdb->name), sdb->flags,
deep);
break;
@ -888,8 +888,8 @@ chk_pgvisitor(const size_t pgno, const unsigned npages, void *const ctx,
deep);
sdb->pages.all += 1;
} else if (chk->pagemap[spanpgno]) {
const MDBX_chk_subdb_t *const rival =
chk->subdb[chk->pagemap[spanpgno] - 1];
const MDBX_chk_table_t *const rival =
chk->table[chk->pagemap[spanpgno] - 1];
chk_object_issue(scope, "page", spanpgno,
(branch && rival == sdb) ? "loop" : "already used",
"%s-page: by %s, deep %i", pagetype_caption,
@ -978,11 +978,11 @@ __cold static int chk_tree(MDBX_chk_scope_t *const scope) {
if (!chk->pagemap[n])
usr->result.unused_pages += 1;
MDBX_chk_subdb_t total;
MDBX_chk_table_t total;
memset(&total, 0, sizeof(total));
total.pages.all = NUM_METAS;
for (size_t i = 0; i < ARRAY_LENGTH(chk->subdb) && chk->subdb[i]; ++i) {
MDBX_chk_subdb_t *const sdb = chk->subdb[i];
for (size_t i = 0; i < ARRAY_LENGTH(chk->table) && chk->table[i]; ++i) {
MDBX_chk_table_t *const sdb = chk->table[i];
total.payload_bytes += sdb->payload_bytes;
total.lost_bytes += sdb->lost_bytes;
total.pages.all += sdb->pages.all;
@ -1007,8 +1007,8 @@ __cold static int chk_tree(MDBX_chk_scope_t *const scope) {
err = chk_scope_restore(scope, err);
if (scope->verbosity > MDBX_chk_info) {
for (size_t i = 0; i < ARRAY_LENGTH(chk->subdb) && chk->subdb[i]; ++i) {
MDBX_chk_subdb_t *const sdb = chk->subdb[i];
for (size_t i = 0; i < ARRAY_LENGTH(chk->table) && chk->table[i]; ++i) {
MDBX_chk_table_t *const sdb = chk->table[i];
MDBX_chk_scope_t *inner =
chk_scope_push(scope, 0, "tree %s:", chk_v2a(chk, &sdb->name));
if (sdb->pages.all == 0)
@ -1042,7 +1042,7 @@ __cold static int chk_tree(MDBX_chk_scope_t *const scope) {
}
line = histogram_dist(chk_line_feed(line), &sdb->histogram.deep,
"tree deep density", "1", false);
if (sdb != &chk->subdb_gc && sdb->histogram.nested_tree.count) {
if (sdb != &chk->table_gc && sdb->histogram.nested_tree.count) {
line = chk_print(chk_line_feed(line), "nested tree(s) %" PRIuSIZE,
sdb->histogram.nested_tree.count);
line = histogram_dist(line, &sdb->histogram.nested_tree, " density",
@ -1098,23 +1098,23 @@ __cold static int chk_tree(MDBX_chk_scope_t *const scope) {
}
typedef int(chk_kv_visitor)(MDBX_chk_scope_t *const scope,
MDBX_chk_subdb_t *sdb, const size_t record_number,
MDBX_chk_table_t *sdb, const size_t record_number,
const MDBX_val *key, const MDBX_val *data);
__cold static int chk_handle_kv(MDBX_chk_scope_t *const scope,
MDBX_chk_subdb_t *sdb,
MDBX_chk_table_t *sdb,
const size_t record_number, const MDBX_val *key,
const MDBX_val *data) {
MDBX_chk_internal_t *const chk = scope->internal;
int err = MDBX_SUCCESS;
assert(sdb->cookie);
if (chk->cb->subdb_handle_kv)
err = chk->cb->subdb_handle_kv(chk->usr, sdb, record_number, key, data);
if (chk->cb->table_handle_kv)
err = chk->cb->table_handle_kv(chk->usr, sdb, record_number, key, data);
return err ? err : chk_check_break(scope);
}
__cold static int chk_db(MDBX_chk_scope_t *const scope, MDBX_dbi dbi,
MDBX_chk_subdb_t *sdb, chk_kv_visitor *handler) {
MDBX_chk_table_t *sdb, chk_kv_visitor *handler) {
MDBX_chk_internal_t *const chk = scope->internal;
MDBX_chk_context_t *const usr = chk->usr;
MDBX_env *const env = usr->env;
@ -1365,34 +1365,34 @@ __cold static int chk_db(MDBX_chk_scope_t *const scope, MDBX_dbi dbi,
if (dbi != MAIN_DBI || (sdb->flags & (MDBX_DUPSORT | MDBX_DUPFIXED |
MDBX_REVERSEDUP | MDBX_INTEGERDUP)))
chk_object_issue(scope, "entry", record_count,
"unexpected sub-database", "node-flags 0x%x",
"unexpected table", "node-flags 0x%x",
node_flags(node));
else if (data.iov_len != sizeof(tree_t))
chk_object_issue(scope, "entry", record_count,
"wrong sub-database node size",
"wrong table node size",
"node-size %" PRIuSIZE " != %" PRIuSIZE, data.iov_len,
sizeof(tree_t));
else if (scope->stage == MDBX_chk_maindb)
/* count subDBs on the first pass */
/* count tables on the first pass */
sub_databases += 1;
else {
/* process subDBs on the second pass */
/* process tables on the second pass */
tree_t aligned_db;
memcpy(&aligned_db, data.iov_base, sizeof(aligned_db));
walk_sdb_t sdb_info = {.name = key};
sdb_info.internal = &aligned_db;
MDBX_chk_subdb_t *subdb;
err = chk_get_sdb(scope, &sdb_info, &subdb);
MDBX_chk_table_t *table;
err = chk_get_sdb(scope, &sdb_info, &table);
if (unlikely(err))
goto bailout;
if (subdb->cookie) {
if (table->cookie) {
err = chk_scope_begin(
chk, 0, MDBX_chk_subdbs, subdb, &usr->result.problems_kv,
"Processing subDB %s...", chk_v2a(chk, &subdb->name));
chk, 0, MDBX_chk_tables, table, &usr->result.problems_kv,
"Processing table %s...", chk_v2a(chk, &table->name));
if (likely(!err)) {
err = chk_db(usr->scope, (MDBX_dbi)-1, subdb, chk_handle_kv);
err = chk_db(usr->scope, (MDBX_dbi)-1, table, chk_handle_kv);
if (err != MDBX_EINTR && err != MDBX_RESULT_TRUE)
usr->result.subdb_processed += 1;
usr->result.table_processed += 1;
}
err = chk_scope_restore(scope, err);
if (unlikely(err))
@ -1400,7 +1400,7 @@ __cold static int chk_db(MDBX_chk_scope_t *const scope, MDBX_dbi dbi,
} else
chk_line_end(chk_flush(
chk_print(chk_line_begin(scope, MDBX_chk_processing),
"Skip processing %s...", chk_v2a(chk, &subdb->name))));
"Skip processing %s...", chk_v2a(chk, &table->name))));
}
} else if (handler) {
err = handler(scope, sdb, record_count, &key, &data);
@ -1430,16 +1430,16 @@ bailout:
chk_line_end(line);
}
if (scope->stage == MDBX_chk_maindb)
usr->result.subdb_total = sub_databases;
if (chk->cb->subdb_conclude)
err = chk->cb->subdb_conclude(usr, sdb, cursor, err);
usr->result.table_total = sub_databases;
if (chk->cb->table_conclude)
err = chk->cb->table_conclude(usr, sdb, cursor, err);
MDBX_chk_line_t *line = chk_line_begin(scope, MDBX_chk_resolution);
line = chk_print(line, "summary: %" PRIuSIZE " records,", record_count);
if (dups || (sdb->flags & (MDBX_DUPSORT | MDBX_DUPFIXED |
MDBX_REVERSEDUP | MDBX_INTEGERDUP)))
line = chk_print(line, " %" PRIuSIZE " dups,", dups);
if (sub_databases || dbi == MAIN_DBI)
line = chk_print(line, " %" PRIuSIZE " sub-databases,", sub_databases);
line = chk_print(line, " %" PRIuSIZE " tables,", sub_databases);
line = chk_print(line,
" %" PRIuSIZE " key's bytes,"
" %" PRIuSIZE " data's bytes,"
@ -1457,12 +1457,12 @@ bailout:
}
__cold static int chk_handle_gc(MDBX_chk_scope_t *const scope,
MDBX_chk_subdb_t *sdb,
MDBX_chk_table_t *sdb,
const size_t record_number, const MDBX_val *key,
const MDBX_val *data) {
MDBX_chk_internal_t *const chk = scope->internal;
MDBX_chk_context_t *const usr = chk->usr;
assert(sdb == &chk->subdb_gc);
assert(sdb == &chk->table_gc);
(void)sdb;
const char *bad = "";
pgno_t *iptr = data->iov_base;
@ -1532,9 +1532,9 @@ __cold static int chk_handle_gc(MDBX_chk_scope_t *const scope,
if (id == 0)
chk->pagemap[pgno] = -1 /* mark the pgno listed in GC */;
else if (id > 0) {
assert(id - 1 <= (intptr_t)ARRAY_LENGTH(chk->subdb));
assert(id - 1 <= (intptr_t)ARRAY_LENGTH(chk->table));
chk_object_issue(scope, "page", pgno, "already used", "by %s",
chk_v2a(chk, &chk->subdb[id - 1]->name));
chk_v2a(chk, &chk->table[id - 1]->name));
} else
chk_object_issue(scope, "page", pgno, "already listed in GC",
nullptr);
@ -1832,13 +1832,13 @@ __cold static int env_chk(MDBX_chk_scope_t *const scope) {
usr->result.problems_gc = usr->result.gc_tree_problems));
else {
err = chk_scope_begin(
chk, -1, MDBX_chk_gc, &chk->subdb_gc, &usr->result.problems_gc,
chk, -1, MDBX_chk_gc, &chk->table_gc, &usr->result.problems_gc,
"Processing %s by txn#%" PRIaTXN "...", subj_gc, txn->txnid);
if (likely(!err))
err = chk_db(usr->scope, FREE_DBI, &chk->subdb_gc, chk_handle_gc);
err = chk_db(usr->scope, FREE_DBI, &chk->table_gc, chk_handle_gc);
line = chk_line_begin(scope, MDBX_chk_info);
if (line) {
histogram_print(scope, line, &chk->subdb_gc.histogram.nested_tree,
histogram_print(scope, line, &chk->table_gc.histogram.nested_tree,
"span(s)", "single", false);
chk_line_end(line);
}
@ -1970,32 +1970,32 @@ __cold static int env_chk(MDBX_chk_scope_t *const scope) {
subj_main, subj_tree,
usr->result.problems_kv = usr->result.kv_tree_problems));
else {
err = chk_scope_begin(chk, 0, MDBX_chk_maindb, &chk->subdb_main,
err = chk_scope_begin(chk, 0, MDBX_chk_maindb, &chk->table_main,
&usr->result.problems_kv, "Processing %s...",
subj_main);
if (likely(!err))
err = chk_db(usr->scope, MAIN_DBI, &chk->subdb_main, chk_handle_kv);
err = chk_db(usr->scope, MAIN_DBI, &chk->table_main, chk_handle_kv);
chk_scope_restore(scope, err);
const char *const subj_subdbs = "sub-database(s)";
if (usr->result.problems_kv && usr->result.subdb_total)
const char *const subj_tables = "table(s)";
if (usr->result.problems_kv && usr->result.table_total)
chk_line_end(chk_print(chk_line_begin(scope, MDBX_chk_processing),
"Skip processing %s", subj_subdbs));
else if (usr->result.problems_kv == 0 && usr->result.subdb_total == 0)
"Skip processing %s", subj_tables));
else if (usr->result.problems_kv == 0 && usr->result.table_total == 0)
chk_line_end(chk_print(chk_line_begin(scope, MDBX_chk_info), "No %s",
subj_subdbs));
else if (usr->result.problems_kv == 0 && usr->result.subdb_total) {
subj_tables));
else if (usr->result.problems_kv == 0 && usr->result.table_total) {
err = chk_scope_begin(
chk, 1, MDBX_chk_subdbs, nullptr, &usr->result.problems_kv,
"Processing %s by txn#%" PRIaTXN "...", subj_subdbs, txn->txnid);
chk, 1, MDBX_chk_tables, nullptr, &usr->result.problems_kv,
"Processing %s by txn#%" PRIaTXN "...", subj_tables, txn->txnid);
if (!err)
err = chk_db(usr->scope, MAIN_DBI, &chk->subdb_main, nullptr);
err = chk_db(usr->scope, MAIN_DBI, &chk->table_main, nullptr);
if (usr->scope->subtotal_issues)
chk_line_end(chk_print(chk_line_begin(usr->scope, MDBX_chk_resolution),
"processed %" PRIuSIZE " of %" PRIuSIZE
" %s, %" PRIuSIZE " problems(s)",
usr->result.subdb_processed,
usr->result.subdb_total, subj_subdbs,
usr->result.table_processed,
usr->result.table_total, subj_tables,
usr->scope->subtotal_issues));
}
chk_scope_restore(scope, err);
@ -2035,20 +2035,20 @@ __cold int mdbx_env_chk(MDBX_env *env, const struct MDBX_chk_callbacks *cb,
chk->usr->env = env;
chk->flags = flags;
chk->subdb_gc.id = -1;
chk->subdb_gc.name.iov_base = MDBX_CHK_GC;
chk->subdb[FREE_DBI] = &chk->subdb_gc;
chk->table_gc.id = -1;
chk->table_gc.name.iov_base = MDBX_CHK_GC;
chk->table[FREE_DBI] = &chk->table_gc;
chk->subdb_main.id = -1;
chk->subdb_main.name.iov_base = MDBX_CHK_MAIN;
chk->subdb[MAIN_DBI] = &chk->subdb_main;
chk->table_main.id = -1;
chk->table_main.name.iov_base = MDBX_CHK_MAIN;
chk->table[MAIN_DBI] = &chk->table_main;
chk->monotime_timeout =
timeout_seconds_16dot16
? osal_16dot16_to_monotime(timeout_seconds_16dot16) + osal_monotime()
: 0;
chk->usr->scope_nesting = 0;
chk->usr->result.subdbs = (const void *)&chk->subdb;
chk->usr->result.tables = (const void *)&chk->table;
MDBX_chk_scope_t *const top = chk->scope_stack;
top->verbosity = verbosity;
@ -2080,8 +2080,8 @@ __cold int mdbx_env_chk(MDBX_env *env, const struct MDBX_chk_callbacks *cb,
// doit
if (likely(!rc)) {
chk->subdb_gc.flags = ctx->txn->dbs[FREE_DBI].flags;
chk->subdb_main.flags = ctx->txn->dbs[MAIN_DBI].flags;
chk->table_gc.flags = ctx->txn->dbs[FREE_DBI].flags;
chk->table_main.flags = ctx->txn->dbs[MAIN_DBI].flags;
rc = env_chk(top);
}


@ -39,10 +39,10 @@ MDBX_MAYBE_UNUSED MDBX_INTERNAL bool pv2pages_verify(void);
* LEAF_NODE_MAX = even_floor(PAGESPACE / 2 - sizeof(indx_t));
* DATALEN_NO_OVERFLOW = LEAF_NODE_MAX - NODESIZE - KEYLEN_MAX;
*
* - SubDatabase-node must fit into one leaf-page:
* SUBDB_NAME_MAX = LEAF_NODE_MAX - node_hdr_len - sizeof(tree_t);
* - Table-node must fit into one leaf-page:
* TABLE_NAME_MAX = LEAF_NODE_MAX - node_hdr_len - sizeof(tree_t);
*
* - Dupsort values are themselves keys in a dupsort-subdb and cannot be longer
* - Dupsort values are themselves keys in a dupsort-table and cannot be longer
* than KEYLEN_MAX. But a dupsort node must not be greater than LEAF_NODE_MAX,
* since a dupsort value cannot be placed on a large/overflow page:
* DUPSORT_DATALEN_MAX = min(KEYLEN_MAX,


@ -187,7 +187,7 @@ __cold static int stat_acc(const MDBX_txn *txn, MDBX_stat *st, size_t bytes) {
if (!(txn->dbs[MAIN_DBI].flags & MDBX_DUPSORT) &&
txn->dbs[MAIN_DBI].items /* TODO: use `md_subs` field */) {
/* scan and account for not-opened named subDBs */
/* scan and account for not-opened named tables */
err = tree_search(&cx.outer, nullptr, Z_FIRST);
while (err == MDBX_SUCCESS) {
const page_t *mp = cx.outer.pg[cx.outer.top];
@ -197,7 +197,7 @@ __cold static int stat_acc(const MDBX_txn *txn, MDBX_stat *st, size_t bytes) {
continue;
if (unlikely(node_ds(node) != sizeof(tree_t))) {
ERROR("%s/%d: %s %zu", "MDBX_CORRUPTED", MDBX_CORRUPTED,
"invalid subDb node size", node_ds(node));
"invalid table node size", node_ds(node));
return MDBX_CORRUPTED;
}


@ -860,7 +860,7 @@ __hot int cursor_put(MDBX_cursor *mc, const MDBX_val *key, MDBX_val *data,
}
} else {
csr_t csr =
/* olddata may not be updated in case DUPFIX-page of dupfix-subDB */
/* olddata may not be updated in case DUPFIX-page of dupfix-table */
cursor_seek(mc, (MDBX_val *)key, &old_data, MDBX_SET);
rc = csr.err;
exact = csr.exact;
@ -878,7 +878,7 @@ __hot int cursor_put(MDBX_cursor *mc, const MDBX_val *key, MDBX_val *data,
eASSERT(env,
data->iov_len == 0 && (old_data.iov_len == 0 ||
/* olddata may not be updated in case
DUPFIX-page of dupfix-subDB */
DUPFIX-page of dupfix-table */
(mc->tree->flags & MDBX_DUPFIXED)));
return MDBX_SUCCESS;
}
@ -1630,7 +1630,7 @@ __hot int cursor_del(MDBX_cursor *mc, unsigned flags) {
/* If sub-DB still has entries, we're done */
if (mc->subcur->nested_tree.items) {
if (node_flags(node) & N_SUBDATA) {
/* update subDB info */
/* update table info */
mc->subcur->nested_tree.mod_txnid = mc->txn->txnid;
memcpy(node_data(node), &mc->subcur->nested_tree, sizeof(tree_t));
} else {


@ -88,7 +88,7 @@ __noinline int dbi_import(MDBX_txn *txn, const size_t dbi) {
if (parent) {
/* nested write transaction */
int rc = dbi_check(parent, dbi);
/* copy the subDB state, clearing the new-flags. */
/* copy the table state, clearing the new-flags. */
eASSERT(env, txn->dbi_seqs == parent->dbi_seqs);
txn->dbi_state[dbi] =
parent->dbi_state[dbi] & ~(DBI_FRESH | DBI_CREAT | DBI_DIRTY);
@ -259,15 +259,15 @@ int dbi_bind(MDBX_txn *txn, const size_t dbi, unsigned user_flags,
/* If the dbi was already used, we consider four variants correct:
* 1) user_flags equal MDBX_DB_ACCEDE
* = assume the user is opening an existing subDb,
* = assume the user is opening an existing table,
* and the validation code will not allow setting other comparators.
* 2) user_flags are zero, and both comparators are empty/null or equal to the current ones
* = assume the user is opening an existing subDb
* = assume the user is opening an existing table
* the old way, with zero/default flags.
* 3) user_flags match, and the comparators are unset or the same
* = assume the user is opening the subDb specifying all parameters;
* 4) user_flags differ, but the subDb is empty and the MDBX_CREATE flag is set
* = assume the user is re-creating the subDb;
* = assume the user is opening the table specifying all parameters;
* 4) user_flags differ, but the table is empty and the MDBX_CREATE flag is set
* = assume the user is re-creating the table;
*/
if ((user_flags & ~MDBX_CREATE) !=
(unsigned)(env->dbs_flags[dbi] & DB_PERSISTENT_FLAGS)) {
@ -291,7 +291,7 @@ int dbi_bind(MDBX_txn *txn, const size_t dbi, unsigned user_flags,
if (unlikely(txn->dbs[dbi].leaf_pages))
return /* FIXME: return extended info */ MDBX_INCOMPATIBLE;
/* Пересоздаём subDB если там пусто */
/* Пересоздаём table если там пусто */
if (unlikely(txn->cursors[dbi]))
return MDBX_DANGLING_DBI;
env->dbs_flags[dbi] = DB_POISON;
@ -463,7 +463,7 @@ static int dbi_open_locked(MDBX_txn *txn, unsigned user_flags, MDBX_dbi *dbi,
return MDBX_INCOMPATIBLE;
if (!MDBX_DISABLE_VALIDATION && unlikely(body.iov_len != sizeof(tree_t))) {
ERROR("%s/%d: %s %zu", "MDBX_CORRUPTED", MDBX_CORRUPTED,
"invalid subDb node size", body.iov_len);
"invalid table node size", body.iov_len);
return MDBX_CORRUPTED;
}
memcpy(&txn->dbs[slot], body.iov_base, sizeof(tree_t));
@ -977,8 +977,8 @@ __cold const tree_t *dbi_dig(const MDBX_txn *txn, const size_t dbi,
return fallback;
}
__cold int mdbx_enumerate_subdb(const MDBX_txn *txn, MDBX_subdb_enum_func *func,
void *ctx) {
__cold int mdbx_enumerate_tables(const MDBX_txn *txn,
MDBX_table_enum_func *func, void *ctx) {
if (unlikely(!func))
return MDBX_EINVAL;


@ -96,7 +96,7 @@ typedef struct clc {
size_t lmin, lmax; /* min/max length constraints */
} clc_t;
/* Auxiliary information about a subDB.
/* Auxiliary information about a table.
*
* The combined requirements:
* 1. Transactions and the main cursor need all the fields.
@ -136,7 +136,7 @@ typedef struct clc2 {
struct kvx {
clc2_t clc;
MDBX_val name; /* subDB name */
MDBX_val name; /* table name */
};
/* Non-shared DBI state flags inside transaction */

View File

@ -191,7 +191,7 @@ typedef enum page_type {
*
* P_SUBP sub-pages are small leaf "pages" with duplicate data.
* A node with flag N_DUPDATA but not N_SUBDATA contains a sub-page.
* (Duplicate data can also go in sub-databases, which use normal pages.)
* (Duplicate data can also go in tables, which use normal pages.)
*
* P_META pages contain meta_t, the start point of an MDBX snapshot.
*
@ -225,7 +225,7 @@ typedef struct page {
* Leaf node flags describe node contents. N_BIGDATA says the node's
* data part is the page number of an overflow page with actual data.
* N_DUPDATA and N_SUBDATA can be combined giving duplicate data in
* a sub-page/sub-database, and named databases (just N_SUBDATA). */
* a sub-page/table, and named databases (just N_SUBDATA). */
typedef struct node {
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
union {
@ -255,7 +255,7 @@ typedef struct node {
typedef enum node_flags {
N_BIGDATA = 0x01 /* data put on large page */,
N_SUBDATA = 0x02 /* data is a sub-database */,
N_SUBDATA = 0x02 /* data is a table */,
N_DUPDATA = 0x04 /* data has duplicates */
} node_flags_t;

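As a rough illustration of how these flag combinations are interpreted, here is a sketch that mirrors checks such as `(node_flags & (N_DUPDATA | N_SUBDATA)) != N_SUBDATA` used elsewhere in the codebase (the enum values are copied from above; the classification strings are illustrative):

```c
#include <assert.h>
#include <string.h>

enum { N_BIGDATA = 0x01, N_SUBDATA = 0x02, N_DUPDATA = 0x04 };

/* Classify a leaf node by its flags, per the comments above. */
static const char *classify_node(unsigned flags) {
  if (flags & N_BIGDATA)
    return "data on a large/overflow page";
  if ((flags & (N_DUPDATA | N_SUBDATA)) == N_SUBDATA)
    return "named table record";
  if ((flags & (N_DUPDATA | N_SUBDATA)) == (N_DUPDATA | N_SUBDATA))
    return "duplicates in a nested tree";
  if (flags & N_DUPDATA)
    return "duplicates in a sub-page";
  return "plain key/value";
}
```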
View File

@ -22,7 +22,7 @@ mdbx_chk \- MDBX checking tool
[\c
.BR \-i ]
[\c
.BI \-s \ subdb\fR]
.BI \-s \ table\fR]
.BR \ dbpath
.SH DESCRIPTION
The
@ -69,8 +69,8 @@ pages.
Ignore wrong-order errors, which are likely false positives if custom
comparator(s) were used.
.TP
.BR \-s \ subdb
Verify and show info only for a specific subdatabase.
.BR \-s \ table
Verify and show info only for a specific table.
.TP
.BR \-0 | \-1 | \-2
Use the specific meta-page 0, 1, or 2 for checking.

View File

@ -11,7 +11,7 @@ mdbx_drop \- MDBX database delete tool
[\c
.BR \-d ]
[\c
.BI \-s \ subdb\fR]
.BI \-s \ table\fR]
[\c
.BR \-n ]
.BR \ dbpath
@ -28,8 +28,8 @@ Write the library version number to the standard output, and exit.
.BR \-d
Delete the specified database, don't just empty it.
.TP
.BR \-s \ subdb
Operate on a specific subdatabase. If no database is specified, only the main database is dropped.
.BR \-s \ table
Operate on a specific table. If no table is specified, only the main table is dropped.
.TP
.BR \-n
Operate on an MDBX database which does not use subdirectories.

View File

@ -19,7 +19,7 @@ mdbx_dump \- MDBX environment export tool
.BR \-p ]
[\c
.BR \-a \ |
.BI \-s \ subdb\fR]
.BI \-s \ table\fR]
[\c
.BR \-r ]
[\c
@ -58,10 +58,10 @@ are considered printing characters, and databases dumped in this manner may
be less portable to external systems.
.TP
.BR \-a
Dump all of the subdatabases in the environment.
Dump all of the tables in the environment.
.TP
.BR \-s \ subdb
Dump a specific subdatabase. If no database is specified, only the main database is dumped.
.BR \-s \ table
Dump a specific table. If no table is specified, only the main table is dumped.
.TP
.BR \-r
Rescue mode. Ignore some errors to dump a corrupted DB.

View File

@ -16,7 +16,7 @@ mdbx_load \- MDBX environment import tool
[\c
.BI \-f \ file\fR]
[\c
.BI \-s \ subdb\fR]
.BI \-s \ table\fR]
[\c
.BR \-N ]
[\c
@ -71,11 +71,11 @@ on a database that uses custom compare functions.
.BR \-f \ file
Read from the specified file instead of from the standard input.
.TP
.BR \-s \ subdb
Load a specific subdatabase. If no database is specified, data is loaded into the main database.
.BR \-s \ table
Load a specific table. If no table is specified, data is loaded into the main table.
.TP
.BR \-N
Don't overwrite existing records when loading into an already existing database; just skip them.
Don't overwrite existing records when loading into an already existing table; just skip them.
.TP
.BR \-T
Load data from simple text files. The input must be paired lines of text, where the first

View File

@ -21,7 +21,7 @@ mdbx_stat \- MDBX environment status tool
.BR \-r [ r ]]
[\c
.BR \-a \ |
.BI \-s \ subdb\fR]
.BI \-s \ table\fR]
.BR \ dbpath
[\c
.BR \-n ]
@ -61,10 +61,10 @@ table and clear them. The reader table will be printed again
after the check is performed.
.TP
.BR \-a
Display the status of all of the subdatabases in the environment.
Display the status of all of the tables in the environment.
.TP
.BR \-s \ subdb
Display the status of a specific subdatabase.
.BR \-s \ table
Display the status of a specific table.
.TP
.BR \-n
Display the status of an MDBX database which does not use subdirectories.

View File

@ -84,9 +84,9 @@ int mdbx_dbi_sequence(MDBX_txn *txn, MDBX_dbi dbi, uint64_t *result,
* - change the semantics of setting/updating mod_txnid, tying it
* strictly to b-tree changes, but not to attribute changes;
* - update mod_txnid when committing nested transactions;
* - for DBI-handles of user subDb, DBI_DIRTY can (apparently) be kept
* - for DBI-handles of user tables, DBI_DIRTY can (apparently) be kept
* as an indication that the corresponding record of the
* subDb in MainDB needs updating, raising DBI_DIRTY together with the
* table in MainDB needs updating, raising DBI_DIRTY together with the
* mod_txnid update, including on sequence updates.
* - for MAIN_DBI, on a sequence update one should not raise DBI_DIRTY
* and/or update mod_txnid, but only raise MDBX_TXN_DIRTY.
@ -163,7 +163,7 @@ __cold const char *mdbx_liberr2str(int errnum) {
"MDBX_BAD_TXN: Transaction is not valid for requested operation,"
" e.g. had errored and be must aborted, has a child, or is invalid",
"MDBX_BAD_VALSIZE: Invalid size or alignment of key or data"
" for target database, either invalid subDB name",
" for target database, either invalid table name",
"MDBX_BAD_DBI: The specified DBI-handle is invalid"
" or changed by another thread/transaction",
"MDBX_PROBLEM: Unexpected internal error, transaction should be aborted",
@ -206,7 +206,7 @@ __cold const char *mdbx_liberr2str(int errnum) {
" please keep one and remove unused other";
case MDBX_DANGLING_DBI:
return "MDBX_DANGLING_DBI: Some cursors and/or other resources should be"
" closed before subDb or corresponding DBI-handle could be (re)used";
" closed before table or corresponding DBI-handle could be (re)used";
case MDBX_OUSTED:
return "MDBX_OUSTED: The parked read transaction was outed for the sake"
" of recycling old MVCC snapshots";

View File

@ -96,14 +96,14 @@ MDBX_INTERNAL int __must_check_result env_page_auxbuffer(MDBX_env *env);
MDBX_INTERNAL unsigned env_setup_pagesize(MDBX_env *env, const size_t pagesize);
/* tree.c */
MDBX_INTERNAL int tree_drop(MDBX_cursor *mc, const bool may_have_subDBs);
MDBX_INTERNAL int tree_drop(MDBX_cursor *mc, const bool may_have_tables);
MDBX_INTERNAL int __must_check_result tree_rebalance(MDBX_cursor *mc);
MDBX_INTERNAL int __must_check_result tree_propagate_key(MDBX_cursor *mc,
const MDBX_val *key);
MDBX_INTERNAL void recalculate_merge_thresholds(MDBX_env *env);
MDBX_INTERNAL void recalculate_subpage_thresholds(MDBX_env *env);
/* subdb.c */
/* table.c */
MDBX_INTERNAL int __must_check_result sdb_fetch(MDBX_txn *txn, size_t dbi);
MDBX_INTERNAL int __must_check_result sdb_setup(const MDBX_env *env,
kvx_t *const kvx,

View File

@ -41,7 +41,7 @@ int sdb_fetch(MDBX_txn *txn, size_t dbi) {
rc = tree_search(&couple.outer, &kvx->name, 0);
if (unlikely(rc != MDBX_SUCCESS)) {
bailout:
NOTICE("dbi %zu refs to inaccessible subDB `%*s` for txn %" PRIaTXN
NOTICE("dbi %zu refs to inaccessible table `%*s` for txn %" PRIaTXN
" (err %d)",
dbi, (int)kvx->name.iov_len, (const char *)kvx->name.iov_base,
txn->txnid, rc);
@ -55,7 +55,7 @@ int sdb_fetch(MDBX_txn *txn, size_t dbi) {
goto bailout;
}
if (unlikely((node_flags(nsr.node) & (N_DUPDATA | N_SUBDATA)) != N_SUBDATA)) {
NOTICE("dbi %zu refs to not a named subDB `%*s` for txn %" PRIaTXN " (%s)",
NOTICE("dbi %zu refs to not a named table `%*s` for txn %" PRIaTXN " (%s)",
dbi, (int)kvx->name.iov_len, (const char *)kvx->name.iov_base,
txn->txnid, "wrong flags");
return MDBX_INCOMPATIBLE; /* not a named DB */
@ -67,7 +67,7 @@ int sdb_fetch(MDBX_txn *txn, size_t dbi) {
return rc;
if (unlikely(data.iov_len != sizeof(tree_t))) {
NOTICE("dbi %zu refs to not a named subDB `%*s` for txn %" PRIaTXN " (%s)",
NOTICE("dbi %zu refs to not a named table `%*s` for txn %" PRIaTXN " (%s)",
dbi, (int)kvx->name.iov_len, (const char *)kvx->name.iov_base,
txn->txnid, "wrong rec-size");
return MDBX_INCOMPATIBLE; /* not a named DB */
@ -78,7 +78,7 @@ int sdb_fetch(MDBX_txn *txn, size_t dbi) {
* have dropped and recreated the DB with other flags. */
tree_t *const db = &txn->dbs[dbi];
if (unlikely((db->flags & DB_PERSISTENT_FLAGS) != flags)) {
NOTICE("dbi %zu refs to the re-created subDB `%*s` for txn %" PRIaTXN
NOTICE("dbi %zu refs to the re-created table `%*s` for txn %" PRIaTXN
" with different flags (present 0x%X != wanna 0x%X)",
dbi, (int)kvx->name.iov_len, (const char *)kvx->name.iov_base,
txn->txnid, db->flags & DB_PERSISTENT_FLAGS, flags);

View File

@ -55,7 +55,7 @@ MDBX_env *env;
MDBX_txn *txn;
unsigned verbose = 0;
bool quiet;
MDBX_val only_subdb;
MDBX_val only_table;
int stuck_meta = -1;
MDBX_chk_context_t chk;
bool turn_meta = false;
@ -95,7 +95,7 @@ static bool silently(enum MDBX_chk_severity severity) {
chk.scope ? chk.scope->verbosity >> MDBX_chk_severity_prio_shift
: verbose + (MDBX_chk_result >> MDBX_chk_severity_prio_shift);
int prio = (severity >> MDBX_chk_severity_prio_shift);
if (chk.scope && chk.scope->stage == MDBX_chk_subdbs && verbose < 2)
if (chk.scope && chk.scope->stage == MDBX_chk_tables && verbose < 2)
prio += 1;
return quiet || cutoff < ((prio > 0) ? prio : 0);
}
@ -270,14 +270,14 @@ static void scope_pop(MDBX_chk_context_t *ctx, MDBX_chk_scope_t *scope,
flush();
}
static MDBX_chk_user_subdb_cookie_t *subdb_filter(MDBX_chk_context_t *ctx,
static MDBX_chk_user_table_cookie_t *table_filter(MDBX_chk_context_t *ctx,
const MDBX_val *name,
MDBX_db_flags_t flags) {
(void)ctx;
(void)flags;
return (!only_subdb.iov_base ||
(only_subdb.iov_len == name->iov_len &&
memcmp(only_subdb.iov_base, name->iov_base, name->iov_len) == 0))
return (!only_table.iov_base ||
(only_table.iov_len == name->iov_len &&
memcmp(only_table.iov_base, name->iov_base, name->iov_len) == 0))
? (void *)(intptr_t)-1
: nullptr;
}
@ -344,7 +344,7 @@ static void print_format(MDBX_chk_line_t *line, const char *fmt, va_list args) {
static const MDBX_chk_callbacks_t cb = {.check_break = check_break,
.scope_push = scope_push,
.scope_pop = scope_pop,
.subdb_filter = subdb_filter,
.table_filter = table_filter,
.stage_begin = stage_begin,
.stage_end = stage_end,
.print_begin = print_begin,
@ -357,7 +357,7 @@ static void usage(char *prog) {
fprintf(
stderr,
"usage: %s "
"[-V] [-v] [-q] [-c] [-0|1|2] [-w] [-d] [-i] [-s subdb] [-u|U] dbpath\n"
"[-V] [-v] [-q] [-c] [-0|1|2] [-w] [-d] [-i] [-s table] [-u|U] dbpath\n"
" -V\t\tprint version and exit\n"
" -v\t\tmore verbose, could be repeated upto 9 times for extra details\n"
" -q\t\tbe quiet\n"
@ -365,7 +365,7 @@ static void usage(char *prog) {
" -w\t\twrite-mode checking\n"
" -d\t\tdisable page-by-page traversal of B-tree\n"
" -i\t\tignore wrong order errors (for custom comparators case)\n"
" -s subdb\tprocess a specific subdatabase only\n"
" -s table\tprocess a specific subdatabase only\n"
" -u\t\twarmup database before checking\n"
" -U\t\twarmup and try lock database pages in memory before checking\n"
" -0|1|2\tforce using specific meta-page 0, 1, or 2 for checking\n"
@ -380,7 +380,7 @@ static int conclude(MDBX_chk_context_t *ctx) {
if (ctx->result.total_problems == 1 && ctx->result.problems_meta == 1 &&
(chk_flags &
(MDBX_CHK_SKIP_BTREE_TRAVERSAL | MDBX_CHK_SKIP_KV_TRAVERSAL)) == 0 &&
(env_flags & MDBX_RDONLY) == 0 && !only_subdb.iov_base &&
(env_flags & MDBX_RDONLY) == 0 && !only_table.iov_base &&
stuck_meta < 0 && ctx->result.steady_txnid < ctx->result.recent_txnid) {
const size_t step_lineno =
print(MDBX_chk_resolution,
@ -399,7 +399,7 @@ static int conclude(MDBX_chk_context_t *ctx) {
if (turn_meta && stuck_meta >= 0 &&
(chk_flags &
(MDBX_CHK_SKIP_BTREE_TRAVERSAL | MDBX_CHK_SKIP_KV_TRAVERSAL)) == 0 &&
!only_subdb.iov_base &&
!only_table.iov_base &&
(env_flags & (MDBX_RDONLY | MDBX_EXCLUSIVE)) == MDBX_EXCLUSIVE) {
const bool successful_check =
(err | ctx->result.total_problems | ctx->result.problems_meta) == 0;
@ -529,11 +529,11 @@ int main(int argc, char *argv[]) {
chk_flags |= MDBX_CHK_SKIP_BTREE_TRAVERSAL;
break;
case 's':
if (only_subdb.iov_base && strcmp(only_subdb.iov_base, optarg))
if (only_table.iov_base && strcmp(only_table.iov_base, optarg))
usage(prog);
else {
only_subdb.iov_base = optarg;
only_subdb.iov_len = strlen(optarg);
only_table.iov_base = optarg;
only_table.iov_len = strlen(optarg);
}
break;
case 'i':
@ -574,7 +574,7 @@ int main(int argc, char *argv[]) {
"write-mode must be enabled to turn to the specified meta-page.");
rc = EXIT_INTERRUPTED;
}
if (only_subdb.iov_base || (chk_flags & (MDBX_CHK_SKIP_BTREE_TRAVERSAL |
if (only_table.iov_base || (chk_flags & (MDBX_CHK_SKIP_BTREE_TRAVERSAL |
MDBX_CHK_SKIP_KV_TRAVERSAL))) {
error_fmt(
"whole database checking with b-tree traversal is required to turn "

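The name matching done by table_filter earlier in this file reduces to a small predicate. A stand-alone sketch, where struct val is a simplified stand-in for MDBX_val:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified stand-in for MDBX_val. */
struct val {
  void *iov_base;
  size_t iov_len;
};

/* An unset filter accepts every table; otherwise the candidate name
 * must match the filter byte-for-byte, length included. */
static bool table_matches(const struct val *filter, const struct val *name) {
  return !filter->iov_base ||
         (filter->iov_len == name->iov_len &&
          memcmp(filter->iov_base, name->iov_base, name->iov_len) == 0);
}
```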
View File

@ -46,7 +46,7 @@ static void usage(void) {
" -V\t\tprint version and exit\n"
" -q\t\tbe quiet\n"
" -d\t\tdelete the specified database, don't just empty it\n"
" -s name\tdrop the specified named subDB\n"
" -s name\tdrop the specified named table\n"
" \t\tby default empty the main DB\n",
prog);
exit(EXIT_FAILURE);

View File

@ -215,16 +215,16 @@ static void usage(void) {
fprintf(
stderr,
"usage: %s "
"[-V] [-q] [-f file] [-l] [-p] [-r] [-a|-s subdb] [-u|U] "
"[-V] [-q] [-f file] [-l] [-p] [-r] [-a|-s table] [-u|U] "
"dbpath\n"
" -V\t\tprint version and exit\n"
" -q\t\tbe quiet\n"
" -f\t\twrite to file instead of stdout\n"
" -l\t\tlist subDBs and exit\n"
" -l\t\tlist tables and exit\n"
" -p\t\tuse printable characters\n"
" -r\t\trescue mode (ignore errors to dump corrupted DB)\n"
" -a\t\tdump main DB and all subDBs\n"
" -s name\tdump only the specified named subDB\n"
" -a\t\tdump main DB and all tables\n"
" -s name\tdump only the specified named table\n"
" -u\t\twarmup database before dumping\n"
" -U\t\twarmup and try lock database pages in memory before dumping\n"
" \t\tby default dump only the main DB\n",

View File

@ -477,10 +477,10 @@ static void usage(void) {
" -a\t\tappend records in input order (required for custom "
"comparators)\n"
" -f file\tread from file instead of stdin\n"
" -s name\tload into specified named subDB\n"
" -s name\tload into specified named table\n"
" -N\t\tdon't overwrite existing records when loading, just skip "
"ones\n"
" -p\t\tpurge subDB before loading\n"
" -p\t\tpurge table before loading\n"
" -T\t\tread plaintext\n"
" -r\t\trescue mode (ignore errors to load corrupted DB dump)\n"
" -n\t\tdon't use subdirectory for newly created database "

View File

@ -47,15 +47,15 @@ static void print_stat(MDBX_stat *ms) {
static void usage(const char *prog) {
fprintf(stderr,
"usage: %s [-V] [-q] [-e] [-f[f[f]]] [-r[r]] [-a|-s name] dbpath\n"
"usage: %s [-V] [-q] [-e] [-f[f[f]]] [-r[r]] [-a|-s table] dbpath\n"
" -V\t\tprint version and exit\n"
" -q\t\tbe quiet\n"
" -p\t\tshow statistics of page operations for current session\n"
" -e\t\tshow whole DB info\n"
" -f\t\tshow GC info\n"
" -r\t\tshow readers\n"
" -a\t\tprint stat of main DB and all subDBs\n"
" -s name\tprint stat of only the specified named subDB\n"
" -a\t\tprint stat of main DB and all tables\n"
" -s table\tprint stat of only the specified named table\n"
" \t\tby default print stat of only the main DB\n",
prog);
exit(EXIT_FAILURE);
@ -104,7 +104,7 @@ int main(int argc, char *argv[]) {
MDBX_envinfo mei;
prog = argv[0];
char *envname;
char *subname = nullptr;
char *table = nullptr;
bool alldbs = false, envinfo = false, pgop = false;
int freinfo = 0, rdrinfo = 0;
@ -143,7 +143,7 @@ int main(int argc, char *argv[]) {
pgop = true;
break;
case 'a':
if (subname)
if (table)
usage(prog);
alldbs = true;
break;
@ -161,7 +161,7 @@ int main(int argc, char *argv[]) {
case 's':
if (alldbs)
usage(prog);
subname = optarg;
table = optarg;
break;
default:
usage(prog);
@ -199,7 +199,7 @@ int main(int argc, char *argv[]) {
return EXIT_FAILURE;
}
if (alldbs || subname) {
if (alldbs || table) {
rc = mdbx_env_set_maxdbs(env, 2);
if (unlikely(rc != MDBX_SUCCESS)) {
error("mdbx_env_set_maxdbs", rc);
@ -327,7 +327,7 @@ int main(int argc, char *argv[]) {
} else
printf(" No stale readers.\n");
}
if (!(subname || alldbs || freinfo))
if (!(table || alldbs || freinfo))
goto txn_abort;
}
@ -450,7 +450,7 @@ int main(int argc, char *argv[]) {
printf(" GC: %" PRIaPGNO " pages\n", pages);
}
rc = mdbx_dbi_open(txn, subname, MDBX_DB_ACCEDE, &dbi);
rc = mdbx_dbi_open(txn, table, MDBX_DB_ACCEDE, &dbi);
if (unlikely(rc != MDBX_SUCCESS)) {
error("mdbx_dbi_open", rc);
goto txn_abort;
@ -462,7 +462,7 @@ int main(int argc, char *argv[]) {
error("mdbx_dbi_stat", rc);
goto txn_abort;
}
printf("Status of %s\n", subname ? subname : "Main DB");
printf("Status of %s\n", table ? table : "Main DB");
print_stat(&mst);
if (alldbs) {
@ -476,16 +476,16 @@ int main(int argc, char *argv[]) {
MDBX_val key;
while (MDBX_SUCCESS ==
(rc = mdbx_cursor_get(cursor, &key, nullptr, MDBX_NEXT_NODUP))) {
MDBX_dbi subdbi;
MDBX_dbi xdbi;
if (memchr(key.iov_base, '\0', key.iov_len))
continue;
subname = osal_malloc(key.iov_len + 1);
memcpy(subname, key.iov_base, key.iov_len);
subname[key.iov_len] = '\0';
rc = mdbx_dbi_open(txn, subname, MDBX_DB_ACCEDE, &subdbi);
table = osal_malloc(key.iov_len + 1);
memcpy(table, key.iov_base, key.iov_len);
table[key.iov_len] = '\0';
rc = mdbx_dbi_open(txn, table, MDBX_DB_ACCEDE, &xdbi);
if (rc == MDBX_SUCCESS)
printf("Status of %s\n", subname);
osal_free(subname);
printf("Status of %s\n", table);
osal_free(table);
if (unlikely(rc != MDBX_SUCCESS)) {
if (rc == MDBX_INCOMPATIBLE)
continue;
@ -493,14 +493,14 @@ int main(int argc, char *argv[]) {
goto txn_abort;
}
rc = mdbx_dbi_stat(txn, subdbi, &mst, sizeof(mst));
rc = mdbx_dbi_stat(txn, xdbi, &mst, sizeof(mst));
if (unlikely(rc != MDBX_SUCCESS)) {
error("mdbx_dbi_stat", rc);
goto txn_abort;
}
print_stat(&mst);
rc = mdbx_dbi_close(env, subdbi);
rc = mdbx_dbi_close(env, xdbi);
if (unlikely(rc != MDBX_SUCCESS)) {
error("mdbx_dbi_close", rc);
goto txn_abort;

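The key-to-name step inside the enumeration loop above can be factored out. A sketch, assuming (as the loop does) that main-DB keys containing a NUL byte cannot name user tables:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy a main-DB key into a NUL-terminated string suitable for
 * mdbx_dbi_open(); keys embedding a NUL byte are rejected, matching
 * the memchr() skip in the loop above. Caller frees the result. */
static char *key_to_table_name(const void *base, size_t len) {
  if (memchr(base, '\0', len))
    return NULL; /* not a user table name */
  char *name = malloc(len + 1);
  if (name) {
    memcpy(name, base, len);
    name[len] = '\0';
  }
  return name;
}
```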
View File

@ -49,15 +49,15 @@ void recalculate_merge_thresholds(MDBX_env *env) {
: bytes / 4 /* 25 % */));
}
int tree_drop(MDBX_cursor *mc, const bool may_have_subDBs) {
int tree_drop(MDBX_cursor *mc, const bool may_have_tables) {
MDBX_txn *txn = mc->txn;
int rc = tree_search(mc, nullptr, Z_FIRST);
if (likely(rc == MDBX_SUCCESS)) {
/* DUPSORT sub-DBs have no large-pages/subDBs. Omit scanning leaves.
/* DUPSORT sub-DBs have no large-pages/tables. Omit scanning leaves.
* This also avoids any P_DUPFIX pages, which have no nodes.
* Also if the DB doesn't have sub-DBs and has no large/overflow
* pages, omit scanning leaves. */
if (!(may_have_subDBs | mc->tree->large_pages))
if (!(may_have_tables | mc->tree->large_pages))
cursor_pop(mc);
rc = pnl_need(&txn->tw.retired_pages, (size_t)mc->tree->branch_pages +
@ -81,11 +81,11 @@ int tree_drop(MDBX_cursor *mc, const bool may_have_subDBs) {
rc = page_retire_ex(mc, node_largedata_pgno(node), nullptr, 0);
if (unlikely(rc != MDBX_SUCCESS))
goto bailout;
if (!(may_have_subDBs | mc->tree->large_pages))
if (!(may_have_tables | mc->tree->large_pages))
goto pop;
} else if (node_flags(node) & N_SUBDATA) {
if (unlikely((node_flags(node) & N_DUPDATA) == 0)) {
rc = /* disallowing implicit subDB deletion */ MDBX_INCOMPATIBLE;
rc = /* disallowing implicit table deletion */ MDBX_INCOMPATIBLE;
goto bailout;
}
rc = cursor_dupsort_setup(mc, node, mp);

View File

@ -685,7 +685,7 @@ int mdbx_txn_commit_ex(MDBX_txn *txn, MDBX_commit_latency *latency) {
txn->dbs[FREE_DBI].root);
if (txn->n_dbi > CORE_DBS) {
/* Update subDB root pointers */
/* Update table root pointers */
cursor_couple_t cx;
rc = cursor_init(&cx.outer, txn, MAIN_DBI);
if (unlikely(rc != MDBX_SUCCESS))

View File

@ -105,7 +105,7 @@ __cold static int walk_pgno(walk_ctx_t *ctx, walk_sdb_t *sdb, const pgno_t pgno,
case N_SUBDATA /* sub-db */: {
if (unlikely(node_data_size != sizeof(tree_t))) {
ERROR("%s/%d: %s %u", "MDBX_CORRUPTED", MDBX_CORRUPTED,
"invalid subDb node size", (unsigned)node_data_size);
"invalid table node size", (unsigned)node_data_size);
assert(err == MDBX_CORRUPTED);
err = MDBX_CORRUPTED;
}
@ -227,11 +227,11 @@ __cold static int walk_pgno(walk_ctx_t *ctx, walk_sdb_t *sdb, const pgno_t pgno,
} else {
tree_t aligned_db;
memcpy(&aligned_db, node_data(node), sizeof(aligned_db));
walk_sdb_t subdb = {{node_key(node), node_ks(node)}, nullptr, nullptr};
subdb.internal = &aligned_db;
walk_sdb_t table = {{node_key(node), node_ks(node)}, nullptr, nullptr};
table.internal = &aligned_db;
assert(err == MDBX_SUCCESS);
ctx->deep += 1;
err = walk_sdb(ctx, &subdb);
err = walk_sdb(ctx, &table);
ctx->deep -= 1;
}
break;

View File

@ -11,7 +11,7 @@ typedef struct walk_sdb {
} walk_sdb_t;
typedef int walk_func(const size_t pgno, const unsigned number, void *const ctx,
const int deep, const walk_sdb_t *subdb,
const int deep, const walk_sdb_t *table,
const size_t page_size, const page_type_t page_type,
const MDBX_error_t err, const size_t nentries,
const size_t payload_bytes, const size_t header_bytes,