chore: MaxTableSize has been renamed to BaseTableSize #2038

Merged · 1 commit · Jan 6, 2024
2 changes: 1 addition & 1 deletion db.go
Original file line number Diff line number Diff line change
Expand Up @@ -162,7 +162,7 @@ func checkAndSetOptions(opt *Options) error {
// the transaction APIs. Transaction batches entries into batches of size opt.maxBatchSize.
if opt.ValueThreshold > opt.maxBatchSize {
return errors.Errorf("Valuethreshold %d greater than max batch size of %d. Either "+
"reduce opt.ValueThreshold or increase opt.MaxTableSize.",
"reduce opt.ValueThreshold or increase opt.BaseTableSize.",
opt.ValueThreshold, opt.maxBatchSize)
}
// ValueLogFileSize should be stricly LESS than 2<<30 otherwise we will
Expand Down
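For context, this validation runs inside `Open`. Below is a minimal sketch of how a caller would act on the corrected message, assuming the badger v4 import path; the directory and sizes are illustrative, not recommendations:

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Either lower ValueThreshold or raise the table size the error message
	// now points at (BaseTableSize). The values here are illustrative only.
	opts := badger.DefaultOptions("/tmp/badger").
		WithValueThreshold(1 << 20). // larger values are stored in the value log
		WithBaseTableSize(8 << 20)   // 8 MB base-level tables
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err) // this is where the Valuethreshold/maxBatchSize error surfaces
	}
	defer db.Close()
}
```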
2 changes: 1 addition & 1 deletion db_test.go
Original file line number Diff line number Diff line change
Expand Up @@ -1800,7 +1800,7 @@ func TestLSMOnly(t *testing.T) {

// Also test for error, when ValueThresholdSize is greater than maxBatchSize.
dopts.ValueThreshold = LSMOnlyOptions(dir).ValueThreshold
// maxBatchSize is calculated from MaxTableSize.
// maxBatchSize is calculated from BaseTableSize.
dopts.MemTableSize = LSMOnlyOptions(dir).ValueThreshold
_, err = Open(dopts)
require.Error(t, err, "db creation should have been failed")
Expand Down
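A hedged sketch of the scenario this test covers, written against the public API rather than the internal test harness (the path is illustrative): shrinking `MemTableSize` shrinks the derived `maxBatchSize`, so `Open` should fail with the error shown in db.go above.

```go
package main

import (
	"fmt"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	opts := badger.LSMOnlyOptions("/tmp/badger-lsm-only") // illustrative path
	// Same trick as the test: make the memtable as small as the value
	// threshold so the derived maxBatchSize ends up below ValueThreshold.
	opts.MemTableSize = opts.ValueThreshold
	if _, err := badger.Open(opts); err != nil {
		fmt.Println("Open failed as expected:", err)
	}
}
```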
2 changes: 1 addition & 1 deletion docs/content/faq/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -57,7 +57,7 @@ workloads, you should be using the `Transaction` API.

If you're using Badger with `SyncWrites=false`, then your writes might not be written to value log
and won't get synced to disk immediately. Writes to LSM tree are done inmemory first, before they
get compacted to disk. The compaction would only happen once `MaxTableSize` has been reached. So, if
get compacted to disk. The compaction would only happen once `BaseTableSize` has been reached. So, if
you're doing a few writes and then checking, you might not see anything on disk. Once you `Close`
the database, you'll see these writes on disk.

Expand Down
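The FAQ answer implies the obvious knob here is `SyncWrites`; calling `db.Sync()` is another option the API offers. A minimal sketch assuming the badger v4 API (path illustrative):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Option 1: open with SyncWrites enabled so every write is synced to disk.
	opts := badger.DefaultOptions("/tmp/badger").WithSyncWrites(true)
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("key"), []byte("value"))
	}); err != nil {
		log.Fatal(err)
	}

	// Option 2: with SyncWrites=false, force a sync at a point of your choosing.
	if err := db.Sync(); err != nil {
		log.Fatal(err)
	}
}
```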
2 changes: 1 addition & 1 deletion docs/content/get-started/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -603,7 +603,7 @@ the `Options` struct that is passed in when opening the database using
- If you modify `Options.NumMemtables`, also adjust `Options.NumLevelZeroTables` and
`Options.NumLevelZeroTablesStall` accordingly.
- Number of concurrent compactions (`Options.NumCompactors`)
- Size of table (`Options.MaxTableSize`)
- Size of table (`Options.BaseTableSize`)
- Size of value log file (`Options.ValueLogFileSize`)

If you want to decrease the memory usage of Badger instance, tweak these
Expand Down
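Not part of this diff, but for orientation: a sketch combining the knobs that list names, using the `With*` setters from options.go (values illustrative, not recommendations):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Each setter below corresponds to one bullet in the list above.
	opts := badger.DefaultOptions("/tmp/badger").
		WithNumMemtables(2).
		WithNumLevelZeroTables(2).
		WithNumLevelZeroTablesStall(4).
		WithNumCompactors(2).
		WithBaseTableSize(2 << 20).    // 2 MB tables
		WithValueLogFileSize(64 << 20) // 64 MB value log files
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```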
2 changes: 1 addition & 1 deletion options.go
Original file line number Diff line number Diff line change
Expand Up @@ -463,7 +463,7 @@ func (opt Options) WithLoggingLevel(val loggingLevel) Options {
return opt
}

// WithBaseTableSize returns a new Options value with MaxTableSize set to the given value.
// WithBaseTableSize returns a new Options value with BaseTableSize set to the given value.
//
// BaseTableSize sets the maximum size in bytes for LSM table or file in the base level.
//
Expand Down
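For reference, the two setters visible in this hunk chained together; the size is an arbitrary example:

```go
package main

import badger "github.com/dgraph-io/badger/v4"

func main() {
	// Illustrative only; the right BaseTableSize is workload-dependent.
	opts := badger.DefaultOptions("/tmp/badger").
		WithLoggingLevel(badger.WARNING).
		WithBaseTableSize(8 << 20)
	_ = opts // pass to badger.Open in real use
}
```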
2 changes: 1 addition & 1 deletion stream_writer_test.go
Original file line number Diff line number Diff line change
Expand Up @@ -349,7 +349,7 @@ func TestStreamWriter6(t *testing.T) {
}
}

// list has 3 pairs for equal keys. Since each Key has size equal to MaxTableSize
// list has 3 pairs for equal keys. Since each Key has size equal to BaseTableSize
// we would have 6 tables, if keys are not equal. Here we should have 3 tables.
sw := db.NewStreamWriter()
require.NoError(t, sw.Prepare(), "sw.Prepare() failed")
Expand Down
Loading
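For orientation, a hedged sketch of the StreamWriter lifecycle this test drives, assuming an open `*badger.DB` named `db`; the serialized KV encoding that `Write` consumes is version-specific, so it is elided here:

```go
// Lifecycle only: Prepare, then one or more Write calls, then Flush.
sw := db.NewStreamWriter()
if err := sw.Prepare(); err != nil {
	log.Fatal(err)
}
// ... sw.Write(...) with batches of serialized key-value pairs ...
if err := sw.Flush(); err != nil {
	log.Fatal(err)
}
```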