Is there any limitation of WriteBatch? #5938
Comments
Would you be able to use DeleteRange?
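For anyone landing here, a minimal sketch of what that suggestion might look like with the C++ API; the database path and key bounds are placeholders, not taken from this issue. This approach only applies when the keys to be removed form a contiguous range in key order.

```cpp
#include <cassert>
#include "rocksdb/db.h"

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rocksdb_demo", &db);
  assert(s.ok());

  // Drops every key in [begin, end) with a single range tombstone,
  // instead of enqueueing millions of individual Delete() entries.
  s = db->DeleteRange(rocksdb::WriteOptions(), db->DefaultColumnFamily(),
                      "range_begin", "range_end");
  assert(s.ok());

  delete db;
  return 0;
}
```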
I can't think of a limit before you run out of memory. But it's not good practice to insert a large write batch. It's hard to predict the performance impact.
@siying any guidance on what constitutes "large"?
@adamretter good question. I don't know. Think about it this way: you are writing one such entry to the WAL and inserting its contents into the memtable in one operation; in the meantime, we double the memory usage. Of course, the impact will be workload specific. Maybe they don't care about slowing down other writes and have plenty of DRAM to waste. But I would be really cautious when one write batch is larger than a few MBs.
Thanks! We split the one big write batch into multiple small write batches to avoid the large memory usage.
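A rough sketch of that chunking workaround, assuming the keys to delete are already collected in memory. `kMaxKeysPerBatch` is an arbitrary illustrative value, not a RocksDB limit, and atomicity only holds within each individual batch, not across the whole set of deletes.

```cpp
#include <string>
#include <vector>
#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

// Flushes deletes in fixed-size chunks instead of building one huge batch.
rocksdb::Status DeleteKeysInChunks(rocksdb::DB* db,
                                   const std::vector<std::string>& keys) {
  constexpr size_t kMaxKeysPerBatch = 100000;  // illustrative, not a limit
  rocksdb::WriteBatch batch;
  for (const std::string& key : keys) {
    batch.Delete(key);
    if (batch.Count() == kMaxKeysPerBatch) {
      rocksdb::Status s = db->Write(rocksdb::WriteOptions(), &batch);
      if (!s.ok()) return s;
      batch.Clear();
    }
  }
  // Flush whatever is left in the final partial batch.
  if (batch.Count() > 0) {
    return db->Write(rocksdb::WriteOptions(), &batch);
  }
  return rocksdb::Status::OK();
}
```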
Hi,
I want to delete multiple key-value pairs (several million to a billion pairs) atomically from RocksDB using the WriteBatch method.
Is there any limitation of the WriteBatch method, e.g. a maximum number of key-value pairs, memory usage, or others?
Thanks
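For reference, the atomic pattern being asked about looks roughly like this sketch (the helper name and key source are hypothetical); whether a batch this large is advisable is exactly what the answers above address, since the whole batch sits in memory until the single Write() call.

```cpp
#include <string>
#include <vector>
#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

// Queues a delete for every key, then applies them in one atomic Write():
// either every delete is applied or none of them are.
rocksdb::Status AtomicDeleteAll(rocksdb::DB* db,
                                const std::vector<std::string>& keys_to_delete) {
  rocksdb::WriteBatch batch;
  for (const std::string& key : keys_to_delete) {
    batch.Delete(key);
  }
  return db->Write(rocksdb::WriteOptions(), &batch);
}
```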