## Rationale
Close #1205
## Detailed Changes
- Add `TableStatus::Closed` to denote that the table is closed, and cancel background jobs (such as compaction) for closed tables, as sketched below.
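
A minimal sketch of the idea, not the actual CeresDB code: a `TableStatus::Closed` flag is checked by the background scheduler so a closed table's compaction is skipped instead of writing the manifest. `TableData`, `CompactionScheduler`, and `maybe_compact` are hypothetical names used only for illustration.

```rust
// Hypothetical sketch: gate background work on a table's status.
use std::sync::{Arc, Mutex};

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum TableStatus {
    Open,
    Closed,
}

struct TableData {
    name: String,
    status: Mutex<TableStatus>,
}

impl TableData {
    fn is_closed(&self) -> bool {
        *self.status.lock().unwrap() == TableStatus::Closed
    }

    // Mark the table closed so queued background jobs bail out.
    fn mark_closed(&self) {
        *self.status.lock().unwrap() = TableStatus::Closed;
    }
}

struct CompactionScheduler;

impl CompactionScheduler {
    // Skip (effectively cancel) compaction for a table that was closed after
    // the job was queued, so a closed shard never touches the manifest again.
    fn maybe_compact(&self, table: &Arc<TableData>) {
        if table.is_closed() {
            println!("skip compaction, table {} is closed", table.name);
            return;
        }
        println!("compacting table {}", table.name);
        // ... run the real compaction and write the manifest here ...
    }
}

fn main() {
    let table = Arc::new(TableData {
        name: "demo".to_string(),
        status: Mutex::new(TableStatus::Open),
    });
    let scheduler = CompactionScheduler;

    scheduler.maybe_compact(&table); // runs
    table.mark_closed();             // close shard -> close table
    scheduler.maybe_compact(&table); // skipped
}
```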
## Test Plan
Describe this problem
In our deployments with OBKV as the WAL implementation, we found the following errors:
Server version
```
$ ceresdb-server --version
CeresDB Server
Version: 1.2.6-alpha
Git commit: 7f8faff
Git branch: main
Opt level: 3
Rustc version: 1.69.0-nightly
Target: x86_64-unknown-linux-gnu
Build date: 2023-08-17T09:04:30.077953531Z
```
Steps to reproduce
After some discussion with @ShiKaiWi and @Rachelint: this problem arises when a shard is closed while a compaction of a table belonging to that shard is still running, which means two nodes may end up writing the same manifest file.
Expected behavior
No error
Additional Information
In theory, when a close shard request is received, the node should release all of the shard's resources (such as the WAL, manifest, and object_store handles) before finishing the close; only once all of those resources are released can the shard be opened on a new node.
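
As a rough illustration of this ordering, the sketch below (hypothetical `Shard` type and method names, not the actual CeresDB implementation) cancels background jobs first, then releases the WAL, manifest, and object_store handles, and only then treats the shard as closed so another node can safely open it.

```rust
// Hypothetical sketch of the expected close-shard ordering.
struct Shard {
    id: u32,
    tables: Vec<String>,
}

impl Shard {
    fn close(&mut self) {
        // 1. Stop accepting new background jobs and cancel running ones
        //    (e.g. compaction) for every table in the shard.
        for table in &self.tables {
            println!("cancel background jobs of table {table}");
        }

        // 2. Release per-shard resources. Nothing may write the manifest
        //    after this point.
        println!("release WAL region of shard {}", self.id);
        println!("release manifest of shard {}", self.id);
        println!("release object_store handles of shard {}", self.id);

        // 3. Only now is the close complete; the shard can be opened elsewhere.
        println!("shard {} closed", self.id);
    }
}

fn main() {
    let mut shard = Shard {
        id: 0,
        tables: vec!["demo".to_string()],
    };
    shard.close();
}
```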