Drop table panicked using kafka #1543

Closed
linanh opened this issue Jul 15, 2024 · 2 comments · Fixed by #1550
Labels
bug Something isn't working

Comments

linanh commented Jul 15, 2024

Describe this problem

Drop table panicked when using Kafka as the WAL, and then the process exited abnormally.

[src/components/panic_ext/src/lib.rs:57] thread 'horaedb-meta' panicked 'attempt to add with overflow' at "src/wal/src/message_queue_impl/wal.rs:74"
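For reference, a minimal standalone sketch of this panic class (hypothetical values, not the HoraeDB code): with arithmetic overflow checks enabled, incrementing a `u64` sequence number that already sits at its maximum aborts with exactly this message instead of wrapping.

```rust
// Minimal sketch (hypothetical): reproduces "attempt to add with overflow".
// Requires overflow checks, which are on by default in debug builds and can
// be enabled for release builds via `overflow-checks = true` in Cargo.toml.
fn main() {
    // Suppose the WAL tracks sequences as u64 and u64::MAX acts as a sentinel.
    let sequence_num: u64 = u64::MAX;

    // With overflow checks on, this addition panics with
    // "attempt to add with overflow" rather than wrapping to 0.
    let next = sequence_num + 1;
    println!("{next}");
}
```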

Server version

Version: 2.0.0
Git commit: a1869dc
Git branch: main
Opt level: 3
Rustc version: 1.77.0-nightly
Target: x86_64-unknown-linux-gnu
Build date: 2024-06-24T20:26:38.246766624Z

Steps to reproduce

  1. create table demo
  2. drop table demo

Expected behavior

No response

Additional Information

2024-07-15 09:53:10.260 ERRO [src/components/panic_ext/src/lib.rs:57] thread 'horaedb-meta' panicked 'attempt to add with overflow' at "src/wal/src/message_queue_impl/wal.rs:74"
   0: panic_ext::set_panic_hook::{{closure}}
             at horaedb/src/components/panic_ext/src/lib.rs:56:18
   1: <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/alloc/src/boxed.rs:2029:9
      std::panicking::rust_panic_with_hook
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:785:13
   2: std::panicking::begin_panic_handler::{{closure}}
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:651:13
   3: std::sys_common::backtrace::__rust_end_short_backtrace
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/sys_common/backtrace.rs:171:18
   4: rust_begin_unwind
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:647:5
   5: core::panicking::panic_fmt
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/panicking.rs:72:14
   6: core::panicking::panic
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/panicking.rs:144:5
   7: <wal::message_queue_impl::wal::MessageQueueImpl<M> as wal::manager::WalManager>::mark_delete_entries_up_to::{{closure}}
             at horaedb/src/wal/src/message_queue_impl/wal.rs:74:39
   8: <core::pin::Pin<P> as core::future::future::Future>::poll
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/future/future.rs:124:9
      analytic_engine::instance::drop::Dropper::drop::{{closure}}
             at horaedb/src/analytic_engine/src/instance/drop.rs:75:14
      analytic_engine::instance::engine::<impl analytic_engine::instance::Instance>::drop_table::{{closure}}
             at horaedb/src/analytic_engine/src/instance/engine.rs:392:31
      <analytic_engine::engine::TableEngineImpl as table_engine::engine::TableEngine>::drop_table::{{closure}}
             at horaedb/src/analytic_engine/src/engine.rs:143:67
   9: <core::pin::Pin<P> as core::future::future::Future>::poll
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/future/future.rs:124:9
      <table_engine::proxy::TableEngineProxy as table_engine::engine::TableEngine>::drop_table::{{closure}}
             at horaedb/src/table_engine/src/proxy.rs:73:71
  10: <core::pin::Pin<P> as core::future::future::Future>::poll
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/future/future.rs:124:9
      <catalog_impls::volatile::SchemaImpl as catalog::schema::Schema>::drop_table::{{closure}}
             at horaedb/src/catalog_impls/src/volatile.rs:404:14
  11: <core::pin::Pin<P> as core::future::future::Future>::poll
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/future/future.rs:124:9
      <catalog_impls::cluster_based::SchemaWithCluster as catalog::schema::Schema>::drop_table::{{closure}}
             at horaedb/src/catalog_impls/src/cluster_based.rs:104:49
  12: <core::pin::Pin<P> as core::future::future::Future>::poll
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/future/future.rs:124:9
      catalog::table_operator::TableOperator::drop_table_on_shard::{{closure}}
             at horaedb/src/catalog/src/table_operator.rs:253:14
      cluster::shard_operator::ShardOperator::drop_table::{{closure}}
             at horaedb/src/cluster/src/shard_operator.rs:318:14
      cluster::shard_set::Shard::drop_table::{{closure}}
             at horaedb/src/cluster/src/shard_set.rs:165:34
      server::grpc::meta_event_service::handle_drop_table_on_shard::{{closure}}
             at horaedb/src/server/src/grpc/meta_event_service/mod.rs:561:10
      server::grpc::meta_event_service::MetaServiceImpl::drop_table_on_shard_internal::{{closure}}::{{closure}}
             at horaedb/src/server/src/grpc/meta_event_service/mod.rs:213:57
  13: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/core.rs:311:17
      tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/loom/std/unsafe_cell.rs:14:9
      tokio::runtime::task::core::Core<T,S>::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/core.rs:300:30
      tokio::runtime::task::harness::poll_future::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/harness.rs:476:19
      <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/panic/unwind_safe.rs:272:9
      std::panicking::try::do_call
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:554:40
      std::panicking::try
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:518:19
      std::panic::catch_unwind
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panic.rs:142:14
      tokio::runtime::task::harness::poll_future
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/harness.rs:464:18
      tokio::runtime::task::harness::Harness<T,S>::poll_inner
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/harness.rs:198:27
      tokio::runtime::task::harness::Harness<T,S>::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/harness.rs:152:15
      tokio::runtime::task::raw::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/raw.rs:276:5
  14: tokio::runtime::task::raw::RawTask::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/raw.rs:200:18
      tokio::runtime::task::LocalNotified<S>::run
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/mod.rs:400:9
      tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/worker.rs:576:18
      tokio::runtime::coop::with_budget
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/coop.rs:107:5
      tokio::runtime::coop::budget
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/coop.rs:73:5
      tokio::runtime::scheduler::multi_thread::worker::Context::run_task
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/worker.rs:575:9
  15: tokio::runtime::scheduler::multi_thread::worker::Context::run
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/worker.rs:526:24
      tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/worker.rs:491:21
      tokio::runtime::context::scoped::Scoped<T>::set
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/context/scoped.rs:40:9
      tokio::runtime::context::set_scheduler::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/context.rs:176:26
      std::thread::local::LocalKey<T>::try_with
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/thread/local.rs:286:16
      std::thread::local::LocalKey<T>::with
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/thread/local.rs:262:9
      tokio::runtime::context::set_scheduler
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/context.rs:176:17
      tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/worker.rs:486:9
      tokio::runtime::context::runtime::enter_runtime
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/context/runtime.rs:65:16
      tokio::runtime::scheduler::multi_thread::worker::run
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/worker.rs:478:5
  16: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/scheduler/multi_thread/worker.rs:447:45
      <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/blocking/task.rs:42:21
      tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/core.rs:311:17
      tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/loom/std/unsafe_cell.rs:14:9
      tokio::runtime::task::core::Core<T,S>::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/core.rs:300:30
      tokio::runtime::task::harness::poll_future::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/harness.rs:476:19
      <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/panic/unwind_safe.rs:272:9
      std::panicking::try::do_call
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:554:40
      std::panicking::try
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:518:19
      std::panic::catch_unwind
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panic.rs:142:14
      tokio::runtime::task::harness::poll_future
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/harness.rs:464:18
      tokio::runtime::task::harness::Harness<T,S>::poll_inner
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/harness.rs:198:27
      tokio::runtime::task::harness::Harness<T,S>::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/harness.rs:152:15
      tokio::runtime::task::raw::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/raw.rs:276:5
  17: tokio::runtime::task::raw::RawTask::poll
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/raw.rs:200:18
      tokio::runtime::task::UnownedTask<S>::run
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/task/mod.rs:437:9
      tokio::runtime::blocking::pool::Task::run
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/blocking/pool.rs:159:9
      tokio::runtime::blocking::pool::Inner::run
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/blocking/pool.rs:513:17
      tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
             at usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.29.1/src/runtime/blocking/pool.rs:471:13
      std::sys_common::backtrace::__rust_begin_short_backtrace
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/sys_common/backtrace.rs:155:18
  18: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/thread/mod.rs:529:17
      <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/panic/unwind_safe.rs:272:9
      std::panicking::try::do_call
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:554:40
      std::panicking::try
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panicking.rs:518:19
      std::panic::catch_unwind
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/panic.rs:142:14
      std::thread::Builder::spawn_unchecked_::{{closure}}
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/thread/mod.rs:528:30
      core::ops::function::FnOnce::call_once{{vtable.shim}}
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/core/src/ops/function.rs:250:5
  19: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/alloc/src/boxed.rs:2015:9
      <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/alloc/src/boxed.rs:2015:9
      std::sys::pal::unix::thread::Thread::new::thread_start
             at rustc/6b4f1c5e782c72a047a23e922decd33e7d462345/library/std/src/sys/pal/unix/thread.rs:108:17
  20: start_thread
             at build/glibc-LcI20x/glibc-2.31/nptl/pthread_create.c:477:8
  21: clone
             at build/glibc-LcI20x/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
@linanh linanh added the bug Something isn't working label Jul 15, 2024
@Rachelint
Contributor

Thanks for the report, we are debugging it.

chunshao90 added a commit that referenced this issue Aug 2, 2024
fix: sequence overflow when dropping a table using a message queue as WAL (#1550)

## Rationale
Fix the issue of sequence overflow when dropping a table using a message
queue as WAL.
close #1543 

## Detailed Changes
Check the maximum value of sequence to prevent overflow.

## Test Plan
CI.
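
As a hedged illustration of the guard described above (hypothetical names, not the actual #1550 diff), the increment can be bounded so the addition can never overflow and the caller gets a recoverable error instead of a process-killing panic:

```rust
// Minimal sketch (hypothetical names): bound the sequence before adding.
const MAX_SEQUENCE_NUM: u64 = u64::MAX - 1;

fn next_sequence(current: u64) -> Result<u64, String> {
    if current > MAX_SEQUENCE_NUM {
        // Report a recoverable error instead of panicking the whole process.
        return Err(format!("sequence {current} exceeds the maximum"));
    }
    // Safe: current <= u64::MAX - 1, so `current + 1` cannot overflow.
    Ok(current + 1)
}

fn main() {
    assert_eq!(next_sequence(41), Ok(42));
    assert!(next_sequence(u64::MAX).is_err());
}
```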
zealchen pushed a commit to zealchen/incubator-horaedb that referenced this issue Aug 8, 2024
fix: sequence overflow when dropping a table using a message queue as WAL (apache#1550)
@chunshao90
Contributor

#1550 has fixed this bug; you can recheck it. @linanh

LeslieKid pushed a commit to LeslieKid/horaedb that referenced this issue Sep 25, 2024
fix: sequence overflow when dropping a table using a message queue as WAL (apache#1550)
LeslieKid added a commit to LeslieKid/horaedb that referenced this issue Sep 27, 2024
refactor: partitioned_lock's elaboration (apache#1540)

fix: sequence overflow when dropping a table using a message queue as WAL (apache#1550)
close apache#1543