WIP: Liquidity Pools v2 #1909
Conversation
* wip: add new msgs + reorder
* refactor: apply renaming
* fix: ITs
* feat: UpdateRestriction message
* fix: cancel_unprocessed_investment IT
* fix: fmt
* fix: clippy
* tests: reanchor solidity to reorder branch
* feat: apply hook to AddTranche msg
* tests: fix pool_management ITs
* wip: fix lp investments
* fmt
* fix: Tranche namespace
* refactor: remove fulfilled from FulfilledRedeemRequest
* chore: update lp submodule
* minor cleanup
* chore: update lp submodule
* chore: add missing cleanup
* fixes + ignore failing tests
* fmt
* tests: fix message indices
* add uts (#1905)
* main changes for FI
* fix correlation precision
* minor renames
* update investment UTs
* update redemption UTs
* miscellaneous UTs
* minor renames
* simplify correlation and embed it into the only needed place
* doc change
* remove unused bound
* swapping calls into entities.rs file
* merge SwapDone methods into FulfilledSwapHook
* fix events
* working without pallet-swaps
* remove boilerplate for removing entries
* minor msg change
* minor simplification
* correct fulfilled sum after last collect
* check OrderIdToSwapId storage
* sending the message inside Info closure is not really a problem
* send msgs from the entities
* remove same currency check in impl.rs
* unify hooks
* remove pallet-swaps
* minor fmt
* add docs
* add architecture diagram
* return cancelled foreign amount from FI interface
* update liquidity-pools
* add messages to diagram
* implement hooks
* fix runtimes
* adapt integration-tests
* fix docs
* fix clippy
Marked this PR as ready for review because we want to be sure this branch always has a green CI.
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1909      +/-   ##
==========================================
- Coverage   47.92%   47.81%     -0.12%
==========================================
  Files         181      184         +3
  Lines       13159    13036       -123
==========================================
- Hits         6307     6233        -74
+ Misses       6852     6803        -49
```

☔ View full report in Codecov by Sentry.
No worries at all! I think the two tests from … I will try to get this green, and then fix whatever remains in your PR.
* chore: update submodule to latest `main` 6d7f242c0dd83b1b5a4f6d506370a1f3ecbef9ce
* wip: fix ITs
* chore: update submodule
* fix: remove sender param from `Transfer*` messages
* chore: cleanup
* docs: setup evm
* fix: msg tests
* fix: more ITs
* fix: missing refactor after rebase
* chore: update submodule to 223a0f36edabc675f8c74c47b20e366178df7ca3
* chore: improvements
* fmt
* Apply suggestions from code review
* chore: bump spec_version
* fmt: taplo
* custom serialize/deserialize for batch
* compiling solution
* serialization/deserialization working for batch message
* remove gmpf changes
* add batch nested protection
* faster serialization for submessages
* correct termination
* add tests for deserialize batch of batch
* add into_iter
* remove unused Box and add pack methods
* fix non_std compilation
* non_std
* fix max_encoded_len recursion issue
* fix submodules
* reduce batch limit
* refactor: add domain hook address storage
* tests: add gateway lp admin account checks
* refactor: use GetByKey
* fix: outboundq mock
* refactor: hook 20 bytes instead of 32
I think the conflicts are due to #1926, and I'll fix them on Monday when I'm back.
Rebased! CI should pass 🤞🏻
Done with my self-review. Some comments/reminders below.
pallets/liquidity-pools/src/tests.rs
```rust
Permissions::mock_has(|_, _, _| true);
Pools::mock_pool_exists(|_| true);
Pools::mock_tranche_exists(|_, _| true);
AssetRegistry::mock_metadata(|_| Some(util::default_metadata()));
```
NIT (I think it comes from a merge where the `config_mocks()` call was not applied):
```suggestion
Permissions::mock_has(|_, _, _| true);
Pools::mock_pool_exists(|_| true);
Pools::mock_tranche_exists(|_, _| true);
AssetRegistry::mock_metadata(|_| Some(util::default_metadata()));
config_mocks();
```
In `config_mocks()`, we mock a hook address for the `AddTranche` message. This test requires that hook to not have been set. If we execute `config_mocks()`, then we need to revert https://github.com/centrifuge/centrifuge-chain/pull/1909/files/5a63633955ba13282a2448dd2a15262e5173c9ba#diff-57577a9a8bc154a3e3b81af63e11d2b863c6716120411b1480117c8c44ee02d5R446-R452
The idea is to overwrite the mock to make it fail, so:

```rust
config_mocks();
Gateway::mock_get(|_| None);
```

makes the tests work as expected.

The idea is to not copy-paste all the initialization for all these tests, and to keep an order in how the mocks are defined. I did that refactor, but I think this test was added later and was skipped by me in the merge.
I will add this to my not-yet-published cleanup PR! Sorry for not following the status quo here 🥲
I changed it in a commit after your work on this, so you indeed followed the status quo!!
* pallets: Add LP Gateway queue pallet
* lp-gateway-queue: Add benchmarks
* integration-tests: Add LP gateway tests
* docs: Update LP gateway queue entry
* lp-gateway-queue: Use default weight for PostDispatchInfo
* lp-gateway-queue: Add DEFAULT_WEIGHT_REF_TIME const, extract message processing logic, use BaseArithmetic for nonce
* runtime: Add defensive weights for LP gateway queue
* lint: Obey
* taplo: Obey
* pallet: Use DispatchResult for extrinsics
* runtime: Update benchmarks and weight info
* benchmarks: Add default for Message type (wip)
* pallet: Add Default bound to Message type
BTW @wischli I see 6 tests ignored. Did you say 5 in the sync?
* fix: add remark call to borrow proxy
* fix: add missing message fields
* chore: bump to v0.13.3
* chore: update submodule
* chore: enable fixed tests

Co-authored-by: Frederik Gartenmeister <[email protected]>
Looks super clean, except for the leftovers from me, which @lemunozm already pointed out. Working on a cleanup PR which will also fix the last failing test, `transfer_tranche_tokens_domain_to_local_to_domain`.
```rust
#[benchmark]
fn process_message() -> Result<(), BenchmarkError> {
    let caller: T::AccountId = account("acc_0", 0, 0);
    let message = T::Message::default();
```
Not sure if this was raised before, but eventually we need to benchmark the heaviest message, so that the benchmark covers the worst-case weight of processing messages. For now, this is fine and a good improvement over not having any benchmarking at all!
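As a rough sketch of what that could look like, assuming a hypothetical `max_sized_message()` helper on the `Message` type (not part of this PR) that builds the largest encodable value:

```rust
// Hypothetical worst-case benchmark: `max_sized_message()` is an assumed
// helper that returns the largest encodable message (e.g. the variant with
// the most fields, or a batch filled to its limit).
#[benchmark]
fn process_message() -> Result<(), BenchmarkError> {
    let caller: T::AccountId = account("acc_0", 0, 0);
    let message = T::Message::max_sized_message();

    #[extrinsic_call]
    process_message(RawOrigin::Signed(caller), message);

    Ok(())
}
```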
That will be tricky since the message itself is irrelevant given that it gets passed to a mock.
Open to suggestions nevertheless; there's plenty of room for improvement here.
> it gets passed to a mock

Note that when the benchmark is executed for the real runtimes, the mock is no longer used. @wischli we already talked about adding a PoV component for the `MaxEncodedLen` to the resulting weights.
> Note that when the benchmark is executed for the real runtimes, the mock is no longer used.
Oh, that explains a lot then. Where does this magical switch from mock runtime to actual runtime happen?
Yes, we should definitely add that there before merging this.
The magic happens here: https://github.com/centrifuge/centrifuge-chain/blob/main/runtime/centrifuge/src/lib.rs#L2775

That macro calls the benchmarks of those pallets using the real runtime instead of the mock. That's why the benchmarking is sometimes so difficult to get right: you need to be able to create a benchmark that works for any runtime configuration.
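For illustration, the wiring looks roughly like the sketch below; the pallet list here is an example, not the exact one in `runtime/centrifuge/src/lib.rs`:

```rust
// Illustrative sketch: the runtime declares which pallets to benchmark, and
// the benchmarking machinery runs their `#[benchmark]` functions against the
// real `Runtime` type instead of a mock.
frame_benchmarking::define_benchmarks!(
    [frame_system, SystemBench::<Runtime>]
    [pallet_liquidity_pools_gateway_queue, LiquidityPoolsGatewayQueue]
);
```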
Right now, the maximum message is no more than 256 bytes, so we're "safe". Maybe just adding this to the extrinsic annotation is enough:

```rust
#[pallet::weight(T::WeightInfo::process_message().saturating_add(Weight::from_parts(0, T::Message::max_encoded_len() as u64)))]
```
I would not go with a huge change because the batch process will need some special treatment regarding this. I'll do it in the batch PR once I rebase over this.
Thank you for the pointer, I'll expand that macro and take a look.
As for the `process_message` weight: with the changes that I'm currently working on, this will be a bit easier, since that extrinsic will now submit the message to the queue. It will then be processed `on_idle` and return a `DispatchResultWithPostInfo` that can have more granular weights, as per this.
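A minimal sketch of that split, under assumed names (`NextNonce` and the exact call shape are not from this PR):

```rust
// Sketch: the extrinsic only enqueues the message; the actual processing is
// deferred to on_idle. `NextNonce` is an assumed nonce storage item.
#[pallet::weight(T::WeightInfo::process_message())]
pub fn process_message(
    origin: OriginFor<T>,
    message: T::Message,
) -> DispatchResultWithPostInfo {
    ensure_signed(origin)?;

    let nonce = NextNonce::<T>::mutate(|n| {
        *n = n.saturating_add(One::one());
        *n
    });
    MessageQueue::<T>::insert(nonce, message);

    // A refined version could return `Ok(Some(actual_weight).into())`
    // for more granular weight refunds.
    Ok(().into())
}
```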
If you're able to return the weight directly there, then I would say we're fine.
BTW (NIT): if you're only interested in the `weight`, maybe it's interesting to return a tuple `(DispatchResult, Weight)` instead. That way, you can get rid of the unused `pay_fees`, and you have the `weight` for both successful processing and failures. I was experimenting with that in the batch PR; maybe it's useful in this part too.
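A minimal sketch of that shape (the trait name, the `Processor` associated type on `Config`, and a `Clone` bound on the message are assumptions, not this PR's API):

```rust
// Sketch: processing reports its outcome together with the weight it
// consumed, so failures are weighed just like successes.
pub trait MessageProcessor {
    type Message;

    fn process(msg: Self::Message) -> (DispatchResult, Weight);
}

// Hypothetical on_idle consumer: accumulate weight regardless of outcome
// and park failures for a later retry.
fn service_queue<T: Config>(max_weight: Weight) -> Weight {
    let mut used = Weight::zero();

    for (nonce, message) in MessageQueue::<T>::drain() {
        let (result, weight) = T::Processor::process(message.clone());
        used = used.saturating_add(weight);

        if let Err(err) = result {
            FailedMessageQueue::<T>::insert(nonce, (message, err));
        }
        if used.any_gte(max_weight) {
            break;
        }
    }

    used
}
```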
Awesome idea, it should make our lives easier.
```rust
// Extra weight breakdown:
//
// 1 read for the message
// 1 write for the event
// 1 write for the message removal
```
Love this! One note: dispatching events does not count as a write, but we can keep the weight increment here.
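For reference, a sketch of how that breakdown maps onto the weight (keeping the event deposit as a conservative extra write, per the note above):

```rust
// 1 read (load the message) + 2 writes (message removal, plus the event
// deposit kept as a conservative extra write).
let weight = T::WeightInfo::process_message()
    .saturating_add(T::DbWeight::get().reads_writes(1, 2));
```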
```rust
/// Storage for messages that will be processed during the `on_idle` hook.
#[pallet::storage]
#[pallet::getter(fn message_queue)]
pub type MessageQueue<T: Config> = StorageMap<_, Blake2_128Concat, T::MessageNonce, T::Message>;

/// Storage for messages that failed during processing.
#[pallet::storage]
#[pallet::getter(fn failed_message_queue)]
pub type FailedMessageQueue<T: Config> =
    StorageMap<_, Blake2_128Concat, T::MessageNonce, (T::Message, DispatchError)>;
```
I am wondering whether we need another storage item that maps a message (hash) to its nonce? Has this been discussed already?
cc @cdamian
Yes, the third part of the multi-router work has logic that involves hashes; see a potential solution here.
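Purely as an illustration of the idea (the storage name and hashing scheme are assumptions, not the final design):

```rust
// Hypothetical reverse lookup, maintained alongside MessageQueue, so a
// message can be found again by its hash.
#[pallet::storage]
pub type MessageHashToNonce<T: Config> =
    StorageMap<_, Blake2_128Concat, T::Hash, T::MessageNonce>;
```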
```rust
};

#[test_runtimes(all)]
fn submit_and_process<T: Runtime + FudgeSupport>() {
```
Curious question for @cdamian: are there any other scenarios you have in mind for ITs, or does this suffice for now?
For now this suffices, as the LP gateway doesn't actually process anything. In the second part, which I'm tackling right now, I will extend this whole suite.
Awesome!
From my experience, testing the biggest happy path in ITs is enough. Anything else can be tested with UTs in a much simpler way.
* increase version for foreign investments
* fix investment issue
* fix solidity call for transfers_tokens
* fix tests
* minimize required tranche investors for IT
* fix docs
* fix docs
* docs: fix hyperlink
* refactor: enable ITs for centrifuge + dev runtimes (#1938)
* fix: enable ITs for centrifuge + dev runtimes
* fmt
* fix: revert some centrifuge ITs
* fmt
* revert centrifuge addition to ITs

Co-authored-by: William Freudenberger <[email protected]>
Finally, this is green! I think we can just rebase and merge.
Approving our great teamwork here! Once CI is green, we can merge this!
Description
LP-V2 base branch where we've merged the individual PRs. For more information, check the individual PRs merged into this branch.