
feat!: data commitment ranges catchup #1763

Merged
13 commits merged into celestiaorg:main on May 17, 2023

Conversation

rach-id
Member

@rach-id rach-id commented May 14, 2023

Overview

Closes #1719

@rach-id rach-id added the x/qgb label May 14, 2023
@rach-id rach-id requested a review from evan-forbes as a code owner May 14, 2023 11:37
@rach-id rach-id self-assigned this May 14, 2023
@rach-id rach-id changed the title feat: data commitment ranges catchup feat!: data commitment ranges catchup May 14, 2023
@MSevey MSevey requested review from a team and MSevey and removed request for a team May 14, 2023 11:37
@codecov-commenter

Codecov Report

Merging #1763 (4c44ebe) into main (1b5263a) will decrease coverage by 0.19%.
The diff coverage is 31.64%.

@@            Coverage Diff             @@
##             main    #1763      +/-   ##
==========================================
- Coverage   52.05%   51.86%   -0.19%     
==========================================
  Files          93       93              
  Lines        6049     6125      +76     
==========================================
+ Hits         3149     3177      +28     
- Misses       2576     2618      +42     
- Partials      324      330       +6     
Impacted Files                            Coverage Δ
app/process_proposal.go                   0.00% <0.00%> (ø)
x/qgb/keeper/query_data_commitment.go     0.00% <0.00%> (ø)
x/qgb/keeper/keeper_data_commitment.go    41.58% <3.57%> (-11.59%) ⬇️
pkg/square/builder.go                     78.02% <12.50%> (-4.08%) ⬇️
x/qgb/abci.go                             61.11% <84.61%> (+10.09%) ⬆️

... and 1 file with indirect coverage changes

@MSevey MSevey requested a review from a team May 14, 2023 13:06
rootulp previously approved these changes May 14, 2023
x/qgb/abci.go (outdated review thread, resolved)
x/qgb/abci_test.go (outdated review thread, resolved)
Comment on lines +401 to +405
// we should have 19 data commitments created in the above setup
// - window 400: [1, 400], [401, 800], [801, 1200]
// - window 100: [1201, 1300], [1301, 1400], [1401, 1500], [1501, 1600], [1601, 1700], [1701, 1800], [1801, 1900]
// - window 1000: [1901, 2900]
// - window 111: [2901, 3011], [3012, 3122], [3123,3233], [3234, 3344], [3345, 3455], [3456, 3566], [3567, 3677], [3678, 3788]
Collaborator

[optional] this comment is soooo helpful so I converted it into an explicit want. Totally optional:

// TestDataCommitmentCreationCatchup tests the data commitment creation catchup mechanism.
// It runs `abci.EndBlocker` on all the heights while changing the data commitment window
// on several occasions, to check that, at the end of the test, the data commitments cover
// all the needed ranges.
func TestDataCommitmentCreationCatchup(t *testing.T) {
	input, ctx := testutil.SetupFiveValChain(t)
	qk := input.QgbKeeper
	ctx = ctx.WithBlockHeight(1)

	executeHeights := func(beginHeight int64, endHeight int64) {
		for i := beginHeight; i <= endHeight; i++ {
			ctx = ctx.WithBlockHeight(i)
			qgb.EndBlocker(ctx, *qk)
		}
	}

	// from height 1 to 1500 with a window of 400
	qk.SetParams(ctx, types.Params{DataCommitmentWindow: 400})
	executeHeights(1, 1500)

	// change window to 100 and execute up to 1920
	qk.SetParams(ctx, types.Params{DataCommitmentWindow: 100})
	executeHeights(1501, 1920)

	// change window to 1000 and execute up to 3500
	qk.SetParams(ctx, types.Params{DataCommitmentWindow: 1000})
	executeHeights(1921, 3500)

	// change window to 111 and execute up to 3800
	qk.SetParams(ctx, types.Params{DataCommitmentWindow: 111})
	executeHeights(3501, 3800)

	// check if a data commitment was created
	hasDataCommitment, err := qk.HasDataCommitmentInStore(ctx)
	require.NoError(t, err)
	require.True(t, hasDataCommitment)

	// get the last attestation nonce
	lastAttestationNonce := qk.GetLatestAttestationNonce(ctx)

	got := []types.DataCommitment{}
	for i := uint64(1); i <= lastAttestationNonce; i++ {
		att, found, err := qk.GetAttestationByNonce(ctx, i)
		require.NoError(t, err)
		require.True(t, found)
		dc, ok := att.(*types.DataCommitment)
		if !ok {
			continue
		}
		got = append(got, *dc)
	}

	// We should have 19 data commitments created in the above setup
	// - window 400: [1, 400], [401, 800], [801, 1200]
	// - window 100: [1201, 1300], [1301, 1400], [1401, 1500], [1501, 1600], [1601, 1700], [1701, 1800], [1801, 1900]
	// - window 1000: [1901, 2900]
	// - window 111: [2901, 3011], [3012, 3122], [3123,3233], [3234, 3344], [3345, 3455], [3456, 3566], [3567, 3677], [3678, 3788]
	want := []types.DataCommitment{
		{
			Nonce:      2, // nonce 1 is the valset attestation
			BeginBlock: 1,
			EndBlock:   400,
		},
		{
			Nonce:      3,
			BeginBlock: 401,
			EndBlock:   800,
		},
		{
			Nonce:      4,
			BeginBlock: 801,
			EndBlock:   1200,
		},
		{
			Nonce:      5,
			BeginBlock: 1201,
			EndBlock:   1300,
		},
		{
			Nonce:      6,
			BeginBlock: 1301,
			EndBlock:   1400,
		},
		{
			Nonce:      7,
			BeginBlock: 1401,
			EndBlock:   1500,
		},
		{
			Nonce:      8,
			BeginBlock: 1501,
			EndBlock:   1600,
		},
		{
			Nonce:      9,
			BeginBlock: 1601,
			EndBlock:   1700,
		},
		{
			Nonce:      10,
			BeginBlock: 1701,
			EndBlock:   1800,
		},
		{
			Nonce:      11,
			BeginBlock: 1801,
			EndBlock:   1900,
		},
		{
			Nonce:      12,
			BeginBlock: 1901,
			EndBlock:   2900,
		},
		{
			Nonce:      13,
			BeginBlock: 2901,
			EndBlock:   3011,
		},
		{
			Nonce:      14,
			BeginBlock: 3012,
			EndBlock:   3122,
		},
		{
			Nonce:      15,
			BeginBlock: 3123,
			EndBlock:   3233,
		},
		{
			Nonce:      16,
			BeginBlock: 3234,
			EndBlock:   3344,
		},
		{
			Nonce:      17,
			BeginBlock: 3345,
			EndBlock:   3455,
		},
		{
			Nonce:      18,
			BeginBlock: 3456,
			EndBlock:   3566,
		},
		{
			Nonce:      19,
			BeginBlock: 3567,
			EndBlock:   3677,
		},
		{
			Nonce:      20,
			BeginBlock: 3678,
			EndBlock:   3788,
		},
	}
	assert.Equal(t, 19, len(got))
	assert.Equal(t, want, got)
}

x/qgb/keeper/keeper_data_commitment.go (outdated review thread, resolved)
rach-id and others added 2 commits May 14, 2023 21:38
@MSevey MSevey requested a review from a team May 14, 2023 19:39
cmwaters previously approved these changes May 15, 2023
Contributor

@cmwaters cmwaters left a comment

Nice work!

Just a few minor comments

x/qgb/abci.go (outdated review thread, resolved)
Comment on lines 17 to +21
 func EndBlocker(ctx sdk.Context, k keeper.Keeper) {
-	handleDataCommitmentRequest(ctx, k)
 	// we always want to create the valset at first so that if there is a new validator set, then it is
 	// the one responsible for signing from now on.
 	handleValsetRequest(ctx, k)
+	handleDataCommitmentRequest(ctx, k)
Contributor

Can you explain to me a little more the rationale for switching this? Is one dependent on the other?

Member Author

This is done mainly for the case of a hard fork where there is a manual change to the validator set. It ensures that, right after the chain restarts, a new valset representing the new validator set is created, and that valset is the one used to sign the new data commitments from then on.

Otherwise, if a new data commitment is created straight after the hard fork, that data commitment would need to be signed by the previous validator set, which may no longer exist.

So, it's better to always check whether there is a change to the validator set, and account for it, before creating any data commitment.

Contributor

So the validator set of that height signs over the data hashes for the last n heights? (as opposed to the validator set at the previous height)

Member Author

Yes, in the event of a validator set change, we want to switch directly to the new one and use it for signatures.

x/qgb/abci_test.go (review thread, resolved)
Co-authored-by: Callum Waters <[email protected]>
@MSevey MSevey requested a review from a team May 15, 2023 08:33
@rach-id rach-id requested review from cmwaters and rootulp May 15, 2023 15:07
Member

@evan-forbes evan-forbes left a comment

LGTM 👍 Left a comment about adding some more comments, and one potential future change for a very slight optimization.

Comment on lines +35 to +62
// this will keep executing until all the needed data commitments are created and we catchup to the current height
for {
hasLastDataCommitment, err := k.HasDataCommitmentInStore(ctx)
if err != nil {
panic(err)
}
if hasLastDataCommitment {
// if the store already has a data commitment, we use it to check if we need to create a new data commitment
lastDataCommitment, err := k.GetLastDataCommitment(ctx)
if err != nil {
panic(err)
}
if ctx.BlockHeight()-int64(lastDataCommitment.EndBlock) >= int64(k.GetDataCommitmentWindowParam(ctx)) {
setDataCommitmentAttestation()
} else {
// the needed data commitments are already created and we need to wait for the next window to elapse
break
}
} else {
// if the store doesn't have a data commitment, we check if the window has passed to create a new data commitment
if ctx.BlockHeight() >= int64(k.GetDataCommitmentWindowParam(ctx)) {
setDataCommitmentAttestation()
} else {
// the first data commitment window hasn't elapsed yet to create a commitment
break
}
}
}
Member

Nice, thanks for adding the purpose of the loop at the top and the comments at the break points! We might also want to add some docs at the top for the two conditions in which this loop breaks.

afaiu:

  1. If hasLastDataCommitment == true, and the number of blocks since the last data commitment's end block is < the data commitment window.

  2. If hasLastDataCommitment == false, and the height is < the data commitment window.

The first condition is often the one that gets hit, so it's pretty easy to find in the tests, but do you mind adding a quick comment on line 371 where we're testing the loop functionality? Also, I think I missed it, but are we testing the second exit condition?

I think there's a very minor gas optimization where we could load the data commitment window once instead of reading from state each loop. I know we're not the ones paying gas since it's in EndBlock, and that shouldn't be breaking, so tbh it's probably not even worth changing until we actually test the diff.
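
A minimal sketch of that optimization, assuming the same keeper helpers shown in the snippet above (an editor's illustration, not code from this PR): the window parameter is read once per EndBlocker invocation and reused inside the loop.

// hypothetical refactor: load the data commitment window once instead of
// reading it from state on every loop iteration
window := int64(k.GetDataCommitmentWindowParam(ctx))
for {
	hasLastDataCommitment, err := k.HasDataCommitmentInStore(ctx)
	if err != nil {
		panic(err)
	}
	if hasLastDataCommitment {
		lastDataCommitment, err := k.GetLastDataCommitment(ctx)
		if err != nil {
			panic(err)
		}
		if ctx.BlockHeight()-int64(lastDataCommitment.EndBlock) < window {
			// the needed data commitments are already created; wait for the next window to elapse
			break
		}
	} else if ctx.BlockHeight() < window {
		// the first data commitment window hasn't elapsed yet
		break
	}
	setDataCommitmentAttestation()
}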

Member Author

but do you mind adding a quick comment on line 371 where we're testing the loop functionality?

I guess the line number is incorrect

Also I think I missed it but are we testing the second exit condition?

I think it is tested implicitly by specifying the nonce. This means that no data commitment was created in that range.

I think there's a very minor gas optimization where we could load the data commitment window once instead of reading from state each loop. I know we're not the ones paying gas since its in endblock, and that shouldn't be breaking, so tbh its probably not even worth changing until we actually test the diff.

I will create a separate issue for this. But, who pays for end blocker? There are no transactions for the QGB module... So you mean the validators will pay for it?
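
For reference, an explicit test of the second exit condition could look roughly like the sketch below (an editor's hedged example reusing the helpers from the test pasted above; the test name and window value are made up, and this is not code from the PR):

// TestNoDataCommitmentBeforeFirstWindow checks that no data commitment is
// created while the chain height is still below the first data commitment window.
func TestNoDataCommitmentBeforeFirstWindow(t *testing.T) {
	input, ctx := testutil.SetupFiveValChain(t)
	qk := input.QgbKeeper

	qk.SetParams(ctx, types.Params{DataCommitmentWindow: 400})

	// run the EndBlocker for every height strictly below the window
	for i := int64(1); i < 400; i++ {
		ctx = ctx.WithBlockHeight(i)
		qgb.EndBlocker(ctx, *qk)
	}

	// the second exit condition is hit at every height, so the store should
	// still contain no data commitment attestation
	hasDataCommitment, err := qk.HasDataCommitmentInStore(ctx)
	require.NoError(t, err)
	require.False(t, hasDataCommitment)
}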

Member

@evan-forbes evan-forbes May 16, 2023

But, who pays for end blocker?

No, sorry. I meant that since it's required to come to consensus and it's not a tx with fees, no one is paying. Technically though, we are reading from the state fewer times, so we're saving "gas" or simply node/network resources 🙂

I guess the line number is incorrect

Do you mind pointing out where in the test we are hitting that looping functionality then? That still looks like the point in the test afaict 🙂

https://github.com/SweeXordious/celestia-app/blob/3fc91f06187d2041d6fc6208fccecb3d5d2c95b1/x/qgb/abci_test.go#L371-L373

Member Author

so we're saving "gas" or simply node/network resources 🙂

I think that simple statements like those will not affect it much. But yes, good point, I will open an issue to optimize that.

For the test thing, the catchup should start here, but good point, we're not testing all the possibilities. I will open an issue to do that.

I don't want to push more commits to this PR, so as not to wait for another round of reviews.

Member

Makes sense, let's :shipit:

@MSevey MSevey requested a review from a team May 15, 2023 19:18
@rach-id rach-id merged commit 5fc83cf into celestiaorg:main May 17, 2023