
test: add test to test big collections or collections with big values #3959

Merged
merged 1 commit into main on Oct 22, 2024

Conversation

BorysTheDev
Contributor

fixes: #3931
In this test, I simulate replication of fairly large containers with different numbers of keys and different value sizes.

Contributor

@dranikpg left a comment


I'd add an opt_only case with values of at least a few dozen MBs.
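
For illustration, a rough sketch of what such an opt_only case might look like, reusing only the StaticSeeder parameters already shown in this PR; the marker usage, test name, and concrete numbers are assumptions rather than the PR's actual code:

# Hypothetical sketch, not from the PR: a handful of keys whose total payload
# per key reaches a few dozen MB, gated to opt builds only.
@pytest.mark.opt_only
@pytest.mark.slow
@pytest.mark.asyncio
async def test_huge_values(df_factory):
    seeder = StaticSeeder(
        key_target=5,             # only a few keys
        data_size=50_000_000,     # roughly 50 MB of data per key (illustrative)
        collection_size=1000,
        variance=1,
        samples=1,
    )
    # replication setup, seeding and comparison would follow, as in test_big_containers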

data_size=4000000,
collection_size=1000,
variance=100,
samples=1,
Contributor


samples is the number of sets we divide key_target into when applying variance; maybe use > 1 to get mixed-size values.
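
As a hedged illustration of this explanation, a mixed-size configuration might look like the following; only the parameters already shown above are used, and the numbers are made up:

# Illustrative only: with samples > 1, key_target is split into several sets
# and variance spreads their sizes around data_size, so the dataset contains
# both smaller and larger values instead of one uniform size.
seeder = StaticSeeder(
    key_target=20,
    data_size=4_000_000,
    collection_size=1000,
    variance=100,
    samples=4,   # e.g. four groups of five keys, each with a different effective size
)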

Contributor Author


I wanted to have only one size (big, small, or average). Maybe we can add one more test later to cover the other cases.

@BorysTheDev merged commit dec0712 into main on Oct 22, 2024
12 checks passed
@BorysTheDev deleted the test_big_containers branch on October 22, 2024 12:01

@pytest.mark.asyncio
@pytest.mark.slow
async def test_big_containers(df_factory):
Collaborator


I believe this test should be one test case of test_replication_all in which you vary different config params.
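
A hedged sketch of what folding this into test_replication_all could look like, assuming that test is parametrized over seeder configurations; the parameter name, the existing cases, and the body below are illustrative, not the real test's signature:

# Hypothetical parametrization, not the actual test_replication_all signature:
# the big-container configuration becomes one more case in the existing matrix.
@pytest.mark.parametrize(
    "seeder_config",
    [
        dict(key_target=1000, data_size=100),                            # small values (illustrative)
        dict(key_target=20, data_size=4_000_000, collection_size=1000),  # big containers, as in this PR
    ],
)
@pytest.mark.asyncio
async def test_replication_all(df_factory, seeder_config):
    seeder = StaticSeeder(**seeder_config)
    # master/replica setup, seeding and comparison omitted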

seeder = StaticSeeder(
key_target=20,
data_size=4000000,
collection_size=1000,
Collaborator


  1. One test case for a big collection with more than kMaxBlobLen = 4092 elements, to check the flow in RDB load for big containers; the size of the elements should be small in this test, testing "Support huge values in RdbLoader" #3760.
  2. A second test case for big values of size ~10K; we can have 2 or 3 items in the collection in this test, testing "feat(server): use listpack node encoding for list" #3914. (A hedged seeder sketch for both cases follows below.)
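
A hedged sketch of seeder configurations matching these two suggestions; kMaxBlobLen = 4092 and the linked issues come from the comment above, while the concrete numbers, and the assumption that element size is roughly data_size divided by collection_size, are illustrative:

# Case 1 (issue #3760): one big collection with more than kMaxBlobLen = 4092
# elements, each element kept small, to exercise RDB load of big containers.
big_collection_seeder = StaticSeeder(
    key_target=10,
    data_size=50_000,        # small total payload, so individual elements stay small
    collection_size=5000,    # more than 4092 elements per collection
    variance=1,
    samples=1,
)

# Case 2 (issue #3914): big values of roughly 10K each with only 2-3 items per
# collection, to exercise the listpack node encoding for lists.
big_values_seeder = StaticSeeder(
    key_target=10,
    data_size=30_000,        # roughly 10K per element when split over 3 items
    collection_size=3,
    variance=1,
    samples=1,
)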

Successfully merging this pull request may close these issues.

Replication tests for different sizes of container and values