test: add test to test big collections or collections with big values #3959
Conversation
Force-pushed from 57b08d1 to f2783cf.
I'd add an opt_only case with values of at least a few dozen MBs.
```python
data_size=4000000,
collection_size=1000,
variance=100,
samples=1,
```
`samples` is the number of sets into which we divide `key_target` when applying `variance`; maybe use a value > 1 to get mixed-size values.
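To illustrate the comment above, here is a minimal, self-contained sketch of how a `samples` parameter can split `key_target` keys into groups with variance applied per group. The helper `sample_sizes` is hypothetical, written only for this thread; it is not `StaticSeeder`'s actual implementation.

```python
import random

def sample_sizes(key_target: int, data_size: int, variance: int, samples: int):
    """Hypothetical sketch: split key_target keys into `samples` groups,
    each group scaling data_size by a random factor bounded by `variance`."""
    random.seed(0)  # deterministic, for the sketch only
    per_group = key_target // samples
    sizes = []
    for _ in range(samples):
        factor = random.uniform(1.0, variance)
        sizes.extend([int(data_size * factor / variance)] * per_group)
    return sizes

# With samples=1 every key gets the same size; samples > 1 mixes sizes.
uniform = sample_sizes(key_target=20, data_size=4_000_000, variance=100, samples=1)
mixed = sample_sizes(key_target=20, data_size=4_000_000, variance=100, samples=4)
```

With `samples=1` the whole run exercises only one value size, which is why the reviewer suggests `samples > 1` for mixed-size coverage.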
I wanted to have only one size (big, small, or average). Maybe we can add one more test later to cover the other cases.
```python
@pytest.mark.asyncio
@pytest.mark.slow
async def test_big_containers(df_factory):
```
I believe this test should be one test case of test_replication_all, in which you vary the config params.
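A rough sketch of the suggestion above: instead of a separate `test_big_containers`, the big-container configuration becomes one more row in a parameter matrix driving the existing replication test. `run_replication_case`, the case values, and the tuple layout are all hypothetical placeholders, not the suite's real code.

```python
# Hypothetical parameter matrix: (key_target, data_size, collection_size).
# Values are illustrative only.
replication_cases = [
    (10_000, 100, 10),       # many small keys (the existing shape)
    (20, 4_000_000, 1_000),  # few keys, big collections / big values
]

def run_replication_case(key_target, data_size, collection_size):
    # Placeholder for the real replication check; here we only
    # validate the shape and report the rough payload per case.
    assert key_target > 0 and data_size > 0 and collection_size > 0
    return key_target * data_size  # approximate total payload

totals = [run_replication_case(*case) for case in replication_cases]
```

In the real suite this list would feed `@pytest.mark.parametrize` on test_replication_all, so each config runs as its own test case.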
```python
seeder = StaticSeeder(
    key_target=20,
    data_size=4000000,
    collection_size=1000,
```
- One test case for a big collection with more than kMaxBlobLen = 4092 elements, to check the RDB-load flow for big containers; the element sizes should be small in this test. This tests "Support huge values in RdbLoader" #3760.
- A second test case for big values of size ~10K; we can have 2 or 3 items in the collection in this test. This tests "feat(server): use listpack node encoding for list" #3914.
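The two suggested cases can be sketched as seeder configurations. All parameter values below are illustrative guesses, and the assumption that per-element size is roughly `data_size / collection_size` is mine, not confirmed by the thread; only kMaxBlobLen = 4092 comes from the review itself.

```python
# kMaxBlobLen is quoted from the review comment above.
K_MAX_BLOB_LEN = 4092

# Case 1 (for #3760): a big collection -- more elements than kMaxBlobLen,
# each element small, exercising the RDB-load flow for big containers.
big_collection = dict(key_target=10, collection_size=5_000, data_size=50_000)

# Case 2 (for #3914): big values -- ~10K per element, only a few items
# per collection, exercising listpack node encoding for lists.
big_values = dict(key_target=10, collection_size=3, data_size=30_000)
```

Under the assumed interpretation, case 1 gives ~10-byte elements in a 5000-element collection, while case 2 gives three ~10K elements.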
Fixes: #3931
In this test, I simulate replication of quite big containers with different numbers of keys and different key sizes.