Test delegation graphs #1689
Looks pretty cool, probably a good way to go. It's definitely an improvement over anything that currently exists.
The test data definition is quite chunky, 5-10 lines per subtest, but on the other hand it is readable (at least if the test names are improved or a comment is added)... and I also can't think of better alternatives: your method allows testing cyclic graphs etc., which a dict would have trouble doing.
I would like to be able to run "--dump" to see the actual client-side metadata and verify what it looks like. It's not 100% trivial and might look a bit goofy with subtests, but can you try to enable that? There's an example in test_updater_key_rotations -- the trick with subtests is that you need a new dump directory for each subtest, and the directory names are then not great.
All of the comments are really just suggestions: this looks good to me.
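As a rough illustration of that suggestion, here is a minimal sketch of per-subtest dump directories, assuming the simulator exposes a dump_dir attribute as in tests/repository_simulator.py (the naming scheme is mine):

```python
import os

# Hypothetical sketch: give each subtest its own dump directory so the
# metadata written by sim.write() does not collide across subtests.
# Directory names end up unwieldy, as noted above, but stay unique.
if self.dump_dir is not None:
    self.sim.dump_dir = os.path.join(self.dump_dir, name.replace(" ", "_"))
```

Here `name` would be the current subtest's name.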
```python
keyids: Optional[List[str]] = None,
threshold: int = 1,
terminating: bool = False,
paths: Optional[List[str]] = ["*"],
```
I think the default should be None, and then you can special-case that:

```python
if paths is None and path_hash_prefixes is None:
    paths = ["*"]
```

otherwise you get bugs when someone actually adds path_hash_prefixes tests.
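A minimal sketch of that suggestion: if the helper stays a dataclass, the special-casing could live in __post_init__ (the field names are inferred from usage elsewhere in the diff, not taken from the PR):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Delegation:
    delegator: str
    name: str
    keyids: List[str] = field(default_factory=list)
    threshold: int = 1
    terminating: bool = False
    paths: Optional[List[str]] = None
    path_hash_prefixes: Optional[List[str]] = None

    def __post_init__(self) -> None:
        # Default to matching all paths only when neither restriction is
        # given, so future path_hash_prefixes tests are not silently broken.
        if self.paths is None and self.path_hash_prefixes is None:
            self.paths = ["*"]
```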
```python
@dataclass
class Delegation:
```
It's probably not a dataclass if it has a constructor.
The name should maybe change... it is a bit confusing wrt the Metadata API.
Actually the name isn't that bad: I thought it overrode the Metadata API class, but that one is Delegations... an alternative, non-confusing name would be better, but the current one works fine.
"3-level tree terminating": { | ||
"delegations": [ | ||
Delegation("targets", "A"), | ||
Delegation("targets", "B"), | ||
Delegation("A", "C", terminating=True), | ||
Delegation("C", "D"), | ||
], | ||
"visited roles": ["A", "C", "D"], |
These could have a comment pointing out the interesting bit (if it does not fit into the subtest name) -- I think this is testing that B is not visited because C terminates, but spelling it out while you still remember would be good...
Alternatively, maybe the comment could be the test name?
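For example, the entry could carry the explanation inline (the comment wording below is mine, spelling out the reading above):

```python
"3-level tree terminating": {
    "delegations": [
        Delegation("targets", "A"),
        # B is delegated by targets but never visited: the terminating
        # delegation A -> C stops the search after C's subtree (C, D).
        Delegation("targets", "B"),
        Delegation("A", "C", terminating=True),
        Delegation("C", "D"),
    ],
    "visited roles": ["A", "C", "D"],
},
```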
"single delegation": { | ||
"delegations": [Delegation("targets", "A")], | ||
"visited roles": ["A"], | ||
}, |
The only tweak I could think of to the test data definition is to define a dataclass for the subtest data as well:

```python
"single_delegation": DelegationTest(
    delegations=[Delegation("targets", "A")],
    expect_visited=["A"],
)
```

but it is just as verbose, so not a very useful suggestion. It does get rid of using strings where attributes would work, and would make the types visible in the test code: you could just use the automatically annotated data.delegations instead of having to do delegations: List[Delegation] = data["delegations"].
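For completeness, the dataclass implied by that example might look like this (the definition itself is not in the PR, only inferred from the usage above):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DelegationTest:
    """Data for one delegation-graph subtest."""
    delegations: List[Delegation]
    expect_visited: List[str] = field(default_factory=list)
```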
I'm sure there are loads of tests to be written, but at the very least we will want to add tests that actually find the target... that might require modifying the test data structure as well, but starting with this and then adding tests in other PRs sounds good to me.
Force-pushed from 85dcbc8 to dc908ee.
The main improvements are:
I made modifications in several places, so in the end I thought a whole re-review would be better than new commits with the changes.
I think these are cool and something this project has never had. With the merge conflict fixed, LGTM.
Your subtest cleanup commit implies we should have a proper solution for that? The subtest model seems to work really well for our use cases, but the manual cleanup is really not optimal...
These tests only cover the "no search results" case, but I suppose the same setup (maybe even the same test data?) can be used for the positive case in follow-up PRs?
```python
# unpack 'd' but skip "delegator"
role = DelegatedRole(*astuple(d)[1:])
```
oh wow. mypy is going to have an aneurysm I bet :)
Maybe it was paralysed by the shock because it stayed silent when I tried 😁
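For context, a small illustration of why the trick works: dataclasses.astuple flattens an instance into a tuple in field-declaration order, so slicing off the first element drops the delegator (the field names here are assumptions):

```python
from dataclasses import astuple, dataclass

@dataclass
class Delegation:
    delegator: str
    name: str
    terminating: bool = False

d = Delegation("targets", "A", terminating=True)
# everything after "delegator", in declaration order:
assert astuple(d)[1:] == ("A", True)
```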
```python
with patch.object(
    sim, "_fetch_metadata", wraps=sim._fetch_metadata
) as wrapped_fetch:
```
Once the fetcher statistics PR is in we should modify this to use them... but it's not quite in yet.
I agree, I will open an issue for improving it; I haven't figured out a clever solution yet.
The follow-up PR: #1711
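In the meantime the wrapped mock itself can be inspected; a minimal sketch, assuming _fetch_metadata takes the role name as its first argument:

```python
from unittest.mock import patch

# wraps= keeps the real method running while recording every call,
# so the test can still assert which roles were actually fetched.
with patch.object(
    sim, "_fetch_metadata", wraps=sim._fetch_metadata
) as wrapped_fetch:
    self._run_refresh()

fetched = [c.args[0] for c in wrapped_fetch.call_args_list]
# top-level roles are fetched too, so check the delegated roles as a subset
for role in data["visited roles"]:
    self.assertIn(role, fetched)
```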
I guess it would have to integrate with the subtest runner somehow: maybe we could make ... I guess the adult way of solving this is to use some test library that does this for us 🤷 but where's the fun in that?
Force-pushed from dc908ee to ff8e5a3.
The conflicts are resolved, I added one extra commit for using ...
Here is the issue: #1719
Could you please rebase on top of develop?
Make the method _cleanup_dir public and move it to tests/utils.py. Signed-off-by: Teodora Sechkova <[email protected]>
Reduce the number of function arguments and use DelegatedRole instead. When adding a list of delegations to the repository, move the Targets creation inside the loop to create a separate Targets object for each delegation. Create a new Metadata object only for delegated roles which do not yet exist in the repository. Signed-off-by: Teodora Sechkova <[email protected]>
Use a "try" block to catch exceptions during failing subtests and always execute the subtest cleanup code. Signed-off-by: Teodora Sechkova <[email protected]>
Force-pushed from ff8e5a3 to c30933b.
Several "rebases" happened after the last review. Should be ok. |
I'm good to go with this as a start. Left some minor comments, please have a look and decide for yourself.
We do want more delegation tests in future. Some obvious ones:
- delegation paths that actually match the targetpath (current test never has any hits)
- delegation paths with directories (and wildcards in directories)
- path_hash_prefixes
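A few sketches of what such subtest data might look like (glob semantics are assumed to follow the TUF specification's PATHPATTERN, and the last line assumes the Delegation helper grows an optional path_hash_prefixes field, as discussed further down):

```python
# delegation paths that actually match the requested target path
Delegation("targets", "A", paths=["files/*.txt"]),
# wildcard inside a directory component
Delegation("targets", "B", paths=["releases/*/binaries/*"]),
# hash-prefix delegation instead of path patterns (hypothetical field)
Delegation("targets", "C", paths=None, path_hash_prefixes=["8f"]),
```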
```python
def tearDown(self) -> None:
    if self.dump_dir is not None:
        self.sim.write()
```
Minor nit: dumping here and not when the client refreshes means that you only get the end state -- some of the tests have multiple repository states (like test_new_timestamp_snapshot_rollback) and those are not visible like this.
Yes, that's right, I moved it to _run_refresh and _init_updater: 1ba9301
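That change might look roughly like this (a hedged sketch of the idea, not necessarily what commit 1ba9301 actually does):

```python
from tuf.ngclient import Updater

# a method on the test class:
def _run_refresh(self) -> Updater:
    """Dump repository state before each refresh so every intermediate
    state is captured, not just the end state."""
    if self.dump_dir is not None:
        self.sim.write()
    updater = self._init_updater()
    updater.refresh()
    return updater
```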
```python
keyids: List[str] = field(default_factory=list)
threshold: int = 1
terminating: bool = False
paths: List[str] = field(default_factory=lambda: ["*"])
```
nit: when you want to actually test path_hash_prefixes this needs to be optional... but we can fix that when it happens
Add tests creating delegation graphs of different complexity and successfully updating the delegated roles' metadata. Signed-off-by: Teodora Sechkova <[email protected]>
Extend updater tests with the option to dump repository metadata locally. Signed-off-by: Teodora Sechkova <[email protected]>
Replace the use of "patch" in test_graph_traversal with the newly added fetch_tracker from RepositorySimulator. Signed-off-by: Teodora Sechkova <[email protected]>
Signed-off-by: Teodora Sechkova <[email protected]>
Force-pushed from c30933b to 2562aff.
This is done in the follow-up PR: #1711
Yeah, I guess this can be added here: #1711
There is already an issue about it: #1639 (comment)
Fixes #1643
Description of the changes being introduced by the pull request:
I added the first tests that came to my mind for traversing the delegations tree/graph. I'd like to receive some feedback on these before continuing further.
For each test/subtest: