
v1.0: Define how artifacts are verified (automatically) #46

Closed
2 of 4 tasks
Tracked by #130
TomHennen opened this issue Jun 4, 2021 · 7 comments · Fixed by #571
Labels
policy Policy / verification of provenance spec-change Modification to the spec (requirements, schema, etc.)

Comments


TomHennen commented Jun 4, 2021

Edit 2022-10-17: This issue is being slightly repurposed to decide requirements and/or guidelines on how artifacts are automatically verified to meet the SLSA requirements.

Tasks:

  • Describe how the overall system works: builds generate provenance and downstream systems compare this to an expectation at various points (not sure where this should go)
  • levels.md: explain that users set expectations and downstream systems verify them
  • requirements.md: add detailed requirements and recommendations for how this system must/should work
  • VSA v1: Indicating SLSA versions for policy evaluation #168

Context: #130

Original post

There was a lot of discussion in #37 that indicates there's still a disconnect on policy (who produces/owns it) and evaluation (when/where to evaluate that policy).

I think, for the most part, we actually agree with one another. There are just some slight misunderstandings that are confusing us.

I think the following are the open questions:

  1. Who creates and owns the policy?
  2. Should we have a separate set of recommendations on what a good policy looks like?
  3. Should those requirements and recommendations be part of the SLSA levels (e.g. SLSA 3P) or separate from it?
  4. What use cases are we targeting?
    A. complete end-to-end: the producer creates an artifact & provenance and those make their way all the way to the consumer, who then checks the artifact & provenance against their own policy
    B. early evaluation: the producer creates an artifact & provenance, and at some point of trust (e.g. code signing, ACL'd folder, repo upload) the artifact & provenance are checked against some other policy
  5. What do people mean when they talk about 'resource'?

Does that sound right?

My opinion on what the answers to these questions are:

  1. Who creates and owns the policy?

Some of the discussion in #37 has changed my mind.

So, now my opinion is "anyone that cares can create a policy".

The question itself is a bit of a misnomer. There isn't necessarily one single policy that should be evaluated. There could be many policies created and owned by different people. Anyone that cares about the artifact can create their own policy. The artifact producer themselves can create a policy if they want, but it's not required. Downstream users can use the producer's policy if they wish (and there's a way for them to get it securely), but they can also create their own policy (perhaps using create on first use as has been discussed in #37).

One thing we do have to watch out for when there are lots of policies is what happens if/when the software creators refactor how their code is built. E.g. changing the name of the Makefile target, switching repos, etc...

  2. Should we have a separate set of recommendations on what a good policy looks like?

Yes, I think so. I think having a separate clear set of requirements for policy would help. We have this internally and I think it's helpful.

  3. Should those requirements and recommendations be part of the SLSA levels (e.g. SLSA 3P) or separate from it?

I think it can be helpful to include them as a part of the SLSA levels, but I don't feel that strongly about it. A reason to do this is that it would be very easy for someone to create a bad policy that doesn't actually provide as much security benefit as they might like. E.g. "Any artifact built by GitHub Actions SLSA 3 builder". Since GitHub Actions is public a malicious actor could create whatever malicious artifact they want with GitHub Actions, and get the policy evaluator to accept that. There'd still be some protection since after the fact an audit could be done to see who built that thing and what source it came from (which might deter some, but not all actors).
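To make the weakness concrete, here is a sketch (with entirely hypothetical field names and policy shapes) contrasting the overly broad policy with one that also pins the expected source repository:

```python
# Hypothetical sketch only: field names and values are illustrative,
# not from the SLSA spec.

def bad_policy(provenance: dict) -> bool:
    # Accepts anything the public builder produced at the right level --
    # an attacker can run their own build on the same builder and pass.
    return (provenance.get("builder") == "github-actions"
            and provenance.get("slsa_level", 0) >= 3)

def better_policy(provenance: dict) -> bool:
    # Additionally pins the source repo, which the attacker cannot forge
    # (the trusted builder records it in the provenance).
    return (bad_policy(provenance)
            and provenance.get("source_repo") == "https://github.com/example/project")
```

An attacker-built artifact satisfies `bad_policy` but not `better_policy`, which is the audit-only protection described above.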

  4. What use cases are we targeting?

I think we should target both A and B. A is the grand vision of where we want the entire industry to go. It might be easier to achieve in some cases than B, but in others it might be harder. B is a bit more limited but allows for adoption in cases that might otherwise be harder to implement. E.g. It would be much easier to protect the signing operation of an iOS app with SLSA than to add support for evaluating SLSA provenance into iOS.

  5. What do people mean when they talk about 'resource'?

This is just the name of the 'thing' or operation that is being protected by a policy.

Example: PyYAML

Let's assume that policy is evaluated in two places:

  1. GitHub Artifact Publishing Service (GAPS) which checks policy before uploading a package to a repo. This service manages the credentials needed to upload to the repo for the user.
  2. An upgraded PyPI client that checks policy before installing a package it got from PyPI.

For both 1 and 2 the systems need to know what policy to evaluate at any given time.

GAPS publishes lots of stuff, so it needs to look up the policy for the thing being published. It might do this by looking up the policy in a map based on the name of the thing it's publishing, e.g. "pypi:pyyaml". How this policy is created and managed is up to GAPS.

The upgraded PyPI client, meanwhile, installs lots of packages. If it creates policy on first use it still needs to store that policy somewhere, and it needs to be able to distinguish the policy for PyYAML from the policy for urllib3. So it might have a database where it can look up policies based on the name of the thing it's installing, e.g. "pyyaml" or "urllib3".

The policy being evaluated by GAPS and the PyPI service could very well be different and that's totally fine.

It may be that some policies don't have an explicit resource; perhaps it's just some configuration flag that you add to a tool that only does one task. Perhaps when you configure GAPS for your project it doesn't use named policies; it just has you provide the policy file. That's fine. But it's still probably the case that some thing (even if not explicit) is being protected by a policy, and that thing is what we call the resource.
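The lookup described above (GAPS keying on "pypi:pyyaml", the client keying on "pyyaml") can be pictured as a map from resource names to policies. A minimal sketch, with hypothetical names and policy shape:

```python
# Hypothetical sketch: the resource is simply the key used to find
# the right policy to evaluate. Names and fields are illustrative.

POLICIES = {
    "pypi:pyyaml": {"source_repo": "https://github.com/yaml/pyyaml", "min_slsa_level": 3},
    "pypi:urllib3": {"source_repo": "https://github.com/urllib3/urllib3", "min_slsa_level": 2},
}

def lookup_policy(resource: str) -> dict:
    """Return the policy protecting the given resource, if one exists."""
    policy = POLICIES.get(resource)
    if policy is None:
        raise KeyError(f"no policy registered for resource {resource!r}")
    return policy

def verify(resource: str, provenance: dict) -> bool:
    """Check an artifact's provenance against the policy for its resource."""
    policy = lookup_policy(resource)
    return (provenance.get("source_repo") == policy["source_repo"]
            and provenance.get("slsa_level", 0) >= policy["min_slsa_level"])
```

GAPS and the PyPI client would each maintain their own such map, and as noted above the two policies for the same resource need not agree.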

@TomHennen

Here are some thoughts we've been having about how to do verification, etc... https://docs.google.com/document/d/11a3u-_CcHwzPRX8x-qFzQodQoF0qnwhrE546Lt7XI2E/edit?usp=sharing

Shared with https://groups.google.com/g/slsa-discussion

We'd love to get feedback from everyone else working on SLSA.

TomHennen added a commit to TomHennen/slsa that referenced this issue Jan 31, 2022
Fixes slsa-framework#107 by creating an attestation that indicates a `verifier`
has determined that the specified `subject` artifact(s) meets the
indicated SLSA level.

In addition this attestation also indicates the minimum aggregate
SLSA level met by the dependencies used to build the artifact
which can help to address slsa-framework#61.

This leaves a number of things up to the user:
* What it means to evaluate an artifact against policy. (See slsa-framework#46)
* How to communicate the attestations required to create this
    attestation.
* How to debug failures

I borrowed heavily from slsa.dev/provenance in terms of
formatting/documentation.

This attestation is based on work done by @AdamZWu.

Signed-off-by: Tom Hennen <[email protected]>
TomHennen added a commit to TomHennen/slsa that referenced this issue Jan 31, 2022
Fixes slsa-framework#107 by creating an attestation that indicates a `verifier`
has determined that the specified `subject` artifact(s) meets the
indicated SLSA level.

In addition this attestation also indicates the minimum aggregate
SLSA level met by the dependencies used to build the artifact
which can help to address slsa-framework#61.

This leaves a number of things up to the user:
* What it means to evaluate an artifact against policy. (See slsa-framework#46)
* How to communicate the attestations required to create this
    attestation.
* How to debug failures

I borrowed heavily from slsa.dev/provenance in terms of
formatting/documentation.

If desired I can provide detailed algorithms for how the minimum_*
fields could be computed.

This attestation is based on work done by @AdamZWu.

Signed-off-by: Tom Hennen <[email protected]>
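The commits above mention a "minimum aggregate SLSA level met by the dependencies" and offer to spell out the algorithm. A minimal sketch of how such a `minimum_*` field might be computed, assuming each dependency's verified level is available as an integer (the function name is hypothetical):

```python
def minimum_aggregate_level(dependency_levels: list) -> int:
    """A build's aggregate level is capped by its weakest dependency.

    With no dependency provenance available at all, conservatively
    report level 0 (nothing can be claimed about the dependencies).
    """
    return min(dependency_levels, default=0)
```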
@MarkLodato

See also: #353, which is just about documentation on the provenance spec.

@MarkLodato

I think we need to resolve this for v1.0 of the specification. #503 is a start though it is insufficient to fully resolve this issue.

@joshuagl

I agree, this feels like something we should resolve prior to 1.0.

@MarkLodato MarkLodato changed the title Policy & Verification v1.0: Define how artifacts are verified (automatically) Oct 17, 2022
@MarkLodato MarkLodato added the spec-change Modification to the spec (requirements, schema, etc.) label Oct 17, 2022
@MarkLodato

I reworded this title so that it is more clear that this is about deciding the requirements for how artifacts are automatically verified. Manual verification of systems is covered in #508, and the overview of both is covered in #130.

@MarkLodato

For this particular issue, I suspect we might want to separate into the following, while avoiding the term "policy" since it is overloaded:

  • Roots of trust, i.e. which system I trust to be at which level. This will likely vary by consumer and involve delegation/accreditation.
  • Expectations for the artifact to prevent the threats covered by SLSA. Includes at least the source repo and likely how it was built. Not exactly defined by the producer or the consumer, but kind of an implicit agreement between them. Needs more thought.
  • Producer's additional requirements on publication. For example, don't let me upload an artifact that doesn't have sufficient testing or a fully formed SBOM, or only allow SLSA Build L3 with this root of trust.
  • Consumer's additional requirements on use/import. Likely the same sorts of checks as the producer's.

I suspect the SLSA spec should require the first two but leave the last two as optional.
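One way to picture the first two bullets (roots of trust plus expectations) is as two independent checks that both must pass. A rough sketch with entirely hypothetical names and data shapes, not anything defined by the spec:

```python
# Hypothetical sketch of the two checks the spec might require.
# All identifiers and field names below are illustrative.

# Roots of trust: which builder identity I trust at which SLSA level.
TRUSTED_BUILDERS = {"https://builder.example/trusted": 3}

# Expectations: what this artifact's provenance should look like.
EXPECTATIONS = {"pyyaml": {"source_repo": "https://github.com/yaml/pyyaml"}}

def verify(artifact: str, provenance: dict, claimed_level: int) -> bool:
    # Check 1: the builder must be trusted at (at least) the claimed level.
    if TRUSTED_BUILDERS.get(provenance.get("builder_id"), 0) < claimed_level:
        return False
    # Check 2: every expectation set for this artifact must match.
    expected = EXPECTATIONS.get(artifact, {})
    return all(provenance.get(key) == value for key, value in expected.items())
```

The producer's and consumer's additional requirements (the last two bullets) would be extra predicates layered on top of these two.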

@MarkLodato

Reminder: When adding "expectations" to the requirements, let's remember to address the concern from #498 (comment):

There are some builders that let users configure the build steps in a UI and store the config in a database somewhere. What would you want people to put in the provenance in that case?

With the provenance v1.0 proposal in #525, these would be "parameters". You could list either the values directly or their hashes. To verify, the consumer would need some way to know what was "expected", and effectively that is the real requirement that we were trying to get at.
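Comparing parameter hashes, as suggested above, could look like the following sketch: canonicalize the parameter values, hash them, and let the consumer compare against an expected digest instead of the raw values (the function name is hypothetical):

```python
import hashlib
import json

def params_digest(parameters: dict) -> str:
    """Canonicalize build parameters and hash them, so a consumer can
    check 'the parameters were what I expected' against a stored digest
    without needing the values verbatim in the provenance."""
    canonical = json.dumps(parameters, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Sorting keys makes the digest independent of dictionary ordering, so logically identical parameter sets hash to the same value.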
