RSA signature concerns #25

Closed
roblabla opened this issue Aug 19, 2019 · 20 comments

@roblabla

roblabla commented Aug 19, 2019

In RSA, there are multiple schemes to sign/verify, mainly PKCS#1 v1.5 and PSS (although others exist). I was looking at how to integrate signature into the RSA crate, and was a bit confused by the lack of a Padding argument or type in Signer/Verifier. Is the idea to implement Verifier on a type that is already aware of the padding scheme?

@newpavlov
Member

newpavlov commented Aug 19, 2019

In my understanding, rsa types should be generic over a hypothetical RsaPadding trait; e.g. rsa would define RsaPrivateKey<P: RsaPadding> and in turn:

impl<P: RsaPadding> Signer for RsaPrivateKey<P> { .. }
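
Fleshed out, a minimal self-contained sketch of that shape could look like the following (all names, including RsaPadding, sign_padded, and the simplified Signer, are hypothetical):

use core::marker::PhantomData;

/// Hypothetical trait implemented by each padding scheme (PKCS#1 v1.5, PSS, ...).
pub trait RsaPadding {
    fn sign_padded(key_bytes: &[u8], msg: &[u8]) -> Vec<u8>;
}

pub struct RsaPrivateKey<P: RsaPadding> {
    key_bytes: Vec<u8>,
    _padding: PhantomData<P>,
}

/// Simplified stand-in for the signature crate's Signer trait.
pub trait Signer {
    fn sign(&self, msg: &[u8]) -> Vec<u8>;
}

/// One generic impl covers every padding scheme.
impl<P: RsaPadding> Signer for RsaPrivateKey<P> {
    fn sign(&self, msg: &[u8]) -> Vec<u8> {
        P::sign_padded(&self.key_bytes, msg)
    }
}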

@tarcieri
Member

@roblabla a hypothetical rsa crate should probably follow the basic module structure I have in the yubihsm-rs crate (note that it incorporates encryption too, so it's a bit more complex than is needed for just the signature use case, although I think an rsa crate will need to be as well):

https://github.com/tendermint/yubihsm-rs/tree/develop/src/rsa

As @newpavlov mentioned, the signature crate is agnostic to the underlying implementation details of each signature algorithm, e.g. a hypothetical ECDSA crate needs to be generic over elliptic curve choice (P-256, P-384, secp256k1) and these details are irrelevant to RSA.

For an rsa crate I would suggest making distinct signature types for RSASSA-PKCS#1 v1.5 and PSS, potentially unified by a common rsa::Signature trait, which is in turn bounded by signature::Signature.

Note that PSS likely needs to be generic around a Digest suitable as a Mask Generation Function (MGF).
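
As a sketch of that layering (RsaSignature below is a stand-in for the proposed rsa::Signature trait; the signature::Signature bound is elided to keep the example self-contained):

use core::marker::PhantomData;
use digest::Digest;

/// RSASSA-PKCS#1 v1.5 signature: just the raw signature bytes.
pub struct Pkcs1v15Signature(pub Vec<u8>);

/// RSASSA-PSS signature, generic over the digest used by MGF1.
pub struct PssSignature<D: Digest> {
    bytes: Vec<u8>,
    _mgf_digest: PhantomData<D>,
}

/// Stand-in for the proposed common rsa::Signature trait.
pub trait RsaSignature {
    fn as_bytes(&self) -> &[u8];
}

impl RsaSignature for Pkcs1v15Signature {
    fn as_bytes(&self) -> &[u8] { &self.0 }
}

impl<D: Digest> RsaSignature for PssSignature<D> {
    fn as_bytes(&self) -> &[u8] { &self.bytes }
}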

@tarcieri tarcieri changed the title How is padding handled by the signature crate? RSA signature concerns Aug 19, 2019
@tarcieri
Member

Note: renamed the title to something a bit more generic as I think these are important concerns to capture somewhere

@dignifiedquire
Member

I think I agree with the suggestions from @tarcieri on this. @roblabla, I am very happy to break the rsa crate's interface in order to get a better one, so if you want to write up a proposal based on this for how the API could look, that would be great.

@roblabla
Author

I picked this up again, trying to fit signature into the RSA crate. The more I look into it the more confused I am by the traits. Here are a couple of things I noticed:

Why does DigestVerifier take an instantiated Digest object as an argument? Is it expected to pass a Digest that was already given input? If not, then it should really just be a type parameter - the Digest trait has a new trait method. I can understand the DigestSigner taking the Digest (though I still find it to be an odd choice - IMO it should take a GenericArray<u8, Digest::OutputSize>). But what's the reasoning for Verifier?

All schemes based on RSA require some way of passing options to the signing scheme (again taking PSS as an example, we need a way to pass the salt size and an RNG for signing). I currently pass them in a constructor. This can be very clunky: since try_sign_digest takes an &self, I can't pass stateful parameters to the DigestSigner unless I put them behind a RefCell/Mutex. For instance:

struct RsaPssSigner<RNG> {
    key: RsaPrivateKey,
    rng: RefCell<RNG>,
    salt_len: Option<usize>,
}

impl<RNG> RsaPssSigner<RNG> {
    fn new(key: RsaPrivateKey, rng: RNG, salt_len: Option<usize>) -> Self {
        Self { key, rng: RefCell::new(rng), salt_len }
    }
}

impl<RNG: Rng, D: Digest, S: Signature> DigestSigner<D, S> for RsaPssSigner<RNG> {
    fn try_sign_digest(&self, digest: D) -> Result<S, Error> {
        // &self forces interior mutability just to reach the RNG
        pss::sign(&mut *self.rng.borrow_mut(), &self.key, &digest.result(), self.salt_len)
    }
}

// Usage (assuming `key` is an existing RsaPrivateKey):
let signer = RsaPssSigner::new(key, rand::thread_rng(), None);
let message = b"Hello World";
let mut hash = Sha256::new();
hash.input(message);
signer.try_sign_digest(hash);

This feels kinda wrong. I would have expected DigestSigner to take an associated type containing the "options", or something along those lines, e.g.:

impl<D: Digest> DigestSigner<D, Signature> for RsaPssSigner {
    // hypothetical: per-call options as an associated type
    type Options = (Option<usize>, &mut RNG);

    fn try_sign_digest(&self, digest: D, opts: Self::Options) -> Result<Signature, Error> {
        let (salt_len, rng) = opts;
        pss::sign(rng, &self.key, &digest.result(), salt_len)
    }
}

And finally, on the error side of things, why not use an associated error type? The way signature currently handles errors on no_std will turn errors into an entirely unhelpful unit struct! 😱

@tarcieri
Member

tarcieri commented Sep 25, 2019

I picked this up again, trying to fit signature into the RSA crate. The more I look into it the more confused I am by the traits.

That's okay; there's a lot of hard-won knowledge baked into these traits that I can link you to. I'm pretty sure we've tried pretty much everything you're suggesting and then decided to do the opposite for good reasons.

Why does DigestVerifier take an instantiated Digest object as an argument? Is it expected to pass a Digest that was already given input? If not, then it should really just be a type parameter. the Digest trait has a new trait method. I can understand the DigestSigner taking the Digest (though I still find it to be an odd choice - IMO it should take a GenericArray<u8, Digest::OutputSize>). But what's the reasoning for Verifier?

We originally did exactly what I think you're suggesting: we took a raw digest as a GenericArray<u8> with a size specified by an associated type of the Digest. You can find the discussion about that here:

#12 (comment)

So why change it to a Digest instance? That happened here, which also discusses the reasoning:

#17

The main one, in my mind, is that most signature schemes proven secure in the random oracle model (ROM) rely on the random oracle critically for security, so requiring it (i.e. the Digest) as part of the API improves misuse resistance and decreases the chances of an attacker providing a raw digest which contains a value they solved for. This property holds for any system proven secure in the ROM, including RSA-PSS.

An attacker who can convince you to verify a malicious digest output can also (often trivially) trick you into believing a signature verifies when it does not: the security of these systems critically hinges on the output of the random oracle not being a parameter an attacker can choose, and if it is, forging a signature can be trivial.

Making the random oracle a first-class part of the API avoids this sharp edge.
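
Concretely, in simplified form (the trait below is pared down from the real one, using the digest 0.8-era method names that appear elsewhere in this thread):

use digest::Digest;
use sha2::Sha256;

pub struct Error;

/// Pared-down DigestVerifier: the verifier consumes the Digest instance itself.
pub trait DigestVerifier<D: Digest, S> {
    fn verify_digest(&self, digest: D, signature: &S) -> Result<(), Error>;
}

/// The caller must feed the *message* through the random oracle; there is no
/// entry point that accepts a precomputed (potentially attacker-chosen) hash.
fn check<V, S>(verifier: &V, msg: &[u8], sig: &S) -> Result<(), Error>
where
    V: DigestVerifier<Sha256, S>,
{
    let mut oracle = Sha256::new();
    oracle.input(msg); // `update` in later digest versions
    verifier.verify_digest(oracle, sig)
}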

All schemes based on RSA require some way of passing options to the signing scheme

RSA is not alone here. ECDSA needs to know all of the following (see the sketch after this list):

  • The elliptic curve
  • The digest function
  • The signature type to produce (ASN.1 DER or "fixed")
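
A rough sketch of the generics that implies (every name here is invented for illustration):

use core::marker::PhantomData;

/// Hypothetical marker trait for supported curves (P-256, P-384, secp256k1, ...).
pub trait Curve {}

/// The two signature encodings mentioned above.
pub enum Encoding {
    Asn1Der,
    Fixed,
}

/// An ECDSA signer parameterized over curve and digest, with the encoding
/// chosen at construction time.
pub struct EcdsaSigner<C: Curve, D> {
    secret_key: Vec<u8>,
    encoding: Encoding,
    _curve: PhantomData<C>,
    _digest: PhantomData<D>,
}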

I currently pass them in a constructor. This can be very clunky: since try_sign_digest takes an &self, I can't pass stateful parameters to the DigestSigner unless I put them behind a RefCell/Mutex.

Seems like what you really want is a &mut self, in which case I think we could add a corresponding trait for stateful signature algorithms, much like we did for Aead vs AeadMut in the aead crate:

https://docs.rs/aead/
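
A minimal sketch of what that could look like (DigestSignerMut is hypothetical; the blanket impl lets any stateless signer be used statefully):

pub struct Error;

pub trait DigestSigner<D, S> {
    fn try_sign_digest(&self, digest: D) -> Result<S, Error>;
}

/// Hypothetical stateful twin, analogous to AeadMut.
pub trait DigestSignerMut<D, S> {
    fn try_sign_digest(&mut self, digest: D) -> Result<S, Error>;
}

/// Anything stateless trivially works where mutable state is allowed.
impl<D, S, T: DigestSigner<D, S>> DigestSignerMut<D, S> for T {
    fn try_sign_digest(&mut self, digest: D) -> Result<S, Error> {
        DigestSigner::try_sign_digest(self, digest)
    }
}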

And finally, on the error side of things, why not use an associated error type?

Associated types aren't object safe (a trait object has to pin them down), and I personally use these traits extensively as part of object-safe APIs which provide keychains/keystores backed by multiple key storage methods (e.g. HSMs).
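
To illustrate the object-safety problem: with an associated error type, a trait object can only be formed by pinning that type down, which rules out keystores mixing backends with different error types:

trait SignerWithAssocError {
    type Error;
    fn sign(&self, msg: &[u8]) -> Result<Vec<u8>, Self::Error>;
}

// `dyn SignerWithAssocError` alone is not a type; the associated type must be
// fixed, so an HSM-backed signer (I/O errors) and a software signer (opaque
// errors) can no longer share one trait-object type:
type BoxedSigner = Box<dyn SignerWithAssocError<Error = std::io::Error>>;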

We can investigate adding a set of corresponding non-object-safe traits and some blanket impls for them, much like the ones that exist in some of the other RustCrypto crates, however....

The way signature currently handles errors on no_std will turn errors into an entirely unhelpful unit struct!

Aside from HSM use cases where it's helpful to pass back things like authentication, I/O, or other "I can't talk to the HSM" errors, there's ample history of "rich" error types emitted from cryptographic APIs or protocols being exploited by attackers as sidechannels. For this reason many modern cryptographic libraries use opaque error types, which, inasmuch as they are "unhelpful", are also unhelpful to attackers. Here is ring's error type, for example. We also used an opaque error type in the aead crate.

All that said, much of what you have brought up are debatable points we have waffled on. None of this is set in stone and can still be changed (or changed back), but especially in cases where we tried one thing and went the other way (namely DigestVerifier) I'd suggest reading the historical discussion.

@roblabla
Author

Thanks for taking the time to answer this :).

I'm pretty sure we've tried pretty much everything you're suggesting and then decided to do the opposite for good reasons.

Part of the problem, IMO, is that a lot of this design seems to be hidden in the commit history/issues/PRs and the current design decisions aren't written down anywhere with a clear rationale. This should all be documented somewhere. Of course, I'm clearly spoiled by Rust's RFC system (which provides a very clear rationale for every decision in the language), but a lot of what's explained here would be good to have somewhere in the signature documentation.

DigestSigner taking the Digest

OK, I can understand the rationale behind that. But what about DigestVerifier? Why does it take a Digest? It shouldn't take anything except the message and self.

Seems like what you really want is a &mut self

I guess that would work. I still find such stateful APIs kinda ugly. IMO, the RNG should be passed at each RSA call site by the user. But I can make peace with this.

We can investigate adding a set of corresponding non-object-safe traits and some blanket impls for them, much like the ones that exist in some of the other RustCrypto crates, however....

That would be good. I'm pretty sure an object-safe layer could be made on top of a non-object-safe version, in a similar spirit to erased-serde. erased-serde also uses associated types for the error, so it should surely be doable. I'll try to do some prototyping tomorrow.
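
A first sketch of that layering, in the spirit of erased-serde (all names hypothetical):

/// Non-object-safe trait with a rich associated error type.
trait SignRich {
    type Error: core::fmt::Debug;
    fn sign(&self, msg: &[u8]) -> Result<Vec<u8>, Self::Error>;
}

/// Object-safe facade: the error is erased to an opaque unit.
trait SignErased {
    fn sign_erased(&self, msg: &[u8]) -> Result<Vec<u8>, ()>;
}

/// Blanket impl, so any rich signer can be used as `&dyn SignErased`.
impl<T: SignRich> SignErased for T {
    fn sign_erased(&self, msg: &[u8]) -> Result<Vec<u8>, ()> {
        self.sign(msg).map_err(|_| ())
    }
}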

Aside from HSM use cases where it's helpful to pass back things like authentication, I/O, or other "I can't talk to the HSM" errors, there's ample history of "rich" error types emitted from cryptographic APIs or protocols being exploited by attackers as sidechannels.

I am painfully aware of this. That said, I have at least one case that I need to bubble up that I think is valid, and that's an error when the user-provided RNG is failing to provide random data (likely due to low entropy).

@tarcieri
Member

tarcieri commented Sep 25, 2019

Part of the problem, IMO, is that a lot of this design seems to be hidden in the commit history/issues/PRs and the current design decisions aren't written down anywhere with a clear rationale. This should all be documented somewhere.

Good point! @newpavlov any thoughts on how we should do that?

But what about DigestVerifier? Why does it take a Digest? It shouldn't take anything except the message and self.

In cryptographic API design terminology, the goal of DigestVerifier is to provide an Initialize-Update-Finalize (IUF) API. This allows for signatures to be computed over large messages incrementally, or for one system to hash the message before giving it to some sort of signing oracle (e.g. the HSM).
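
For example (reusing the pared-down DigestVerifier and Error from the sketch earlier in this thread), a large message can be hashed in chunks before a single verification call:

use digest::Digest;
use sha2::Sha256;

/// IUF in practice: Initialize, Update per chunk, Finalize inside the verifier.
fn verify_large<V, S>(verifier: &V, message: &[u8], sig: &S) -> Result<(), Error>
where
    V: DigestVerifier<Sha256, S>,
{
    let mut hasher = Sha256::new();
    for chunk in message.chunks(4096) {
        hasher.input(chunk); // update step, repeated per chunk
    }
    verifier.verify_digest(hasher, sig) // finalize + verify
}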

The API for "the message and self" is Signer, and if you enable the signature_derive Cargo feature of the signature crate, you can #[derive(Signer)] for types which impl DigestSigner:

https://github.com/tendermint/signatory/blob/develop/signatory-secp256k1/src/lib.rs#L19

(we've played around with several other strategies for this including blanket impls, but haven't found another which addresses all concerns. It's something else I think worth potentially revisiting)

I guess that would work. I still find such stateful APIs kinda ugly. IMO, the RNG should be passed at each RSA call site by the user. But I can make peace with this.

Nondeterministic ECDSA is in a similar boat. I think we could add an API like this, but if we did, it would need a new trait (e.g. RandomizedSigner).
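
A possible shape for such a trait (hypothetical at this point; the RNG is supplied per call instead of being stored in the signer):

use rand_core::{CryptoRng, RngCore};

pub struct Error;

pub trait RandomizedSigner<S> {
    fn try_sign_with_rng<R: CryptoRng + RngCore>(
        &self,
        rng: &mut R,
        msg: &[u8],
    ) -> Result<S, Error>;
}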

I am painfully aware of this. That said, I have at least one case that I need to bubble up that I think is valid, and that's an error when the user-provided RNG is failing to provide random data (likely due to low entropy).

@newpavlov is painfully aware of this as well from all of his work on the getrandom crate. That said, this seems to be a bit of a topic du jour, with Linus fighting a misuse-resistant kernel RNG API for Linux under the presumption of "entropy exhaustion", and much of the cryptography community fighting him on the grounds that entropy exhaustion doesn't exist and that a well-seeded RNG, particularly one with some sort of key erasure mechanism, will never be exhausted.

Personally my suggestion in the case of any RNG failure would be to panic. Curious what @newpavlov thinks.

@newpavlov
Member

I'm clearly spoiled by Rust's RFC system (which provides a very clear rationale for every decision in the language)

I think there was a proposal to add a repo for RFCs, à la https://github.com/tokio-rs/tokio-rfcs (I think it was you, @tarcieri?), but I haven't gotten to it yet, since I am not sure how useful it would be in practice.

that's an error when the user-provided RNG is failing to provide random data (likely due to low entropy)

Note that currently rand, and getrandom in particular, choose to block until the entropy pool has been initialized, and after that they use APIs which do not block even if the entropy estimate is low.

Though I am not sure why your use case cannot be handled by the current error type.

@newpavlov
Member

newpavlov commented Sep 25, 2019

Good point! @newpavlov any thoughts on how we should do that?

I think for documenting API rationale the easiest approach would be to create an empty rationale module whose documentation describes it, similar to what csv does for its cookbook and tutorial modules. We could do the RFCs repo, but since the APIs are still very much in flux, I don't think it would be useful enough.
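
i.e. something along these lines in the crate root:

// in lib.rs:

/// # Design rationale
///
/// Why `DigestSigner`/`DigestVerifier` take a `Digest` instance, why the
/// error type is opaque, etc. No items; documentation only.
pub mod rationale {}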

Personally my suggestion in the case of any RNG failure would be to panic.

We already have everything in place to handle RNG errors gracefully (e.g. the Signer::try_sign method), don't we? Well, assuming we will get the &mut trait variant. Or am I missing something?

@tarcieri
Member

@newpavlov sure, that works (on both counts)

@newpavlov
Member

BTW, I think we can make the stateless traits work by constraining RNGs with the Default trait. In production users should probably use ThreadRng or OsRng either way, and for tests we can wrap a PRNG in a newtype with a Default impl which uses a constant seed.
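
For instance (ChaCha picked arbitrarily as the PRNG here):

use rand_chacha::ChaCha8Rng;
use rand_core::{CryptoRng, Error, RngCore, SeedableRng};

/// PRNG newtype whose Default impl uses a constant seed, for deterministic tests.
pub struct TestRng(ChaCha8Rng);

impl Default for TestRng {
    fn default() -> Self {
        TestRng(ChaCha8Rng::seed_from_u64(42))
    }
}

impl RngCore for TestRng {
    fn next_u32(&mut self) -> u32 { self.0.next_u32() }
    fn next_u64(&mut self) -> u64 { self.0.next_u64() }
    fn fill_bytes(&mut self, dest: &mut [u8]) { self.0.fill_bytes(dest) }
    fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), Error> {
        self.0.try_fill_bytes(dest)
    }
}

// Fine for tests only; a fixed seed is obviously not cryptographically secure.
impl CryptoRng for TestRng {}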

@tarcieri
Member

@newpavlov embedded RNGs seeded from a hardware peripheral almost certainly can't impl Default: rust-embedded/wg#353

My personal preference for those would be to add either a stateful trait or RandomizedSigner / RandomizedDigestSigner trait which can take a &mut impl CryptoRng or thereabouts.

@roblabla
Author

@newpavlov

In no_std, the current error type is a ZST. I can't return any information about the error (like a message), unlike the std error, which contains a boxed cause. My primary use of RSA will be in a no_std environment; that's what's bugging me w.r.t. the error type.

@tarcieri
Member

@roblabla I think @newpavlov is saying you should return the opaque error type in that case.

Again, the counterpoint is opaque error types are good for security because they eliminate information an attacker could use to e.g. recover the signature key or forge a signature verification as part of an adaptive online attack.

Having more than one error type which an attacker is able to observe is pretty much the classical way in which every practical sidechannel attack (e.g. BB'98, particularly relevant to RSA) has been implemented. The only way of preventing these attacks is to not even give the attacker one bit.

@roblabla
Author

@tarcieri Then why does the error type have a non-opaque cause when built under std? This is inconsistent. Even under your HSM use case, no_std (e.g. embedded) crates that want to interface with a hardware module are currently unable to communicate what the error is (beyond the fact that an error occurred).

@tarcieri
Member

As you mentioned, it's specifically for the use case of relaying HSM errors which occur prior to actually performing the cryptographic operation, and indeed even this was a bit contentious.

no_std (such as embedded) crates that want to interface with a hardware module

This is true, although in my experience as someone who both interfaces with existing HSMs and is working on embedded cryptographic applications, I'm either communicating with an HSM from a std target, or working on embedded applications that are effectively HSMs in and of themselves, as opposed to trying to communicate with an external one.

If you have a use case for communicating with an external HSM from a no_std target and wanting to relay some additional error information, we can look into what changes would be required to support that use case.

@newpavlov
Member

Well, designing extendable errors which will work on both std and no_std is really not an easy feat... We could copy the approach used right now by rand_core. It works more or less well (although not without issues) because there is only one class of errors, i.e. "RNG errors"; well, strictly speaking two classes: "system RNG errors", whose error codes have the most significant bit equal to zero, and "custom errors". But I am not sure how it would work in our case, especially if we want to pass RNG errors through as well (i.e. the error codes must not conflict).
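
For reference, the rand_core scheme being described looks roughly like this (per the rand_core 0.5-era API; the exact constants are worth double-checking):

use core::num::NonZeroU32;
use rand_core::Error;

// Codes below Error::CUSTOM_START are reserved for system/OS RNG errors;
// user-defined errors pick codes at or above it, so the two classes can't collide.
const MY_RNG_FAILURE: u32 = Error::CUSTOM_START + 1;

fn rng_failure() -> Error {
    Error::from(NonZeroU32::new(MY_RNG_FAILURE).unwrap())
}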

@tarcieri
Member

tarcieri commented Sep 25, 2019

Well, designing extendable errors which will work on both std and no_std is really not an easy feat...

Indeed. failure tried to span this gap with failure::Fail. I don't think people were happy with the result, but it does solve the problem of "one trait to box regardless of std or no_std" (provided you're happy with using alloc::boxed::Box on no_std).

I think you either wind up with two incompatible approaches, or you unnecessarily constrain the errors expressible on std.

For external signers, be they HSM, TPM, SEP, or network KMS, I think it's very important to be able to include rich error information for debugging. A network KMS might return an encrypted error bytestring, for example, which can be decrypted to determine some sort of e.g. AuthZ error around a key usage policy.

This was the motivation for using Box<std::error::Error>, since these usages and the associated errors which can occur vary dramatically.
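
The resulting shape, roughly (field names are illustrative, not necessarily the crate's):

/// Opaque error: a unit struct on no_std, optionally carrying a boxed source on std.
#[derive(Debug, Default)]
pub struct Error {
    #[cfg(feature = "std")]
    source: Option<Box<dyn std::error::Error + Send + Sync + 'static>>,
}

#[cfg(feature = "std")]
impl Error {
    /// Attach a rich cause (HSM, KMS, I/O, ...) for debugging on std targets.
    pub fn from_source(
        source: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>,
    ) -> Self {
        Error { source: Some(source.into()) }
    }
}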

@tarcieri
Member

tarcieri commented Mar 8, 2020

I've added quite a bit of documentation which hopefully covers some of the design rationale questions highlighted in this thread to the signature crate's rustdoc:

https://docs.rs/signature/1.0.0-pre.3/signature/index.html

I've also opened a 2-week final comment period prior to shipping a 1.0:

RustCrypto/traits#78

I'm trying to direct any remaining final comments to that issue, so I'll close this one.

@tarcieri tarcieri closed this as completed Mar 8, 2020