Public/private key analogues for modules #19017
Conversation
Shutting down the connection is what `_final` is there for.
This is a building block that allows granting more authority to critical dependencies than to the longer tail of transitive dependencies. Please see use cases and discussion at https://gist.github.com/mikesamuel/bd653e9f69595f7b9d7dd4381a154e02

Known issues:
* lib/frenemies is copied into lib/module.js -- I'm afraid I haven't figured out how to make lib/frenemies available as a builtin module.
* the tests demonstrate basic behavior but don't exhaustively test common bypass techniques.

I tried to implement this purely in library code. https://github.com/mikesamuel/node-sec-patterns/tree/master/test/cases captures a bunch of bypass techniques, including the one that showed it could not be done in user code. If this looks generally useful, I can transfer those tests to this context.
semver-major because of the addition of a new top level module.
lib/module.js
Outdated

```diff
@@ -40,6 +40,197 @@ const preserveSymlinks = !!process.binding('config').preserveSymlinks;
const experimentalModules = !!process.binding('config').experimentalModules;

const errors = require('internal/errors');
// Fails with no such built-in module: frenemies
// TODO: why? Do I need to add something to a Makefile or load order
// list somewhere?
```
Are you looking for node.gyp?
Probably. Thanks for pointing me at that. Afk, but will try that soon.
Works like a charm.
Hi Mike, thanks for your contribution!

Unfortunately I just don't see the need for this. If something like this makes it into ECMA-262 that would be great, but Node's security model has never included running untrusted code. In Node it's extremely difficult to prevent all sorts of monkey patching with the CJS loader, which invalidates your design doc's argument that "we're all adults here" is not a practical approach, since this module would seemingly rely on it. Additionally, I'm not sure how a malicious module giving you a certificate that its behavior comes from that malicious module helps prevent malicious modules. I see this module as authenticating a connection between two modules, but it doesn't solve the issue of getting something from a module that you don't trust, and that was already achievable without this module.
I agree that untrusted dependencies can be annoying, but I don't see this as a way to fix it. I'm also not quite sure, but I believe you can break this by modifying the require cache for any modules which depend on it.

I also left some notes below, and it seems you included a commit in your branch not related to this change.
lib/frenemies.js
Outdated

```js
/**
 * Maps opaque boxes to box data records.
 * This is a WeakMap that is not susceptible to WeakMap.prototype
 * monkeypatching.
```
use internal/safe_globals
Done. Added SafeWeakMap and SafeWeakSet to that module.
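As a rough illustration of what such a monkeypatch-resistant wrapper can look like (an assumption for exposition, not the actual internal/safe_globals code): capture the WeakMap prototype methods once, before untrusted code runs, and invoke them through Reflect.apply so later prototype patching is irrelevant.

```javascript
'use strict';
// Illustrative sketch only -- not Node's internal/safe_globals implementation.
// Capture the real prototype methods before any untrusted code can run.
const { apply } = Reflect;
const wmGet = WeakMap.prototype.get;
const wmSet = WeakMap.prototype.set;
const wmHas = WeakMap.prototype.has;

class SafeWeakMap extends WeakMap {
  get(key) { return apply(wmGet, this, [key]); }
  set(key, value) { apply(wmSet, this, [key, value]); return this; }
  has(key) { return apply(wmHas, this, [key]); }
}

// Even after hostile monkeypatching of the global prototype...
WeakMap.prototype.get = function() { return 'stolen'; };

const boxes = new SafeWeakMap();
const key = {};
boxes.set(key, 'secret');
console.log(boxes.get(key)); // 'secret', not 'stolen'
```

The same pattern extends to SafeWeakSet with `WeakSet.prototype.add` and `WeakSet.prototype.has`.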
lib/frenemies.js
Outdated

```js
/**
 * A set of all public keys.
 * Similarly monkeypatch resistant.
```
same as above
Done
test/parallel/alice.js
Outdated

```diff
@@ -0,0 +1,13 @@
// Alice sends a message to Bob for Bob's eyes only.
'use strict';
```
Tests should be named as test-{module}-{short description}, such as test-zlib-deflate-raw-inherits.js, and files that are dependencies of tests should be equally well named and moved into the relevant fixtures folder.
parallel/test-frenemies.js -> parallel/test-frenemies
parallel/{alice,bob,carol}.js -> fixtures/module-frenemies/box-and-unbox/
Thanks for giving it a read. Responses inline.
Running untrusted code in node is not a goal, and my threat model treats user loaders as trusted code. I work on a team of security engineers that works to improve libraries, frameworks, and programming languages to make it easier to produce secure & robust software. As part of that, we end up riding herd on a much larger group of application developers. One of the things that we really appreciate having in our toolbox is the ability to use linker and build system hooks to run code based on which modules connect to which others. That allows us to separate privileges, guide developers away from problematic APIs, and make sure we're in the loop when we need to be. This seems a fairly minimal tweak that enables reasoning about dynamic linking and a variety of other things that a security team riding herd on a large group of node projects might want to do.
You're right that, if this were a user space module, an attacker could MITM via the module cache. I thought builtins were not in the
Thanks much.
Added lib/frenemies.js to node.gyp and got rid of the inlined hack in lib/module.js. Used SafeWeakMap and SafeWeakSet from internal/safe_globals instead of freezing the methods used. Renamed the added tests to follow naming conventions.
Thanks for your reply. I understand you want to provide pipelines for passing data between modules, but I still fail to understand how this enforces anything. From the perspective of the tests you created, this entire module can be made useless by just not involving "carol". Again, I love the idea of securing pipelines between modules, but this doesn't make any sense to me. Do you have any more real-world examples of where this behavior is useful?
This test best approximates the Contract Values use case, which discusses preserving a contract between modules that create contract values and distant modules that consume them, regardless of the flow of values between them. The Opaque Values use case also involves being insensitive to intermediaries.
Why do we need to wrap every require to do this? These "boxes" and "keys" are already keyed by module id, so as long as you trust which modules are installed on your box by id, you can do this yourself at home:

```js
function render({ text, safe }) {
  if (!safe)
    text = widelyUsedHtmlSanitizer(text);
  response.write(text);
}

function source1(html) {
  const text = widelyUsedHtmlSanitizer(html);
  render({ text, safe: true });
}

function source2(template, context) {
  const text = autoescapingTemplateEngine(template, context);
  render({ text, safe: true });
}

function source3(md) {
  const text = mysteriousMarkdownConverter(md);
  render({ text, safe: false });
}
```
I trust some modules to do some things, and other modules to do other things. That I trust a module to do something does not mean that I trust it to produce HTML which is safe to load into an origin that receives sensitive user data. That I trust a module to colorize my log output does not mean I trust it to issue arbitrary shell commands. You can't get separation of privilege when trust is a binary decision.
In this scenario, the HTML sink would probably not require the HTML value producers, except as a way to obtain their public keys, and probably not even then if the whitelists are maintained in a separate configuration file. A small group of security engineers would probably maintain these whitelists across a large group of projects, embedding knowledge about which third-party modules have been found to need which privileges and have been reviewed closely enough that they're comfortable granting them those privileges. The idea is that, for a particular security failure mode, this small group of security specialists can put in place mechanisms that let them bound (to a high confidence) which files could cause that failure mode, so they can focus attention on those files. The whitelists live in revision control, and so track changes as code evolves and dependencies are updated.
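The whitelist pattern described above can be sketched with plain closures. Every identifier here is hypothetical; the closures only emulate the shape of the proposal's public/private keys, where a public key returns true only while the holder of the private key is on the call stack:

```javascript
'use strict';
// Hypothetical sketch: reviewers whitelist producer "public keys"; the HTML
// sink trusts a value as safe only when it arrives under a whitelisted key.
function makeKeyPair() {
  let unlocked = false;
  const publicKey = () => unlocked; // true only inside privateKey(f)
  const privateKey = (f) => {
    unlocked = true;
    try { return f(); } finally { unlocked = false; }
  };
  return { publicKey, privateKey };
}

const sanitizer = makeKeyPair(); // reviewed, granted the safe-HTML privilege
const markdown = makeKeyPair();  // unreviewed third-party converter

// Maintained by the security team in a separate configuration file.
const safeHtmlProducers = new Set([sanitizer.publicKey]);

function sink(html, senderKey) {
  // The key must be on the whitelist AND return true, which it only does
  // while the matching privateKey call is on the stack.
  const trusted = safeHtmlProducers.has(senderKey) && senderKey();
  return trusted ? html : '(re-sanitized) ' + html;
}

console.log(sanitizer.privateKey(() => sink('<b>ok</b>', sanitizer.publicKey)));
console.log(markdown.privateKey(() => sink('<b>ok</b>', markdown.publicKey)));
```

A stolen public key is useless here: outside the owner's `privateKey` call it returns false, so intermediaries can carry keys around without gaining the privilege.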
Don't use that module then. I can make a module that runs

wherever you're using your value producers then. This still seems like a nonissue if you have your own internal pipeline. I fail to see why this needs to be passed on to Node itself.

At this point you're just getting back to the point of verifying dependencies, which someone in the situation you describe could be doing anyway. It still seems to me like the above code with properly locked deps, versions, etc. would mitigate the issue you're describing.
I appreciate you put thought and effort into this but I don't see us adopting this proposal. Conceptual: I don't think this is a problem we want to be solving. That might change if you get your TC39 proposal accepted. There is probably some overlap with the realms proposal. Practical: It's too easy to bypass. I can think of a few ways right off the bat so anyone who puts his mind to it can probably come up with something much more effective.
Large projects don't pick and choose the majority of their dependencies. This change does not claim to prevent that, but any change which would needs to reify module identity, which this does.
I think this gets to the core of our disagreement. In my mind, the security specialists are not writing their own internal pipeline. They're trying to make small changes in large systems built on others' pipelines to get privilege separation. It tends to be easier to send a new value down a pipeline than to get a pipeline to use a callback with a specific signature. Being able to craft values that maintain their own integrity, and that can be verified as safe by construction or from a well-understood source, fits well into this.
No. We are not verifying dependencies; we are trying to minimize the number of dependencies we need to verify. We assume that the code is written in good faith but may not all be robust against untrusted inputs, and then we deploy a variety of techniques to allow some code to do risky things because it's been vetted, while managing the risk when code that hasn't been vetted deals badly with crafted inputs.
Thanks for giving it a read. Might that change if I managed to convince security-wg that the goals were worthwhile?
I'm on drses. The realms proposal doesn't address these use cases. The frozen realm proposal does allow an embedder to load an untrusted module and avoid prototype poisoning.
Please do tell. If inducements help, I buy a drink for anyone who finds a bypass in my work should we ever meet IRL, even if it's draft work like this. I'd consider something a bypass if
I assume that the mayOpen and ifFrom
Again, my goal with these assumptions is to remove the cover of plausible deniability from anyone who bypasses. Technical measures that move bypasses into name & shame territory provide value since we're treating 3rd-party modules as trusted within bounds.
If you are already thinking about proposing this to TC39 it sounds like it does not belong in Node itself? If they don't accept it we'd be stuck maintaining something that isn't necessary to be in core (by the sounds of it), and something that is also potentially inconsistent with the rest of the JS ecosystem. If they do accept it, we'd be stuck maintaining the inbuilt module regardless for quite a while due to probable ecosystem usage and various other complications with deprecating things.
There's a programmatic debugger API that lets you live-edit code so e.g. this...
...is a done deal. You could counter that that particular hole can be plugged by restricting access to the debugger but that's entering an arms race. Not a problem we want to be solving.
That doesn't help with Proxy instances;
I anticipated that in https://gist.github.com/mikesamuel/bd653e9f69595f7b9d7dd4381a154e02#failure-modes
How might one use a debugger API in this way while being able to plausibly deny
Good point, but I'm still not seeing how one would use this. Assuming a whitelist is an array or set constructed by the policy author, how can one use a proxy to cause an in-array or in-set check to falsely conclude true?

```js
// Policy definition
const policy = [0, 1, 2];
function verify(x) {
  return policy.indexOf(x) >= 0;
}
// Attacker controlled code runs without direct access to the array.
// MITM by prototype poisoning gets access to the array and substitutes the result.
Array.prototype.indexOf = function () { console.log('Leak ' + this); return true; };
console.log(verify(3)); // Should be false but isn't.
```

can be addressed by

```js
// Reliable access to builtins
const { hasOwnProperty } = Object;
const { apply } = Reflect;
// Policy definition
const policy = [0, 1, 2];
function verify(x) {
  const candidate = x | 0;
  let verified = false;
  for (let i = 0, n = policy.length; i < n; ++i) {
    if (policy[i] === candidate && i in policy && apply(hasOwnProperty, policy, [i])) {
      verified = true;
    }
  }
  return verified;
}
// Attacker controlled script runs without access to these locals.
// What goes here?
console.log(verify(3)); // Should be false.
```
My understanding was that TC39 doesn't specify
I was planning on taking it to TC39, so if accepted it'd end up in CommonJS modules. User loader hooks are trusted code so can access private keys, and could provide backwards compatibility if the eventual interface differed from anything that Node exposed to ESM in the meantime.
That seems like moving the goalposts. I read your comment as asking "is it possible to evade the policy?" and the answer to that is an unequivocal "yes." Circling back to the larger topic: discussions on locking down the module system have happened before. There has never been anything remotely close to consensus and I don't expect this time will be different. You're probably better served putting your efforts in the TC39 proposal.
It's not moving the goalposts. Every security architecture is in the context of a TCB. For example, usually we trust the hardware and operating system to continue to work as they've worked in the past. I didn't make the TCB clear, and I apologize if that was a source of frustration. Anything "out of band" is usually considered better mitigated by hardening the TCB. If this got accepted, I'd probably write an advice-for-deployers document which said "make sure your source files are not writable by the
You've identified a promising line of attack, and it's not your responsibility as a reviewer to weaponize it. Again, it's an out-of-band attack though, so if it's feasible to address it by hardening the TCB then it shouldn't be considered a show-stopper. If you have an argument as to why it should be considered in band, by all means.
FYI, the Stage 0 proposal is public and I'm seeking to have it on the May meeting agenda. It's largely the same material but repackaged in an ESM context. @devsnek, I think I addressed your specific code comments, but I'm still seeing "devsnek requested changes." If I have, I don't expect that means you approve this PR by a long shot; just trying to clarify. @bnoordhuis, I had asked
Did you have any thoughts on whether security-wg is the right place to build consensus on whether this is a good/bad idea? @Fishrock123, did I address your concerns re pursuing CJS and ESM in parallel?
They can endorse it but it's not within their remit to make the decision.
Understood.
explicit -1 from me, see conversation above
@devsnek I understand that you're at -1. You've no obligation to be convinced by my responses but I'm not sure how I could have responded to your comments more completely, so I'm not sure what you mean by "see conversation above." |
@mikesamuel sorry, that was a note to the other collaborators here; I was telling them to see my conversation with you above for my reasoning on blocking this.
@nodejs/security PTAL
As mentioned by others, it might be better to pursue the TC39 direction here. I'd be 👎 on this since I don't really see strong value in having it in core atm.
Due to the mentioned concerns of multiple collaborators I am going to close this. @mikesamuel thanks a lot for your contribution nevertheless! We value your input a lot and thank you very much for your effort! And if this is accepted by TC39, it will be available even though it did not get into Node right now.
Thanks all for your consideration. Anyone who wants to follow progress can watch https://github.com/mikesamuel/tc39-module-keys where I will post updates until it dies or moves under the tc39 org. @BridgeAR, if this goes into TC39 it will only affect ESM modules, not CJS.
module: allocate public/private key pair analogue for each module

Please see design and use cases.

This PR needs user documentation and should probably not be accepted until the next major version. I couldn't figure out how to get a new lib/*.js file to load as a builtin, but it could otherwise be accepted. It does not currently address ESM. I am working on a private/public key proposal for ES modules via TC39.

This adds a frenemies parameter to the function wrapper around required modules that provides an API like:
* frenemies.publicKey() and exports.publicKey: a function that can be shared with other modules.
* frenemies.privateKey(f): calls f in a context in which the corresponding public key will return true.
* frenemies.box(val, mayOpen): creates an opaque wrapper around val that may be opened by any public key that passes mayOpen.
* frenemies.unbox(box, ifFrom, fallback): returns the contents of box if the boxer's public key passes ifFrom and frenemies.publicKey passes the mayOpen predicate supplied by the boxer. If these conditions are not met, returns fallback.

Checklist
* make -j4 test (UNIX) or vcbuild test (Windows) passes

Affected core subsystem(s)
lib, module, src
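The box/unbox data flow described in this PR can be emulated in plain JavaScript to show the intent. This is an illustrative closure-based sketch, not the PR's implementation; `makeFrenemies` and the per-module key pairs are assumptions, whereas in the PR each module would receive its frenemies instance from the module wrapper:

```javascript
'use strict';
// Closure-based emulation of the described publicKey/privateKey/box/unbox API.
const boxes = new WeakMap(); // maps opaque boxes to their hidden contents

function makeFrenemies() {
  let inPrivateContext = false;
  const publicKey = () => inPrivateContext;
  const privateKey = (f) => {
    inPrivateContext = true;
    try { return f(); } finally { inPrivateContext = false; }
  };
  const box = (val, mayOpen) => {
    const b = Object.freeze({});
    boxes.set(b, { val, mayOpen, boxerKey: publicKey });
    return b;
  };
  const unbox = (b, ifFrom, fallback) => {
    const data = boxes.get(b);
    if (!data) return fallback;
    if (!ifFrom(data.boxerKey)) return fallback;   // opener must trust boxer
    if (!data.mayOpen(publicKey)) return fallback; // boxer must trust opener
    return data.val;
  };
  return { publicKey, privateKey, box, unbox };
}

// Alice boxes a message only Bob may open; Carol carries it but cannot read it.
const alice = makeFrenemies();
const bob = makeFrenemies();
const carol = makeFrenemies();

const msg = alice.box('for bob only', (k) => k === bob.publicKey);
console.log(bob.unbox(msg, (k) => k === alice.publicKey, 'DENIED'));   // for bob only
console.log(carol.unbox(msg, (k) => k === alice.publicKey, 'DENIED')); // DENIED
```

The box itself is an inert frozen object, so it can flow through untrusted intermediaries; only an opener whose public key satisfies the boxer's mayOpen predicate can recover the value.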