Cache verification key in CLI #158
In what situations would a dev need to generate & deploy the same verification key? Only if deploying once to network A (i.e. Testnet) and again to network B (i.e. Mainnet)? If so, I could see some value, but maybe low priority.
Actually, I found an answer: if an error occurs during deployment (e.g. the tx relayer is unavailable), it would be convenient to have the key cached for when re-trying.
Yes, anything where you get off the happy path. For example: something in your code prevents deploy from working, and you want to iterate on that. Like the error that Serhii reported recently, it probably happened in the

There are also cases where you break out of the flow even if no error occurs. For example, you get to the point where you have to set the transaction fee, but you are not sure what fee you should use, so you Ctrl-C and do it again later. These things happen all the time.
That's a good use case. Adding 'to discuss' so we can decide an implementation approach.
@bkase to write RFC
Mini RFC addressing this:
Summary

We already have logic to do this in the OCaml and Rust backends. This RFC proposes we expose some of this logic over the JS bridge and outlines a mechanism by which to do so: namely, take the OCaml one and plumb it alongside the other compile/proving functions that we've bridged over. Once we have a function that can compute a hash, we can cheaply check whether the circuit has changed before doing a recompile. We can store this hash in a file in the

Background

We do already have existing logic for quickly computing a hash of the circuit; however, it's not exposed to the JS side of SnarkyJS yet. On the Rust side, we use a SHA-256 hash: https://github.com/o1-labs/proof-systems/blob/master/kimchi/src/circuits/gate.rs#L174 On the OCaml side, we use an MD5 hash: https://github.com/MinaProtocol/mina/blob/6c61262618847f51debf4e59f556da03988c86b0/src/lib/crypto/kimchi_backend/common/plonk_constraint_system.ml#L561 It's unfortunate that there are two different hashes for circuits. Ideally these should be canonicalized, but that's out of scope for this RFC.

Specific Changes

From the perspective of SnarkyJS, zkapp-cli, and any other zkApp developer tooling, the actual representation of this digest doesn't matter. What matters is that it's a stable hash that can be compared for equality. As such, we can arbitrarily pick the MD5 or SHA-256 one, and our choice doesn't need to be future-compatible. This format can change later, and it won't break any of our tooling. Therefore, it makes more sense to use whichever one is easier to wire into SnarkyJS. This seems to be the OCaml representation, as the rest of the proving/compilation logic goes through that code path.
In the Mina repo: at https://github.com/MinaProtocol/mina/blob/b0548428278e4d9f48de87d0e56648337a1527a9/src/lib/pickles/pickles.ml#L557 , we're already computing the constraint system digest as part of the compile logic; we can lift this part of the function into a new function that we stick on the Pickles module, call it

In SnarkyJS: we'll expose a new

In zkapp-cli: we lazily create files in

We can consider exposing a new command
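As a rough illustration of the check this RFC enables, here is a minimal sketch. `circuitDigest` stands in for the hypothetical digest function bridged over from OCaml; it is not a real SnarkyJS API.

```typescript
// Minimal sketch of the digest comparison proposed in the RFC above.
// `circuitDigest` is a placeholder for the bridged OCaml digest function.
function needsRecompile(
  circuitDigest: () => string,
  cachedDigest: string | undefined
): boolean {
  // Computing the digest is cheap compared to a full compile, so we can
  // always compute it and compare it against the stored value.
  return circuitDigest() !== cachedDigest;
}
```

Because the digest is only compared for equality, the choice of MD5 vs SHA-256 never surfaces here, which is what lets the format change later without breaking tooling.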
We could also store the digest values inside of a
Very nice @bkase, this approach sounds like it should work and shouldn't be too hard to implement!
The only risk is corruption if there are multiple writes at the same time to this file, but as long as we're careful this should work 👍
True, but it should be OK b/c we should gather all digests and then write to disk once.
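The "gather all digests, then write once" idea above could be sketched as follows; the contract shape and the write callback are illustrative, not the actual zkapp-cli code.

```typescript
// Sketch: compute every digest in memory first, then perform a single
// write, so there is never more than one write to the cache file per run.
function writeDigestsOnce(
  contracts: { name: string; digest(): string }[],
  write: (json: Record<string, string>) => void
): void {
  // Gather all digests up front...
  const all: Record<string, string> = {};
  for (const c of contracts) all[c.name] = c.digest();
  // ...then write exactly once, avoiding interleaved partial writes.
  write(all);
}
```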
@bkase implemented this and Matthew and Gregor approved. Builds and makes the artifact. @mitschabaude will help expose it in SnarkyJS this week.
zkapp-cli caching approach

For every smart contract that we build, we can quickly compute and store the hash of the contract in the build.json file using the

We can also store the corresponding verification key generated from compiling the smart contract in the build.json file with the property:

First, we compute the digest of the current contract in the generate-verification-key step of the deploy.js file.

We can also read and parse the build.json file using readJsonSync.

If the current digest matches the digest string previously stored in the build.json file, the contract will not be recompiled, and we can use the verification key from the previous compilation that is stored in build.json. If the current digest doesn't match the stored digest, we replace the old digest with the new one in build.json and then compile the smart contract.

After that, we replace the previously stored verification key in build.json with the newly generated one. We update both the digest and the verification key first, then write to build.json once with the updated values using writeJsonSync.

Testing

We can add tests to the deploy.tests.js file to validate this implementation and the generate-verification-key step. We can also manually generate contracts and deploy them from the CLI to test. Cases to test:
@ymekuria Looks like we can't use

Lines 92 to 103 in e1f89a4
But we could add a

{
  "<smartContractNameA>": {
    "digest": "abc",
    "verificationKey": "abc"
  },
  "<smartContractNameB>": {
    "digest": "abc",
    "verificationKey": "abc"
  }
}

Note that there is a known limitation of
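The per-contract layout above could be maintained with a small helper like this sketch; the property names follow the example, while the helper itself is illustrative.

```typescript
// Sketch: cache.json entries keyed by smart contract name, so several
// contracts in one project share a single file without clobbering each other.
type CacheEntry = { digest: string; verificationKey: string };
type Cache = Record<string, CacheEntry>;

function updateCache(
  cache: Cache,
  contractName: string,
  entry: CacheEntry
): Cache {
  // Replace (or add) only this contract's entry; leave the others intact.
  return { ...cache, [contractName]: entry };
}
```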
Good catch about build.json. I like the idea of adding a cache.json file. |
@jasongitmail It looks like we can't put the cache.json file in the build directory because the directory is emptied every time deploy is run here:

Lines 87 to 90 in e1f89a4
We can add the cache.json file to the keys directory, or create another directory at the root of the template and add it to .gitignore.
@ymekuria, I think we should just not empty the build directory in that case :D Instead, we could delete everything in there except cache.json.

FWIW, inside
The keys dir isn't the right location because this is a file that a dev should never really see. +1 for changing line 88 to delete all files except the cache.json. I'd suggest reading
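The "delete all files except cache.json" suggestion above could look something like this sketch; the directory layout and the kept filename are assumptions.

```typescript
import fs from 'fs';
import path from 'path';

// Sketch: empty the build directory on deploy, but preserve cache.json so
// the digest/verification-key cache survives across runs.
function cleanBuildDir(buildDir: string, keep: string = 'cache.json'): void {
  for (const name of fs.readdirSync(buildDir)) {
    if (name === keep) continue; // preserve the cache across deploys
    fs.rmSync(path.join(buildDir, name), { recursive: true, force: true });
  }
}
```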
When you go through the zk deploy flow multiple times, it's annoying that the verification key is rebuilt every time, which takes 1-2 minutes. Would be nice to cache the verification key.
TBD: how do we handle cache invalidation / enable rebuilding when it's needed? Ideally, we detect when the vk changed from a quick-to-compute hash of the circuit, but that would require more work on the SnarkyJS side.
Some ideas for what we could do now:

- zk deploy
- zk compile, which never uses a cache, and let zk deploy just depend on the output it creates -- that way, rebuilding is always an explicit operation