[Feature]: Add an endpoint to check for nullifiers given block bounds and prefixes #404
Comments
One option is to modify the current `CheckNullifiersRequest` message:

```proto
message CheckNullifiersRequest {
    // List of nullifiers to check.
    repeated digest.Digest nullifiers = 1;

    // Whether or not to return authentication paths for each found nullifier.
    optional bool include_proof = 2;
}
```

The main concern with this approach is that the user would tell the network exactly which nullifiers they are interested in.

Another option is to introduce a new endpoint - e.g., something like:

```proto
message CheckNullifiersByPrefixRequest {
    // List of nullifiers to check. Each nullifier is specified by its 16-bit prefix.
    repeated uint32 nullifiers = 1;

    // Block number from which to start checking nullifiers.
    fixed32 start_block_num = 2;
}
```

The endpoint would return nullifiers which match any of the specified prefixes and were created after `start_block_num`:

```proto
message CheckNullifiersByPrefixResponse {
    // List of nullifiers created between `request.start_block_num` and the last block in the chain.
    repeated NullifierUpdate nullifiers = 1;
}
```

The main concern with this endpoint is that it may return too much data (especially when the network operates at high TPS). For example, if 10K notes are consumed per second (using this as a proxy for 10K TPS), a request covering 1 week would return about 3MB per nullifier prefix.
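For reference, a back-of-the-envelope sketch of that 3MB figure; the ~36-byte record size is an assumption (32-byte nullifier digest plus a 4-byte block number), not the actual wire size:

```rust
/// Rough estimate of response bytes per prefix, assuming nullifiers are
/// uniformly distributed so each prefix matches ~1/2^prefix_bits of them.
fn bytes_per_prefix(tps: u64, seconds: u64, prefix_bits: u32, record_bytes: u64) -> u64 {
    let total_nullifiers = tps * seconds;
    let buckets = 1u64 << prefix_bits; // number of distinct prefixes
    (total_nullifiers / buckets) * record_bytes
}

fn main() {
    let week = 7 * 24 * 60 * 60; // seconds in one week
    // 10K TPS over one week with 16-bit prefixes and ~36-byte records:
    // ~3.3 MB per prefix, matching the estimate above.
    println!("{} bytes", bytes_per_prefix(10_000, week, 16, 36));
}
```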
One approach to handle this is pagination and rate-limiting. Both request and response would include an optional pagination token. In the simplest implementation it's just an integer indicating how many records were already processed in previous requests. However, this probably won't work well in this instance, because nullifiers may have changed status between requests. We could instead have the token represent the nullifiers already processed plus the block number. The next request would then recheck that range of nullifiers for any new blocks/nullifiers (presumably with very little new data), and then fill up any remaining response space with more nullifiers.

Another issue with pagination is that a user might stop early once they've received the nullifier(s) they're actually interested in, so that leaks more info to the network. Additionally, users now have to aggregate responses, i.e., nullifiers can change status over subsequent calls, and so will proofs maybe? I don't know enough about how the latter is implemented. Quite a bit more complicated than a once-off query though.

Much simpler to instead throw an error if the caller requested too much data.
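A hypothetical shape for such a token (names are illustrative, not a proposed API):

```rust
/// Sketch of the pagination token described above: it records both how far
/// we got through the matching nullifiers and the block height at which that
/// snapshot was taken, so the next request can first re-check the
/// already-covered records for blocks created after `as_of_block`.
struct PaginationToken {
    /// Number of matching nullifier records already returned.
    records_processed: u64,
    /// Chain tip at the time the previous page was produced.
    as_of_block: u32,
}
```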
I think this depends on how we decide to paginate. For example, we could do it based on block numbers and return nullifiers that were created between specified block bounds.

One approach I was thinking about more generally is to support "natural" pagination based on "epochs" (not just for this endpoint, but for other endpoints as well, and this endpoint may be a good one to think through the general idea). This would work like so:

First, we define an epoch as a range of 2^16 blocks (i.e., the epoch of a block is the upper 16 bits of its block number).

Then, we change the response message to look like so:

```proto
message CheckNullifiersByPrefixResponse {
    // List of nullifiers created between `request.start_block_num` and the last block of the epoch
    // defined by `request.start_block_num`.
    repeated NullifierUpdate nullifiers = 1;

    // Number of the latest block in the chain.
    fixed32 chain_tip = 3;
}
```

The core idea here is that the response will contain data for nullifiers created between `request.start_block_num` and the last block of the epoch defined by `request.start_block_num`.

The user, then, will keep making requests, setting `start_block_num` to the first block of the next epoch, until the chain tip is reached:

```rust
let mut start_block_num = <initial value for the first request>;
let nullifiers = vec![<list of nullifier prefixes>];

loop {
    let response = check_nullifiers_by_prefix(&nullifiers, start_block_num);
    todo!("process the response");

    let epoch = epoch_of(start_block_num);
    if epoch == epoch_of(response.chain_tip) {
        // if we got to the end of the chain, break out of the loop
        break;
    } else {
        start_block_num = first_block_of(epoch + 1);
    }
}

fn epoch_of(block_num: u32) -> u16 {
    (block_num >> 16) as u16
}

fn first_block_of(epoch: u16) -> u32 {
    (epoch as u32) << 16
}
```

The main benefit of this approach is that it should be pretty simple to implement. It also naturally supports sharding of the underlying data by epoch (this is not something we need now, but I anticipate that we probably will need it in the future). The main drawback is that we don't fix the amount of data per response: some responses may be empty while others may still contain quite a bit of data. However, for this specific endpoint, I don't think this will be an issue, as nullifiers should be randomly distributed and thus variability across responses should be relatively low.

We can also implement the approach suggested in #174 and stream all messages to the user to eliminate the need for many round trips (basically, the node would run the loop internally and stream messages to the user until the chain tip is reached).
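To make the epoch boundaries concrete, here is how the two helpers above behave at the edges (a quick check under the 16-bit epoch definition implied by the code; the helpers are repeated for self-containment):

```rust
fn epoch_of(block_num: u32) -> u16 {
    (block_num >> 16) as u16
}

fn first_block_of(epoch: u16) -> u32 {
    (epoch as u32) << 16
}

fn main() {
    // Epoch 0 spans blocks 0..=65_535; epoch 1 starts at block 65_536.
    assert_eq!(epoch_of(0), 0);
    assert_eq!(epoch_of(65_535), 0);
    assert_eq!(epoch_of(65_536), 1);
    assert_eq!(first_block_of(1), 65_536);

    // A request with start_block_num = 70_000 covers the rest of epoch 1,
    // i.e., blocks 70_000..=131_071.
    assert_eq!(first_block_of(epoch_of(70_000) + 1) - 1, 131_071);
}
```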
Based on some more thinking described in 0xPolygonMiden/miden-client#405 (comment), it seems like we may not need a time-bounded endpoint. Instead, checking nullifiers "across all time" may be a better option - but we would do it by prefix rather than for a specific nullifier. So:

```proto
message CheckNullifiersByPrefixRequest {
    // List of nullifiers to check. Each nullifier is specified by its 16-bit prefix.
    repeated uint32 nullifiers = 1;
}

message CheckNullifiersByPrefixResponse {
    // List of nullifiers matching the 16-bit prefixes specified in the request.
    repeated NullifierUpdate nullifiers = 1;
}
```

We could put a pretty aggressive limit on the number of nullifier prefixes that can be submitted with the request to avoid dealing with pagination (we could even allow just a single prefix per request to simplify things). The response sizes should be manageable until we get to about 100 TPS, and after that we can bump up the prefix size to 20 or 24 bits.
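On the client side, the flow would presumably look something like the following sketch. The `Digest` layout and the choice of which 16 bits form the prefix are assumptions for illustration, not the actual miden implementation:

```rust
/// Hypothetical 32-byte nullifier digest.
type Digest = [u8; 32];

/// Assumed prefix derivation: the most significant 16 bits of the digest.
fn prefix_16(nullifier: &Digest) -> u32 {
    u32::from(u16::from_be_bytes([nullifier[0], nullifier[1]]))
}

/// Build the request payload: one 16-bit prefix per nullifier of interest.
fn request_prefixes(mine: &[Digest]) -> Vec<u32> {
    mine.iter().map(prefix_16).collect()
}

/// Filter the returned (nullifier, block_num) records down to exact matches,
/// discarding records that merely share a prefix with a nullifier we track.
fn exact_matches<'a>(mine: &[Digest], returned: &'a [(Digest, u32)]) -> Vec<&'a (Digest, u32)> {
    returned.iter().filter(|(n, _)| mine.contains(n)).collect()
}
```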
One other thought on this: maybe the request should look like:

```proto
message CheckNullifiersByPrefixRequest {
    // Number of bits used for nullifier prefix.
    uint32 prefix_len = 1;

    // List of nullifiers to check. Each nullifier is specified by its prefix with length equal
    // to `prefix_len`.
    repeated uint32 nullifiers = 2;
}
```

Right now, the only valid value for `prefix_len` would be 16, but this leaves room to support longer prefixes in the future.
How should the endpoint handle requests where `prefix_len` is not 16?
You should reject non-16-bit requests, I think. Or alternatively, just ignore the value for now and assume it said 16. Once we have support for variable prefix lengths, we would then switch this behaviour without breaking the API. Though I do question whether changing behaviour like that doesn't imply a breaking change despite not technically changing the API :D

The more I think about it in the context of future-proofing though, the more I would lean towards accepting any value and, for now, just assuming it "was" 16. That way no-one's code breaks when we do actually implement this correctly. The only thing that happens now is that the caller gets a different level of security than requested.
I would reject any non-16-bit request for now. Adding support for more prefix lengths in the future should be a non-breaking change (unless we prohibit 16-bit prefixes). We can also discuss later on whether we want to allow any value here or limit it to a small set of possible values (e.g., 16, 20, 24).
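A minimal sketch of that validation, under the "reject anything but 16 for now" approach (hypothetical helper, not the actual miden-node code); growing the allowed set later to e.g. {16, 20, 24} would then be a non-breaking change:

```rust
/// Reject any request whose `prefix_len` is not in the supported set.
fn validate_prefix_len(prefix_len: u32) -> Result<(), String> {
    // Only 16-bit prefixes are accepted for now.
    const SUPPORTED: &[u32] = &[16];
    if SUPPORTED.contains(&prefix_len) {
        Ok(())
    } else {
        Err(format!(
            "unsupported prefix_len: {}; supported values: {:?}",
            prefix_len, SUPPORTED
        ))
    }
}
```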
Closed by #419.
Originally posted by @bobbinth in 0xPolygonMiden/miden-client#405 (comment)