fix: prevent PXE from making historical queries during note discovery (#11406)

This fixes an issue introduced in
#10651 which
resulted in PXE querying note hash tree indexes at the block number in
which the corresponding notes were created, accidentally turning these
into historical queries that failed in long-running environments such
as masternet, but not in any of the e2e tests.

I addressed this by querying at the latest locally synced block number
instead, which in practice should be the tip of the chain since we sync
right before simulations. I explicitly avoided using `'latest'` to
prevent adding notes that don't exist in the locally synced state, since
a) that's not properly supported at the moment, b) such an instance
would likely be indicative of a bug in note discovery, and c) we
eventually want to make queries based on the block _hash_ so as to
detect network reorgs and fail loudly.
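
The core of the change (an excerpt-style sketch lifted from the diff in
`yarn-project/pxe/src/simulator_oracle/index.ts` below, not standalone
runnable code) looks like this:

```typescript
// Sketch of the fixed lookup (simplified from the actual diff further down).
// Previously this call used the block number from the note's tx receipt, i.e. the
// block in which the note was created, which becomes a failing historical query
// once the node prunes that block's state.
const syncedBlockNumber = await this.db.getBlockNumber();
const uniqueNoteHashTreeIndex = (
  await this.aztecNode.findLeavesIndexes(syncedBlockNumber!, MerkleTreeId.NOTE_HASH_TREE, [uniqueNoteHash])
)[0];
if (uniqueNoteHashTreeIndex === undefined) {
  // The note doesn't exist in the locally synced state, so refuse to store it:
  // we'd otherwise be unable to prove against it.
  throw new Error(`Note hash not found in the note hash tree at block ${syncedBlockNumber}`);
}
```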

I also created a new e2e test which creates a note in an old block,
forces it to be pruned, and then has PXE discover and use that note.
Triggering this bug required creating a note and then mining upwards of
64 blocks on top of it with proving enabled, which we only ever did in
live networks.
nventuro authored and Maddiaa0 committed Jan 22, 2025
1 parent 564303e commit e55e8bb
Showing 4 changed files with 131 additions and 5 deletions.
1 change: 1 addition & 0 deletions scripts/ci/get_e2e_jobs.sh
@@ -35,6 +35,7 @@ allow_list=(
"e2e_max_block_number"
"e2e_nested_contract"
"e2e_ordering"
"e2e_pruned_blocks"
"e2e_static_calls"
"integration_l1_publisher"
"e2e_cheat_codes"
1 change: 1 addition & 0 deletions yarn-project/end-to-end/scripts/e2e_test_config.yml
@@ -63,6 +63,7 @@ tests:
test_path: 'e2e_prover/full.test.ts'
env:
HARDWARE_CONCURRENCY: '32'
e2e_pruned_blocks: {}
e2e_public_testnet: {}
e2e_pxe:
use_compose: true
115 changes: 115 additions & 0 deletions yarn-project/end-to-end/src/e2e_pruned_blocks.test.ts
@@ -0,0 +1,115 @@
import {
type AccountWallet,
type AztecAddress,
type AztecNode,
type Logger,
MerkleTreeId,
type Wallet,
sleep,
} from '@aztec/aztec.js';
import { type CheatCodes } from '@aztec/aztec.js/utils';
import { TokenContract } from '@aztec/noir-contracts.js/Token';

import { setup } from './fixtures/utils.js';

// Tests PXE interacting with a node that has pruned relevant blocks, preventing usage of the archive API (which PXE
// should not rely on).
describe('e2e_pruned_blocks', () => {
let logger: Logger;
let teardown: () => Promise<void>;

let aztecNode: AztecNode;
let cheatCodes: CheatCodes;

let wallets: AccountWallet[];

let adminWallet: Wallet;
let senderWallet: Wallet;

let admin: AztecAddress;
let sender: AztecAddress;
let recipient: AztecAddress;

let token: TokenContract;

const MINT_AMOUNT = 1000n;

// Don't make this value too high since we need to mine this number of empty blocks, which is relatively slow.
const WORLD_STATE_BLOCK_HISTORY = 2;
const WORLD_STATE_CHECK_INTERVAL_MS = 300;

beforeAll(async () => {
({ aztecNode, cheatCodes, logger, teardown, wallets } = await setup(3, {
worldStateBlockHistory: WORLD_STATE_BLOCK_HISTORY,
worldStateBlockCheckIntervalMS: WORLD_STATE_CHECK_INTERVAL_MS,
}));

[adminWallet, senderWallet] = wallets;
[admin, sender, recipient] = wallets.map(a => a.getAddress());

token = await TokenContract.deploy(adminWallet, admin, 'TEST', '$TST', 18).send().deployed();
logger.info(`L2 token contract deployed at ${token.address}`);
});

afterAll(() => teardown());

async function mineBlocks(blocks: number): Promise<void> {
// There's currently no cheatcode for mining blocks, so we just create a couple of dummy ones by calling a view function.
for (let i = 0; i < blocks; i++) {
await token.methods.private_get_name().send().wait();
}
}

it('can discover and use notes created in both pruned and available blocks', async () => {
// This is the only test in this suite so it doesn't seem worthwhile to worry too much about reusable setup etc. For
// simplicity's sake I just did the entire thing here.

// We are going to mint two notes for the sender, each for half of a total amount, and then have the sender combine
// both in a transfer to the recipient. The catch is that enough blocks will be mined between the first and second
// mint transaction that the node will drop the block corresponding to the first mint, resulting in errors if PXE
// tried to access any historical information related to it (which it shouldn't).

const firstMintReceipt = await token
.withWallet(adminWallet)
.methods.mint_to_private(admin, sender, MINT_AMOUNT / 2n)
.send()
.wait();
const firstMintTxEffect = await aztecNode.getTxEffect(firstMintReceipt.txHash);

// mint_to_private should create just one new note with the minted amount
expect(firstMintTxEffect?.data.noteHashes.length).toEqual(1);
const mintedNote = firstMintTxEffect?.data.noteHashes[0];

// We now make a historical query for the leaf index at the block number in which this first note was created and
// check that we get a valid result, which indirectly means that the queried block has not yet been pruned.
expect(
(await aztecNode.findLeavesIndexes(firstMintReceipt.blockNumber!, MerkleTreeId.NOTE_HASH_TREE, [mintedNote!]))[0],
).toBeGreaterThan(0);

// We now mine dummy blocks, mark them as proven and wait for the node to process them, which should result in older
// blocks (notably the one with the minted note) being pruned.
await mineBlocks(WORLD_STATE_BLOCK_HISTORY);
await cheatCodes.rollup.markAsProven();
await sleep(WORLD_STATE_CHECK_INTERVAL_MS * 2);

// The same historical query we performed before should now fail since this block is not available anymore.
await expect(
aztecNode.findLeavesIndexes(firstMintReceipt.blockNumber!, MerkleTreeId.NOTE_HASH_TREE, [mintedNote!]),
).rejects.toThrow('Unable to find leaf');

// We've completed the setup we were interested in, and can now simply mint the second half of the amount, transfer
// the full amount to the recipient (which will require the sender to discover and prove both the old and new notes)
// and check that everything worked as expected.

await token
.withWallet(adminWallet)
.methods.mint_to_private(admin, sender, MINT_AMOUNT / 2n)
.send()
.wait();

await token.withWallet(senderWallet).methods.transfer(recipient, MINT_AMOUNT).send().wait();

expect(await token.methods.balance_of_private(recipient).simulate()).toEqual(MINT_AMOUNT);
expect(await token.methods.balance_of_private(sender).simulate()).toEqual(0n);
});
});
19 changes: 14 additions & 5 deletions yarn-project/pxe/src/simulator_oracle/index.ts
@@ -743,21 +743,30 @@ export class SimulatorOracle implements DBOracle {
txHash: Fr,
recipient: AztecAddress,
): Promise<NoteDao> {
// We need to validate that the note does indeed exist in the world state to avoid adding notes that are then
// impossible to prove.

const receipt = await this.aztecNode.getTxReceipt(new TxHash(txHash));
if (receipt === undefined) {
throw new Error(`Failed to fetch tx receipt for tx hash ${txHash} when searching for note hashes`);
}
const { blockNumber, blockHash } = receipt;

// Siloed and unique hashes are computed by us instead of relying on values sent by the contract to make sure
// we're not e.g. storing notes that belong to some other contract, which would constitute a security breach.
const uniqueNoteHash = computeUniqueNoteHash(nonce, siloNoteHash(contractAddress, noteHash));
const siloedNullifier = siloNullifier(contractAddress, nullifier);

// We store notes by their index in the global note hash tree, which has the convenient side effect of validating
// note existence in said tree. Note that while this is technically a historical query, we perform it at the latest
// locally synced block number which *should* be recent enough to be available. We avoid querying at 'latest' since
// we want to avoid accidentally processing notes that only exist ahead in time of the locally synced state.
const syncedBlockNumber = await this.db.getBlockNumber();
const uniqueNoteHashTreeIndex = (
await this.aztecNode.findLeavesIndexes(blockNumber!, MerkleTreeId.NOTE_HASH_TREE, [uniqueNoteHash])
await this.aztecNode.findLeavesIndexes(syncedBlockNumber!, MerkleTreeId.NOTE_HASH_TREE, [uniqueNoteHash])
)[0];
if (uniqueNoteHashTreeIndex === undefined) {
throw new Error(
`Note hash ${noteHash} (uniqued as ${uniqueNoteHash}) is not present on the tree at block ${blockNumber} (from tx ${txHash})`,
`Note hash ${noteHash} (uniqued as ${uniqueNoteHash}) is not present on the tree at block ${syncedBlockNumber} (from tx ${txHash})`,
);
}

@@ -769,8 +778,8 @@
noteHash,
siloedNullifier,
new TxHash(txHash),
blockNumber!,
blockHash!.toString(),
receipt.blockNumber!,
receipt.blockHash!.toString(),
uniqueNoteHashTreeIndex,
await recipient.toAddressPoint(),
NoteSelector.empty(), // todo: remove
