Introduce CommitOnly variant of Inputs #3419

Merged Sep 7, 2020 (16 commits). Changes shown from 15 commits.
48 changes: 24 additions & 24 deletions chain/src/chain.rs
@@ -34,7 +34,7 @@ use crate::types::{
BlockStatus, ChainAdapter, CommitPos, NoStatus, Options, Tip, TxHashsetWriteStatus,
};
use crate::util::secp::pedersen::{Commitment, RangeProof};
use crate::util::RwLock;
use crate::{util::RwLock, ChainStore};
use grin_store::Error::NotFoundErr;
use std::collections::HashMap;
use std::fs::{self, File};
@@ -171,6 +171,10 @@ impl Chain {
) -> Result<Chain, Error> {
let store = Arc::new(store::ChainStore::new(&db_root)?);

// DB migrations to be run prior to the chain being used.
// Migrate full blocks to protocol version v3.
Chain::migrate_db_v2_v3(&store)?;
Member: Any reason why we don't need a lexical scope anymore?

Member (Author): It was purely for visual/readability purposes before. It was not needed at the code level.


// open the txhashset, creating a new one if necessary
let mut txhashset = txhashset::TxHashSet::open(db_root.clone(), store.clone(), None)?;

@@ -218,12 +222,6 @@ impl Chain {
genesis: genesis.header,
};

// DB migrations to be run prior to the chain being used.
{
// Migrate full blocks to protocol version v2.
chain.migrate_db_v1_v2()?;
}

chain.log_heads()?;

Ok(chain)
@@ -275,11 +273,12 @@ impl Chain {
/// We also need to support relaying blocks with FeaturesAndCommit inputs to peers.
/// So we need a way to convert blocks from CommitOnly to FeaturesAndCommit.
/// Validating the inputs against the utxo_view allows us to look the outputs up.
fn convert_block_v2(&self, block: Block) -> Result<Block, Error> {
pub fn convert_block_v2(&self, block: Block) -> Result<Block, Error> {
debug!(
"convert_block_v2: {} at {}",
"convert_block_v2: {} at {} ({} -> v2)",
block.header.hash(),
block.header.height
block.header.height,
block.inputs().version_str(),
);

if block.inputs().is_empty() {
@@ -291,18 +290,19 @@

let mut header_pmmr = self.header_pmmr.write();
let mut txhashset = self.txhashset.write();
let outputs =
let inputs: Vec<_> =
Member: I would find it easier to read if inputs was named unspent_outputs.

txhashset::extending_readonly(&mut header_pmmr, &mut txhashset, |ext, batch| {
let previous_header = batch.get_previous_header(&block.header)?;
pipe::rewind_and_apply_fork(&previous_header, ext, batch)?;
ext.extension
.utxo_view(ext.header_extension)
.validate_inputs(block.inputs(), batch)
.map(|outputs| outputs.into_iter().map(|(out, _)| out).collect())
})?;
let outputs: Vec<_> = outputs.into_iter().map(|(out, _)| out).collect();
let inputs = inputs.as_slice().into();
Ok(Block {
header: block.header,
body: block.body.replace_inputs(outputs.into()),
body: block.body.replace_inputs(inputs),
})
}

@@ -397,8 +397,9 @@ impl Chain {
// Only do this once we know the header PoW is valid.
self.check_orphan(&b, opts)?;

// Convert block to FeaturesAndCommit inputs.
// We know this block is not an orphan and header is valid at this point.
// We can only reliably convert to "v2" if not an orphan (may spend output from previous block).
// We convert from "v3" to "v2" by looking up outputs to be spent.
// This conversion also ensures a block received in "v2" has valid input features (prevents malleability).
let b = self.convert_block_v2(b)?;

let (maybe_new_head, prev_head) = {
@@ -662,12 +663,12 @@

/// Verify we are not attempting to spend a coinbase output
/// that has not yet sufficiently matured.
pub fn verify_coinbase_maturity(&self, tx: &Transaction) -> Result<(), Error> {
pub fn verify_coinbase_maturity(&self, inputs: Inputs) -> Result<(), Error> {
Member: Wondering if it is any better to change the function signature to:

pub fn verify_coinbase_maturity(&self, inputs: &Inputs) -> Result<(), Error> {

In the current case Rust makes a copy of the Inputs, no?

Member (Author): 👍 I think this makes sense. Same for verify_inputs().

let height = self.next_block_height()?;
let header_pmmr = self.header_pmmr.read();
let txhashset = self.txhashset.read();
txhashset::utxo_view(&header_pmmr, &txhashset, |utxo, batch| {
utxo.verify_coinbase_maturity(&tx.inputs(), height, batch)?;
utxo.verify_coinbase_maturity(inputs, height, batch)?;
Ok(())
})
}
@@ -1403,14 +1404,13 @@ impl Chain {
self.header_pmmr.read().get_header_hash_by_height(height)
}

/// Migrate our local db from v1 to v2.
/// This covers blocks which themselves contain transactions.
/// Transaction kernels changed in v2 due to "variable size kernels".
fn migrate_db_v1_v2(&self) -> Result<(), Error> {
let store_v1 = self.store.with_version(ProtocolVersion(1));
let batch = store_v1.batch()?;
/// Migrate our local db from v2 to v3.
/// "commit only" inputs.
fn migrate_db_v2_v3(store: &ChainStore) -> Result<(), Error> {
let store_v2 = store.with_version(ProtocolVersion(2));
let batch = store_v2.batch()?;
for (_, block) in batch.blocks_iter()? {
batch.migrate_block(&block, ProtocolVersion(2))?;
batch.migrate_block(&block, ProtocolVersion(3))?;
}
batch.commit()?;
Ok(())
5 changes: 3 additions & 2 deletions chain/src/pipe.rs
@@ -84,12 +84,13 @@ pub fn process_block(
ctx: &mut BlockContext<'_>,
) -> Result<(Option<Tip>, BlockHeader), Error> {
debug!(
"pipe: process_block {} at {} [in/out/kern: {}/{}/{}]",
"pipe: process_block {} at {} [in/out/kern: {}/{}/{}] ({})",
b.hash(),
b.header.height,
b.inputs().len(),
b.outputs().len(),
b.kernels().len(),
b.inputs().version_str(),
);

// Read current chain head from db via the batch.
@@ -423,7 +424,7 @@ fn verify_coinbase_maturity(
let header_extension = &ext.header_extension;
extension
.utxo_view(header_extension)
.verify_coinbase_maturity(&block.inputs(), block.header.height, batch)
.verify_coinbase_maturity(block.inputs(), block.header.height, batch)
}

/// Verify kernel sums across the full utxo and kernel sets based on block_sums
9 changes: 8 additions & 1 deletion chain/src/store.rs
@@ -206,6 +206,13 @@ impl<'a> Batch<'a> {
/// Save the block to the db.
/// Note: the block header is not saved to the db here, assumes this has already been done.
pub fn save_block(&self, b: &Block) -> Result<(), Error> {
debug!(
"save_block: {} at {} ({} -> v{})",
b.header.hash(),
b.header.height,
b.inputs().version_str(),
self.db.protocol_version(),
);
self.db.put_ser(&to_key(BLOCK_PREFIX, b.hash())[..], b)?;
Ok(())
}
@@ -222,7 +229,7 @@
/// Block may have been read using a previous protocol version but we do not actually care.
pub fn migrate_block(&self, b: &Block, version: ProtocolVersion) -> Result<(), Error> {
self.db
.put_ser_with_version(&to_key(BLOCK_PREFIX, &mut b.hash())[..], b, version)?;
.put_ser_with_version(&to_key(BLOCK_PREFIX, b.hash())[..], b, version)?;
Ok(())
}

9 changes: 5 additions & 4 deletions chain/src/txhashset/txhashset.rs
@@ -1371,13 +1371,14 @@ impl<'a> Extension<'a> {
}

// Update output_pos based on "unspending" all spent pos from this block.
// This is necessary to ensure the output_pos index correclty reflects a
// This is necessary to ensure the output_pos index correctly reflects a
// reused output commitment. For example an output at pos 1, spent, reused at pos 2.
// The output_pos index should be updated to reflect the old pos 1 when unspent.
if let Ok(spent) = spent {
let inputs: Vec<_> = block.inputs().into();
for (input, pos) in inputs.iter().zip(spent) {
batch.save_output_pos_height(&input.commitment(), pos)?;
for pos in spent {
if let Some(out) = self.output_pmmr.get_data(pos.pos) {
batch.save_output_pos_height(&out.commitment(), pos)?;
}
}
}

49 changes: 39 additions & 10 deletions chain/src/txhashset/utxo_view.rs
@@ -82,6 +82,16 @@ impl<'a> UTXOView<'a> {
batch: &Batch<'_>,
) -> Result<Vec<(OutputIdentifier, CommitPos)>, Error> {
match inputs {
Inputs::CommitOnly(inputs) => {
let outputs_spent: Result<Vec<_>, Error> = inputs
.iter()
.map(|input| {
self.validate_input(input.commitment(), batch)
.and_then(|(out, pos)| Ok((out, pos)))
})
.collect();
outputs_spent
}
Inputs::FeaturesAndCommit(inputs) => {
let outputs_spent: Result<Vec<_>, Error> = inputs
.iter()
@@ -93,6 +103,7 @@
if out == input.into() {
Ok((out, pos))
} else {
error!("input mismatch: {:?}, {:?}, {:?}", out, pos, input);
Err(ErrorKind::Other("input mismatch".into()).into())
}
Comment on lines 103 to 108
Member: So is this check redundant based on what we have at lines 128-136, or is the OutputFeatures check important (for the Inputs::FeaturesAndCommit variant only)?

Member (Author, @antiochp, Sep 7, 2020): Yes, the latter. We need to ensure that if the features are passed in then they are correct. For the CommitOnly variant we can safely skip this.

This is to maintain existing behavior and to avoid introducing malleability with the features being effectively ignored.

})
@@ -114,7 +125,15 @@
let pos = batch.get_output_pos_height(&input)?;
if let Some(pos) = pos {
if let Some(out) = self.output_pmmr.get_data(pos.pos) {
return Ok((out, pos));
if out.commitment() == input {
return Ok((out, pos));
} else {
error!("input mismatch: {:?}, {:?}, {:?}", out, pos, input);
return Err(ErrorKind::Other(
"input mismatch (output_pos index mismatch?)".into(),
)
.into());
}
}
}
Err(ErrorKind::AlreadySpent(input).into())
@@ -147,21 +166,31 @@
/// that have not sufficiently matured.
pub fn verify_coinbase_maturity(
&self,
inputs: &Inputs,
inputs: Inputs,
height: u64,
batch: &Batch<'_>,
) -> Result<(), Error> {
// Find the greatest output pos of any coinbase
// outputs we are attempting to spend.
let inputs: Vec<_> = inputs.into();
let pos = inputs

// Lookup the outputs being spent.
let spent: Result<Vec<_>, _> = inputs
.iter()
.filter(|x| x.is_coinbase())
.filter_map(|x| batch.get_output_pos(&x.commitment()).ok())
.max()
.unwrap_or(0);
.map(|x| self.validate_input(x.commitment(), batch))
.collect();

if pos > 0 {
// Find the max pos of any coinbase being spent.
let pos = spent?
.iter()
.filter_map(|(out, pos)| {
if out.features.is_coinbase() {
Some(pos.pos)
} else {
None
}
})
.max();

if let Some(pos) = pos {
// If we have not yet reached 1440 blocks then
// we can fail immediately as coinbase cannot be mature.
if height < global::coinbase_maturity() {
8 changes: 5 additions & 3 deletions chain/tests/test_coinbase_maturity.rs
@@ -122,7 +122,7 @@ fn test_coinbase_maturity() {

// Confirm the tx attempting to spend the coinbase output
// is not valid at the current block height given the current chain state.
match chain.verify_coinbase_maturity(&coinbase_txn) {
match chain.verify_coinbase_maturity(coinbase_txn.inputs()) {
Ok(_) => {}
Err(e) => match e.kind() {
ErrorKind::ImmatureCoinbase => {}
@@ -204,7 +204,7 @@

// Confirm the tx attempting to spend the coinbase output
// is not valid at the current block height given the current chain state.
match chain.verify_coinbase_maturity(&coinbase_txn) {
match chain.verify_coinbase_maturity(coinbase_txn.inputs()) {
Ok(_) => {}
Err(e) => match e.kind() {
ErrorKind::ImmatureCoinbase => {}
@@ -254,7 +254,9 @@

// Confirm the tx spending the coinbase output is now valid.
// The coinbase output has matured sufficiently based on current chain state.
chain.verify_coinbase_maturity(&coinbase_txn).unwrap();
chain
.verify_coinbase_maturity(coinbase_txn.inputs())
.unwrap();

let txs = &[coinbase_txn];
let fees = txs.iter().map(|tx| tx.fee()).sum();
7 changes: 3 additions & 4 deletions core/src/core/block.rs
@@ -272,7 +272,7 @@ impl PMMRable for BlockHeader {
/// Serialization of a block header
impl Writeable for BlockHeader {
fn write<W: Writer>(&self, writer: &mut W) -> Result<(), ser::Error> {
if writer.serialization_mode() != ser::SerializationMode::Hash {
if !writer.serialization_mode().is_hash_mode() {
self.write_pre_pow(writer)?;
}
self.pow.write(writer)?;
@@ -507,8 +507,7 @@
impl Writeable for Block {
fn write<W: Writer>(&self, writer: &mut W) -> Result<(), ser::Error> {
self.header.write(writer)?;

if writer.serialization_mode() != ser::SerializationMode::Hash {
if !writer.serialization_mode().is_hash_mode() {
self.body.write(writer)?;
}
Ok(())
@@ -609,7 +608,7 @@ impl Block {
kernels.extend_from_slice(cb.kern_full());

// Initialize a tx body and sort everything.
let body = TransactionBody::init(inputs, &outputs, &kernels, false)?;
let body = TransactionBody::init(inputs.into(), &outputs, &kernels, false)?;

// Finally return the full block.
// Note: we have not actually validated the block here,