merge before audit (#3431)
* Three new globals to help contract-to-contract usability

* detritus

* Check error

* doc comments

* Impose limits on the entire "tree" of inner calls.

This also makes testing of multiple app calls in a group more
realistic by creating the EvalParams with the real constructor, thus
getting the pooling behavior tested here without playing games
manipulating the ep after construction.

* Move appID tracking into EvalContext, out of LedgerForLogic

This change increases the separation between AVM execution and the
ledger being used to lookup resources.  Previously, the ledger kept
track of the appID being executed, to offer a narrower interface to
those resources. But now, with app-to-app calls, the appID being
executed must change, and the AVM needs to maintain the current appID.
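
A minimal sketch of the shape of this change; the field and interface names below are illustrative (the logic package itself is not visible in this commit's diff), not the exact code:

```go
package logic

import "github.com/algorand/go-algorand/data/basics"

// EvalContext owns the ID of the app currently executing. With
// app-to-app calls this value changes as inner calls enter and exit,
// so it belongs to the AVM's execution state, not to the ledger.
type EvalContext struct {
	appID basics.AppIndex
}

// LedgerForLogic no longer remembers which app is running; resource
// lookups take the app ID explicitly (hypothetical narrowed method).
type LedgerForLogic interface {
	AppParams(aidx basics.AppIndex) (basics.AppParams, error)
}
```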

* Stupid linter

* Fix unit tests error messages

* Allow access to resources created in the same transaction group

The method will be reworked, but the tests are correct and we want to
get them visible to the team.

* Access to apps created in group

Also adds some tests, currently skipped, covering
- access to addresses of newly created apps
- use of gaid in inner transactions

Both require some work to implement the thing being tested.

* Remove tracked created mechanism in favor of examining applydata.

* Allow v6 AVM code to use in-group created asas, apps (& their accts)

One exception - apps cannot mutate (put or del) keys from the app
accounts, because EvalDelta cannot encode such changes.

* lint docs

* typo

* The review dog needs obedience training.

* Use one EvalParams for logic evals, another for apps in dry run

We used to use one ep per transaction, shared between sig and
app. But the new model of ep usage is to keep using one while
evaluating an entire group.

The app ep is now built by logic.NewAppEvalParams which, hopefully, will
prevent some bugs when we change something in the EvalParams and don't
reflect it in what was a "raw" EvalParams construction in debugger and
dry run.

* Use logic.NewAppEvalParams to decrease copying and bugs in debugger

* Simplify use of NewEvalParams. No more nil return when no apps.

This way, NewEvalParams can be used for all creations of EvalParams,
whether they are intended for logicsig or app use, greatly simplifying
the way we make them for use by dry run or debugger (where they serve
double duty).

* Remove explicit PastSideEffects handling in tealdbg

* Always create EvalParams to evaluate a transaction group.

We used to have an optimization to avoid creating EvalParams unless
there was an app call in the transaction group.  But the interface to
allow transaction processing to communicate changes into the
EvalParams is complicated by that (we must only do it if there is
one!).

This also allows us to use the same construction function for eps
created for app and logic evaluation, simplifying dry-run and
debugger.

The optimization is less needed now anyway:
1) The ep is now shared for the whole group, so it's only one.
2) The ep is smaller now, as we only store nil pointers instead of
larger scratch space objects for non-app calls.
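
A rough sketch of the evaluation flow this enables; `NewEvalParams`'s exact signature is not visible in this commit, so the signature below and the `evalTxn` helper are hypothetical:

```go
package eval

import (
	"github.com/algorand/go-algorand/config"
	"github.com/algorand/go-algorand/data/transactions"
	"github.com/algorand/go-algorand/data/transactions/logic"
)

// evalTxn stands in for the per-transaction evaluation entry point.
func evalTxn(ep *logic.EvalParams, gi int) error { return nil }

// evalGroup builds one EvalParams for the entire group, so pooled
// budgets and shared side effects travel with it; both logicsig and
// app evaluation reuse the same ep.
func evalGroup(group []transactions.SignedTxnWithAD, proto *config.ConsensusParams) error {
	ep := logic.NewEvalParams(group, proto) // sketched signature
	for gi := range group {
		if err := evalTxn(ep, gi); err != nil {
			return err
		}
	}
	return nil
}
```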

* Correct mistaken commit

* Spec improvements

* More spec improvements, including resource "availability"

* Recursively return inner transaction tree

* Lint

* No need for ConfirmedRound, so don't deref a nil pointer!

* license check

* Shut up, dawg.

* testing: Fix unit test TestAsyncTelemetryHook_QueueDepth (#2685)

Fix the unit test TestAsyncTelemetryHook_QueueDepth

* Deprecate `FastPartitionRecovery` from `ConsensusParams` (#3386)

## Summary

This PR removes the `FastPartitionRecovery` option from consensus parameters. The code now acts as if this value is set to true.

Closes algorand/go-algorand-internal#1830.

## Test Plan

None.

* base64 merge cleanup

* Remaking a PR for CI (#3398)

* Allow setting manager, reserve, freeze, and clawback at goal asset create

* Add e2e tests

* Add more tests for goal asset create flags

Co-authored-by: Fionna <[email protected]>

* Remove the extraneous field type arrays.

* bsqrt
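
bsqrt takes a byte string interpreted as a big-endian unsigned integer and pushes its integer square root. A minimal Go sketch of those semantics, assuming the convention of the AVM's other b-prefixed byte-math opcodes:

```go
package main

import (
	"fmt"
	"math/big"
)

// bsqrt interprets b as a big-endian unsigned integer n and returns
// the largest x with x*x <= n, encoded the same way. math/big's Sqrt
// computes exactly that floor square root.
func bsqrt(b []byte) []byte {
	n := new(big.Int).SetBytes(b)
	return n.Sqrt(n).Bytes()
}

func main() {
	fmt.Printf("%x\n", bsqrt([]byte{0x10})) // 16 -> 4, prints "04"
}
```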

* acct_holding_get, a unified opcode for account field access

* Thanks, dawg

* [Other] CircleCI pipeline change for binary uploads (#3381)

For nightly builds ("rel/nightly"), we want to have deadlock enabled.

For rel/beta and rel/stable, we want to make sure we can build and upload a binary with deadlock disabled so that it can be used for release testing and validation purposes.

* signer.KeyDilution need not depend on config package (#3265)

The crypto package need not depend on config, yet there is currently
an unnecessary dependency on it.

signer.KeyDilution takes `config.ConsensusParams` as an argument just to pick the DefaultKeyDilution from it,
which introduces a dependency from the crypto package to the config package.
Instead, only the DefaultKeyDilution value needs to be passed to signer.KeyDilution.
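
A sketch of the decoupling; the names below approximate the signer code rather than reproduce it:

```go
package merklesignature

// Signer is a stand-in for the real type; keyDilution of 0 means
// "fall back to the network default".
type Signer struct {
	keyDilution uint64
}

// Before (sketched): KeyDilution(params config.ConsensusParams) uint64,
// which forced crypto to import config just to read one default.
// After: the caller extracts DefaultKeyDilution and passes the value.
func (s *Signer) KeyDilution(defaultKeyDilution uint64) uint64 {
	if s.keyDilution != 0 {
		return s.keyDilution
	}
	return defaultKeyDilution
}
```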

* CR and more spec simplification

* algodump is a tcpdump-like tool for algod's network protocol (#3166)

This PR introduces algodump, a tcpdump-like tool for monitoring algod network messages.

* Removing C/crypto dependencies from `data/abi` package (#3375)

* Feature Networks pipeline related changes (#3393)

Added support for not having certain files in the signing script

* e2e test for inner transaction appls

* testing: Add slightly more coverage to TestAcctUpdatesLookupRetry (#3384)

Add slightly more coverage to TestAcctUpdatesLookupRetry

* add context to (most) agreement logged writes (#3411)

The current agreement code writes a `context : agreement` entry on only a subset of the logged messages.
This change extends that entry to the rest, making it easier to pre-process log entries by their corresponding component. The change in this PR is focused on:
1. make sure that the "root" agreement logger always injects the `context : agreement` argument (see the sketch after this list).
2. change the various locations in the agreement code to use the root agreement logger instead of referring to the application-global instance (`logging.Base()`).
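
The mechanism appears in the agreement/trace.go hunk below: the root logger is wrapped once with the field, and every write through it inherits it. A standalone illustration using the same `With` API:

```go
package main

import "github.com/algorand/go-algorand/logging"

func main() {
	// Inject the field once at the root; every message logged through
	// this logger then carries Context=Agreement automatically.
	log := logging.Base().With("Context", "Agreement")
	log.Infof("round %d: entering new period", 42)
}
```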

* network: faster node shutdown (#3416)

During node shutdown, all current outgoing connections are disconnected.
Since these connections are web sockets, they require a close message to be sent.
However, sending this message can take a while, and in situations where the other party has already shut down, we might never get a response. That, in turn, would leave the node waiting until the deadline was reached.

The previous deadline was 5 seconds. This PR changes the deadline during shutdown to 50ms.
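
A sketch of the bounded close handshake described above, assuming a gorilla/websocket-style API (this is not the node's actual shutdown code):

```go
package main

import (
	"time"

	"github.com/gorilla/websocket"
)

// closeConn bounds how long we wait to deliver the close message, so
// a peer that has already gone away cannot stall shutdown.
func closeConn(conn *websocket.Conn, deadline time.Duration) {
	msg := websocket.FormatCloseMessage(websocket.CloseNormalClosure, "shutting down")
	// WriteControl blocks at most until the supplied deadline.
	_ = conn.WriteControl(websocket.CloseMessage, msg, time.Now().Add(deadline))
	_ = conn.Close()
}

// During shutdown the node would use closeConn(conn, 50*time.Millisecond)
// rather than the old 5*time.Second.
```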

* Give max group size * 16 inner txns, regardless of apps present

* Adjust test for allowing 256 inners
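
For reference, the arithmetic behind the 256 figure (constant names sketched for illustration; mainnet's group size limit is 16):

```go
const (
	maxTxGroupSize     = 16 // transactions per group
	innersPerGroupSlot = 16 // pooled allowance contributed per slot
	// 16 * 16 = 256 inner transactions per group, whether or not
	// every slot in the group is an app call.
	maxInnersPerGroup = maxTxGroupSize * innersPerGroupSlot
)
```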

Co-authored-by: Tsachi Herman <[email protected]>
Co-authored-by: Tolik Zinovyev <[email protected]>
Co-authored-by: Jack <[email protected]>
Co-authored-by: Fionna <[email protected]>
Co-authored-by: algobarb <[email protected]>
Co-authored-by: Shant Karakashian <[email protected]>
Co-authored-by: Nickolai Zeldovich <[email protected]>
Co-authored-by: Hang Su <[email protected]>
Co-authored-by: chris erway <[email protected]>
10 people authored Jan 18, 2022
1 parent a106e83 commit 83bccc5
Showing 16 changed files with 82 additions and 76 deletions.
2 changes: 1 addition & 1 deletion agreement/demux.go
@@ -122,7 +122,7 @@ func (d *demux) tokenizeMessages(ctx context.Context, net Network, tag protocol.

o, err := tokenize(raw.Data)
if err != nil {
-logging.Base().Warnf("disconnecting from peer: error decoding message tagged %v: %v", tag, err)
+d.log.Warnf("disconnecting from peer: error decoding message tagged %v: %v", tag, err)
net.Disconnect(raw.MessageHandle)
d.UpdateEventsQueue(eventQueueTokenizing[tag], 0)
continue
12 changes: 4 additions & 8 deletions agreement/listener.go
@@ -16,10 +16,6 @@

package agreement

-import (
-"github.com/algorand/go-algorand/logging"
-)

// A listener is a state machine which can handle events, returning new events.
type listener interface {
// T returns the stateMachineTag describing the listener.
@@ -60,17 +56,17 @@ func (l checkedListener) handle(r routerHandle, p player, in event) event {
errs := l.pre(p, in)
if len(errs) != 0 {
for _, err := range errs {
logging.Base().Errorf("%v: precondition violated: %v", l.T(), err)
r.t.log.Errorf("%v: precondition violated: %v", l.T(), err)
}
logging.Base().Panicf("%v: precondition violated: %v", l.T(), errs[0])
r.t.log.Panicf("%v: precondition violated: %v", l.T(), errs[0])
}
out := l.listener.handle(r, p, in)
errs = l.post(p, in, out)
if len(errs) != 0 {
for _, err := range errs {
logging.Base().Errorf("%v: postcondition violated: %v", l.T(), err)
r.t.log.Errorf("%v: postcondition violated: %v", l.T(), err)
}
logging.Base().Panicf("%v: postcondition violated: %v", l.T(), errs[0])
r.t.log.Panicf("%v: postcondition violated: %v", l.T(), errs[0])
}
return out
}
24 changes: 12 additions & 12 deletions agreement/persistence.go
@@ -82,15 +82,15 @@ func persist(log serviceLogger, crash db.Accessor, Round basics.Round, Period pe
return
}

logging.Base().Errorf("persisting failure: %v", err)
log.Errorf("persisting failure: %v", err)
return
}

// reset deletes the existing recovery state from database.
//
// In case it's unable to clear the Service table, an error would get logged.
func reset(log logging.Logger, crash db.Accessor) {
logging.Base().Infof("reset (agreement): resetting crash state")
log.Infof("reset (agreement): resetting crash state")

err := crash.Atomic(func(ctx context.Context, tx *sql.Tx) (err error) {
// we could not retrieve our state, so wipe it
@@ -99,7 +99,7 @@ func reset(log logging.Logger, crash db.Accessor) {
})

if err != nil {
logging.Base().Warnf("reset (agreement): failed to clear Service table - %v", err)
log.Warnf("reset (agreement): failed to clear Service table - %v", err)
}
}

@@ -124,7 +124,7 @@ func restore(log logging.Logger, crash db.Accessor) (raw []byte, err error) {
if err == nil {
// the above call was completed sucecssfully, which means that we've just created the table ( which wasn't there ! ).
// in that case, the table is guaranteed to be empty, and therefore we can return right here.
logging.Base().Infof("restore (agreement): crash state table initialized")
log.Infof("restore (agreement): crash state table initialized")
err = errNoCrashStateAvailable
return
}
@@ -135,7 +135,7 @@ func restore(log logging.Logger, crash db.Accessor) (raw []byte, err error) {
if !reset {
return
}
logging.Base().Infof("restore (agreement): resetting crash state")
log.Infof("restore (agreement): resetting crash state")

// we could not retrieve our state, so wipe it
_, err = tx.Exec("delete from Service")
@@ -149,12 +149,12 @@ func restore(log logging.Logger, crash db.Accessor) (raw []byte, err error) {
row := tx.QueryRow("select count(*) from Service")
err := row.Scan(&nrows)
if err != nil {
logging.Base().Errorf("restore (agreement): could not query raw state: %v", err)
log.Errorf("restore (agreement): could not query raw state: %v", err)
reset = true
return err
}
if nrows != 1 {
logging.Base().Infof("restore (agreement): crash state not found (n = %d)", nrows)
log.Infof("restore (agreement): crash state not found (n = %d)", nrows)
reset = true
noCrashState = true // this is a normal case (we have leftover crash state from an old round)
return errNoCrashStateAvailable
@@ -163,7 +163,7 @@ func restore(log logging.Logger, crash db.Accessor) (raw []byte, err error) {
row = tx.QueryRow("select data from Service")
err = row.Scan(&raw)
if err != nil {
logging.Base().Errorf("restore (agreement): could not read crash state raw data: %v", err)
log.Errorf("restore (agreement): could not read crash state raw data: %v", err)
reset = true
return err
}
@@ -176,7 +176,7 @@ func restore(log logging.Logger, crash db.Accessor) (raw []byte, err error) {
// decode process the incoming raw bytes array and attempt to reconstruct the agreement state objects.
//
// In all decoding errors, it returns the error code in err
-func decode(raw []byte, t0 timers.Clock) (t timers.Clock, rr rootRouter, p player, a []action, err error) {
+func decode(raw []byte, t0 timers.Clock, log serviceLogger) (t timers.Clock, rr rootRouter, p player, a []action, err error) {
var t2 timers.Clock
var rr2 rootRouter
var p2 player
@@ -185,7 +185,7 @@ func decode(raw []byte, t0 timers.Clock) (t timers.Clock, rr rootRouter, p playe

err = protocol.DecodeReflect(raw, &s)
if err != nil {
logging.Base().Errorf("decode (agreement): error decoding retrieved state (len = %v): %v", len(raw), err)
log.Errorf("decode (agreement): error decoding retrieved state (len = %v): %v", len(raw), err)
return
}

@@ -307,9 +307,9 @@ func (p *asyncPersistenceLoop) loop(ctx context.Context) {
// sanity check; we check it after the fact, since it's not expected to ever happen.
// performance-wise, it takes approximitly 300000ns to execute, and we don't want it to
// block the persist operation.
-_, _, _, _, derr := decode(s.raw, s.clock)
+_, _, _, _, derr := decode(s.raw, s.clock, p.log)
if derr != nil {
logging.Base().Errorf("could not decode own encoded disk state: %v", derr)
p.log.Errorf("could not decode own encoded disk state: %v", derr)
}
}
}
7 changes: 4 additions & 3 deletions agreement/persistence_test.go
@@ -43,7 +43,8 @@ func TestAgreementSerialization(t *testing.T) {
encodedBytes := encode(clock, router, status, a)

t0 := timers.MakeMonotonicClock(time.Date(2000, 0, 0, 0, 0, 0, 0, time.UTC))
-clock2, router2, status2, a2, err := decode(encodedBytes, t0)
+log := makeServiceLogger(logging.Base())
+clock2, router2, status2, a2, err := decode(encodedBytes, t0, log)
require.NoError(t, err)
require.Equalf(t, clock, clock2, "Clock wasn't serialized/deserialized correctly")
require.Equalf(t, router, router2, "Router wasn't serialized/deserialized correctly")
@@ -77,10 +78,10 @@ func BenchmarkAgreementDeserialization(b *testing.B) {

encodedBytes := encode(clock, router, status, a)
t0 := timers.MakeMonotonicClock(time.Date(2000, 0, 0, 0, 0, 0, 0, time.UTC))

+log := makeServiceLogger(logging.Base())
b.ResetTimer()
for n := 0; n < b.N; n++ {
-decode(encodedBytes, t0)
+decode(encodedBytes, t0, log)
}
}

4 changes: 1 addition & 3 deletions agreement/proposalManager.go
@@ -18,8 +18,6 @@ package agreement

import (
"fmt"

"github.com/algorand/go-algorand/logging"
)

// A proposalManager is a proposalMachine which applies relay rules to incoming
@@ -71,7 +69,7 @@ func (m *proposalManager) handle(r routerHandle, p player, e event) event {
r = m.handleNewPeriod(r, p, e.(thresholdEvent))
return emptyEvent{}
}
logging.Base().Panicf("proposalManager: bad event type: observed an event of type %v", e.t())
r.t.log.Panicf("proposalManager: bad event type: observed an event of type %v", e.t())
panic("not reached")
}

6 changes: 2 additions & 4 deletions agreement/proposalStore.go
@@ -18,8 +18,6 @@ package agreement

import (
"fmt"

"github.com/algorand/go-algorand/logging"
)

// An blockAssembler contains the proposal data associated with some
@@ -289,7 +287,7 @@ func (store *proposalStore) handle(r routerHandle, p player, e event) event {
case newRound:
if len(store.Assemblers) > 1 {
// TODO this check is really an implementation invariant; move it into a whitebox test
logging.Base().Panic("too many assemblers")
r.t.log.Panic("too many assemblers")
}
for pv, ea := range store.Assemblers {
if ea.Filled {
@@ -347,7 +345,7 @@ func (store *proposalStore) handle(r routerHandle, p player, e event) event {
se.Payload = ea.Payload
return se
}
logging.Base().Panicf("proposalStore: bad event type: observed an event of type %v", e.t())
r.t.log.Panicf("proposalStore: bad event type: observed an event of type %v", e.t())
panic("not reached")
}

3 changes: 1 addition & 2 deletions agreement/proposalTracker.go
@@ -20,7 +20,6 @@ import (
"fmt"

"github.com/algorand/go-algorand/data/basics"
"github.com/algorand/go-algorand/logging"
)

// A proposalSeeker finds the vote with the lowest credential until freeze() is
@@ -180,7 +179,7 @@ func (t *proposalTracker) handle(r routerHandle, p player, e event) event {
return se
}

logging.Base().Panicf("proposalTracker: bad event type: observed an event of type %v", e.t())
r.t.log.Panicf("proposalTracker: bad event type: observed an event of type %v", e.t())
panic("not reached")
}

1 change: 0 additions & 1 deletion agreement/pseudonode.go
@@ -481,7 +481,6 @@ func (t pseudonodeProposalsTask) execute(verifier *AsyncVoteVerifier, quit chan

payloads, votes := t.node.makeProposals(t.round, t.period, t.participation)
fields := logging.Fields{
"Context": "Agreement",
"Type": logspec.ProposalAssembled.String(),
"ObjectRound": t.round,
"ObjectPeriod": t.period,
4 changes: 2 additions & 2 deletions agreement/service.go
@@ -93,7 +93,7 @@ func MakeService(p Parameters) *Service {

s.parameters = parameters(p)

-s.log = serviceLogger{Logger: p.Logger}
+s.log = makeServiceLogger(p.Logger)

// GOAL2-541: tracer is not concurrency safe. It should only ever be
// accessed by main state machine loop.
@@ -191,7 +191,7 @@ func (s *Service) mainLoop(input <-chan externalEvent, output chan<- []action, r
var err error
raw, err := restore(s.log, s.Accessor)
if err == nil {
-clock, router, status, a, err = decode(raw, s.Clock)
+clock, router, status, a, err = decode(raw, s.Clock, s.log)
if err != nil {
reset(s.log, s.Accessor)
} else {
5 changes: 4 additions & 1 deletion agreement/trace.go
@@ -497,9 +497,12 @@ type serviceLogger struct {
logging.Logger
}

+func makeServiceLogger(log logging.Logger) serviceLogger {
+return serviceLogger{log.With("Context", "Agreement")}
+}

func (log serviceLogger) with(e logspec.AgreementEvent) serviceLogger {
fields := logging.Fields{
"Context": "Agreement",
"Type": e.Type.String(),
"Round": e.Round,
"Period": e.Period,
7 changes: 3 additions & 4 deletions agreement/voteAggregator.go
@@ -19,7 +19,6 @@ package agreement
import (
"fmt"

"github.com/algorand/go-algorand/logging"
"github.com/algorand/go-algorand/protocol"
)

@@ -118,7 +117,7 @@ func (agg *voteAggregator) handle(r routerHandle, pr player, em event) (res even
} else if tE.(thresholdEvent).Round == e.FreshnessData.PlayerRound+1 {
return emptyEvent{}
}
logging.Base().Panicf("bad round (%v, %v)", tE.(thresholdEvent).Round, e.FreshnessData.PlayerRound) // TODO this should be a postcondition check; move it
r.t.log.Panicf("bad round (%v, %v)", tE.(thresholdEvent).Round, e.FreshnessData.PlayerRound) // TODO this should be a postcondition check; move it

case bundlePresent:
ub := e.Input.UnauthenticatedBundle
Expand Down Expand Up @@ -180,7 +179,7 @@ func (agg *voteAggregator) handle(r routerHandle, pr player, em event) (res even
smErr := makeSerErrf("bundle for (%v, %v, %v: %v) failed to cause a significant state change", b.U.Round, b.U.Period, b.U.Step, b.U.Proposal)
return filteredEvent{T: bundleFiltered, Err: smErr}
}
logging.Base().Panicf("voteAggregator: bad event type: observed an event of type %v", e.t())
r.t.log.Panicf("voteAggregator: bad event type: observed an event of type %v", e.t())
panic("not reached")
}

@@ -200,7 +199,7 @@ func (agg *voteAggregator) filterVote(proto protocol.ConsensusVersion, p player,
case none:
return nil
}
logging.Base().Panicf("voteAggregator: bad event type: while filtering, observed an event of type %v", filterRes.t())
r.t.log.Panicf("voteAggregator: bad event type: while filtering, observed an event of type %v", filterRes.t())
panic("not reached")
}

8 changes: 2 additions & 6 deletions agreement/voteAuxiliary.go
@@ -16,10 +16,6 @@

package agreement

-import (
-"github.com/algorand/go-algorand/logging"
-)

// A voteTrackerPeriod is a voteMachinePeriod which indicates whether a
// next-threshold of votes was observed for a some value in a period.
type voteTrackerPeriod struct {
@@ -82,7 +78,7 @@ func (t *voteTrackerPeriod) handle(r routerHandle, p player, e event) event {
case nextThresholdStatusRequest:
return t.Cached
default:
logging.Base().Panicf("voteTrackerPeriod: bad event type: observed an event of type %v", e.t())
r.t.log.Panicf("voteTrackerPeriod: bad event type: observed an event of type %v", e.t())
panic("not reached")
}
}
@@ -152,7 +148,7 @@ func (t *voteTrackerRound) handle(r routerHandle, p player, e event) event {
case freshestBundleRequest:
return freshestBundleEvent{Ok: t.Ok, Event: t.Freshest}
default:
logging.Base().Panicf("voteTrackerRound: bad event type: observed an event of type %v", e.t())
r.t.log.Panicf("voteTrackerRound: bad event type: observed an event of type %v", e.t())
panic("not reached")
}
}