Schema for internal transactions #72
Conversation
Overall looks good!
```json
  "description": "The level of nesting of an internal transaction"
},
{
  "name": "internal_tx_is_error",
```
Should this column have BOOLEAN type?
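If so, the schema entry might look like the following sketch (the `mode` and the description wording here are assumptions, not taken from the PR's schema file):

```json
{
  "name": "internal_tx_is_error",
  "type": "BOOLEAN",
  "mode": "NULLABLE",
  "description": "Whether the internal transaction resulted in an error"
}
```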
```json
  "description": "Gas used by the internal transaction"
},
{
  "name": "internal_tx_trace_id",
```
The internal_tx_trace_id column can have MODE: REPEATED since it's an array. The export job can then use the JSON output format, for which BigQuery supports REPEATED columns.
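A sketch of what that could look like in the BigQuery schema (the INTEGER element type is an assumption, based on trace addresses being arrays of integers; the description is illustrative):

```json
{
  "name": "internal_tx_trace_id",
  "type": "INTEGER",
  "mode": "REPEATED",
  "description": "Trace address of the internal transaction"
}
```

Note that BigQuery rejects REPEATED columns when loading from CSV, which is why the export format matters here.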
```json
  "description": "Gas used by the internal transaction"
},
{
  "name": "internal_tx_trace_id",
```
Should we call it internal_tx_trace_address for consistency with Parity's trace module? https://wiki.parity.io/JSONRPC-trace-module#trace_block
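For reference, a trimmed example of a single trace entry as returned by Parity's trace_block (values are placeholders; only the field names matter here). The array-valued traceAddress field is what the proposed column would mirror:

```json
{
  "action": {
    "callType": "call",
    "from": "0x...",
    "to": "0x...",
    "gas": "0x...",
    "value": "0x0"
  },
  "subtraces": 0,
  "traceAddress": [0, 1],
  "type": "call"
}
```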
I think we can also add
For the export job implementation I think we should add
@AyushyaChitransh the issue on GitHub now has $400 funding (#53). Would you like to continue working on it?
@medvedev1088 I have been working on this issue using a different setup and am facing multiple performance issues. The coding part is complete, but the migration part (using geth) has been a bigger problem. For now, I have made some progress on the related performance issues, and I am hoping to improve performance by this weekend.
@AyushyaChitransh could you elaborate on the performance issues? Also, are you using geth's
While tracing individual transaction hashes, I had trouble tracing internal transactions for blocks issued around the DAO fork (around block number 2.3M). These transactions were large, and tracing these blocks was slow. Geth occasionally times out with a response of:
or
Now I am going to look into the subscribe functions and see if that speeds up the process.
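One possible mitigation for the timeouts described above, assuming the tracing is done via geth's debug_traceTransaction JSON-RPC method, is to raise the per-call tracer timeout through the trace config's timeout option (supported in recent geth versions; the transaction hash below is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "debug_traceTransaction",
  "params": [
    "0x<transaction hash>",
    { "tracer": "callTracer", "timeout": "120s" }
  ]
}
```

This only raises geth's internal deadline for executing the tracer; very large DAO-era transactions may still be slow to trace.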
@AyushyaChitransh interesting. Why did you give up on using Parity?
I never used Parity in the first place. When I came across this issue, my setup was preconfigured to use geth. But I am inclined to test Parity's performance.
@AyushyaChitransh got it. Let me know if you are interested in contributing a geth implementation. I will create an issue similar to this one but for geth (#53), and ask to have it funded.
The
That is correct. I encountered this error for And no, I do not recommend geth. Geth is unable to trace internal transactions related to the self_destruct opcode, whereas Parity is much better suited for it. I have explored geth very deeply and have mentioned almost all of its points of failure. So I would like to contribute to the Parity implementation.
The Parity one is unfortunately already taken by another dev (#53). Regarding selfdestruct, I can see it's implemented in call_tracer.js: https://github.com/karalabe/go-ethereum/blob/master/eth/tracers/internal/tracers/call_tracer.js
No problem then. I am glad someone is already working on that. Regarding self_destruct, there is an open issue: ethereum/go-ethereum#16459