Yesterday I met with @perlindgren to discuss the state of RTIC Scope now that a v0.2.0 release is approaching. The topic of how RTIC Scope associates ITM packets with RTIC tasks was discussed. Currently, in a preparatory information recovery step before the target is flashed and traced, the source code of the RTIC application is parsed so that the #[app(...)] mod app { ... } module can be extracted and forwarded to rtic_syntax::parse2. From the returned structures, the hardware tasks and their binds are read, and thus all necessary information required to associate ITM packets (relating to interrupts) with RTIC tasks has been recovered.
This approach is not stable. Among other reasons, rtic_syntax is not meant to be used as a library and it has yet to reach a stable release. Using it for information recovery will be a game of catch-up which I'd like to avoid. I believe it is of interest that RTIC Scope does not succumb to entropy (too quickly) after my thesis is done later this year. This will require off-loading some work to upstream RTIC instead.
During the meeting the possibility of extracting a description of the RTIC app during compilation came up. For example, a JSON description that the tracer (RTIC Scope, or something else) catches and deserializes. This description would, for example, contain a list of all the tasks and what interrupts they are bound to. This description could be locked behind some #[rtic::app(export_json_description=true)] argument flag.
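As an illustration, such a description might look like the following. The schema, field names, and values here are purely hypothetical; nothing about the actual format has been decided:

```json
{
  "tasks": [
    {
      "name": "spawner",
      "kind": "hardware",
      "binds": "EXTI0",
      "priority": 1
    }
  ]
}
```

A tracer could then map an ITM exception-trace packet for the EXTI0 interrupt back to the task named "spawner" without parsing any source code.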
Pros
Information structures are already available; #[rtic::app] just needs to export them to JSON. An initial implementation can probably derive serde's Serialize for these structures.
Less source code parsing required for tracers; easier implementations.
rtic_syntax will only be used for these serde structures unless moved to some other crate.
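As a rough sketch of the shape such exported structures could take. The struct and field names below are hypothetical, not taken from rtic_syntax; in practice #[derive(serde::Serialize)] on the existing structures would replace the hand-rolled rendering, which is used here only so the sketch compiles with the standard library alone:

```rust
// Hypothetical description structures; real names would come from
// rtic_syntax's parsed App. serde's derive would normally generate
// the serialization, but it is hand-rolled here so the sketch is
// self-contained and runnable without external crates.
struct HardwareTask {
    name: String,
    binds: String, // the interrupt the task is bound to
}

struct AppDescription {
    tasks: Vec<HardwareTask>,
}

impl AppDescription {
    fn to_json(&self) -> String {
        let tasks: Vec<String> = self
            .tasks
            .iter()
            .map(|t| format!(r#"{{"name":"{}","binds":"{}"}}"#, t.name, t.binds))
            .collect();
        format!(r#"{{"tasks":[{}]}}"#, tasks.join(","))
    }
}

fn main() {
    let app = AppDescription {
        tasks: vec![HardwareTask {
            name: "spawner".into(),
            binds: "EXTI0".into(),
        }],
    };
    println!("{}", app.to_json());
}
```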
Cons
More to maintain in RTIC.
Possible pitfalls
For software tasks, an auxiliary cortex-m-rtic-trace crate can be used for its setup functions and #[trace] macro. During recovery the source code is parsed again so that these can be counted and associated with unique IDs and task names. #[trace] is a simple macro that wraps the decorated function with two statements: one for when the task enters, and one for when it exits.
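The wrapping can be sketched as follows. The function names and the ID value are hypothetical; on target hardware the enter/exit statements would write the unique task ID to an ITM stimulus port, whereas a log is used here so the sketch is runnable:

```rust
// Hypothetical stand-ins for the enter/exit statements emitted by
// #[trace]; on hardware these would write the task ID to an ITM
// stimulus port instead of pushing to a log.
fn trace_enter(log: &mut Vec<String>, id: u32) {
    log.push(format!("enter {}", id));
}

fn trace_exit(log: &mut Vec<String>, id: u32) {
    log.push(format!("exit {}", id));
}

// Conceptually, a decorated task
//
//     #[trace]
//     fn some_software_task() { /* task body */ }
//
// expands to something like:
fn some_software_task(log: &mut Vec<String>) {
    trace_enter(log, 1); // ID 1 assigned during information recovery
    log.push("task body".to_string()); // the original function body
    trace_exit(log, 1);
}

fn main() {
    let mut log = Vec::new();
    some_software_task(&mut log);
    assert_eq!(log, ["enter 1", "task body", "exit 1"]);
}
```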
The above pitfall is probably not of concern: RTIC already operates with nested macros in mind.
Another question entirely regards additional metadata. Of high interest to my thesis is the monitoring of shared resources (both value and lock status), but also that of queue sizes. For resource values, the in-memory address must be known. Must this address be found after the build, with the help of a build.rs script?
RTIC parses the app and will see the attributes at the top level. RTIC currently does NOT parse the function bodies and will not see the inner attributes. In both cases the attributes will be retained and the corresponding expansions performed. As far as I know we cannot control the order of macro expansion, so it might be that the #[trace] macros have already been expanded at the point where RTIC gets the token stream (I'm not 100% sure of this, however). In any case, to ensure that RTIC handles #[trace] in an intelligent way, I believe it needs to be part of the RTIC syntax (on par with other attributes that RTIC handles). In that case it could just as well be part of the #[task] attributes. It's certainly doable and not high effort, but as with everything else it adds to the complexity, and I'm not sure such a change would be accepted.
Can #[rtic::app] find the #[trace] macros, record them as some "unknown" macro attached to the associated function, and add that to the JSON description?

I'll let this simmer a bit and bring it up in a later weekly RTIC meeting. Afterwards I'll draft up an RFC if we decide to go ahead with this.
Anything to amend, @perlindgren?