Logging Subsystem Enhancements #6186
@ccollins476ad FYI, for 1.12
CC @pizi-nordic
As part of this cleanup, can we look at supporting an API that allows the user to do something like LOG(level, log_format, ...) and have features like COLOR work consistently?
I would like to present a logger design which I think might be suitable for Zephyr. I will post my proposal in parts, starting with a high-level overview. It will later be followed by some notes about the implementation and advanced features like multi-core support and TrustZone integration.

### Introduction

The complexity of modern embedded systems forces changes in our working methodology. Challenges we face every day include:
These challenges, faced every day by developers, testers, and system integrators, could be addressed by the dedicated logger subsystem presented in this document.

### Requirements

The core requirements of the logger are defined by its usability:
### Operation Principles

#### Logger Frontend

The Logger Frontend collects logs from the system. From the developer's perspective it mimics the printf() interface. A dedicated macro (one for each log level) accepts a format string and optional arguments:
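A minimal sketch of what such a frontend might do, assuming a 16-entry buffer, up to four 32-bit arguments per message, and illustrative macro and function names (none of this is the proposed API):

```c
#include <stdint.h>
#include <string.h>

#define LOG_MAX_ARGS  4
#define LOG_LEVEL_ERR 1

/* One deferred log entry: only the format-string pointer and the raw
 * argument words are stored -- no printf-style processing happens here. */
struct log_entry {
    const char *fmt;                /* must stay valid until processed */
    uint32_t    args[LOG_MAX_ARGS];
    uint8_t     nargs;
    uint8_t     level;
};

struct log_entry log_buf[16];
unsigned log_head;

/* Frontend store: a few memory writes, nothing more. */
void log_put(uint8_t level, const char *fmt,
             const uint32_t *args, uint8_t nargs)
{
    struct log_entry *e = &log_buf[log_head++ % 16];

    e->fmt = fmt;
    e->level = level;
    e->nargs = nargs;
    memcpy(e->args, args, nargs * sizeof(args[0]));
}

/* Illustrative one-argument error macro. */
#define LOG_ERR_1(fmt, a0) \
    log_put(LOG_LEVEL_ERR, (fmt), (const uint32_t[]){ (uint32_t)(a0) }, 1)
```

A call such as `LOG_ERR_1("read failed: %d", err)` therefore costs only the entry writes; the format string is evaluated much later.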
The most important difference between printf()-based solutions and the proposed logger is how the data is processed. The macros presented above do not perform any processing. Instead, they just store the pointer to the format string and the optional arguments in a dedicated buffer. As a result, the cost of logging a message is a few memory writes. The stored data is converted to a human-readable string later, in the idle loop of the system, in order to avoid the negative impact of increased CPU load on system operation. The proposed behaviour has one limitation which must be taken into consideration by every developer using the logger: since processing is performed in the idle loop, some time after macro execution, all string pointers provided to the log system have to remain valid all the time.

In addition to the data provided by the developer, the macros automatically add some metadata, like a timestamp, the identifier of the module generating the log, and the log level. All this information is stored in a buffer (a circular buffer is an obvious choice, however it has some drawbacks which will be discussed later) where it waits for further processing.

#### Logger Core

The Logger Core is responsible for managing log processing. As log processing must have minimal impact on the system, the Logger Core is called directly from the idle thread and can be preempted at any time. Moreover, to minimize latency in systems where no preemption is used, it processes a single log message on each call, allowing the system to check as often as possible whether there is a more important task to handle. On each call, a single log entry is fetched from the buffer in which it was stored by the frontend and is passed to the connected backends.

#### Logger Backend

The backend is responsible for log message presentation, transmission or storage.
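The one-entry-per-call behaviour of the core could be sketched like this (all names and the 8-entry queue are illustrative assumptions, not the eventual API):

```c
#include <stdbool.h>

/* Sketch of the Logger Core's processing step: it fetches exactly one
 * entry per call, so the idle thread can re-check for higher-priority
 * work between messages. */
struct msg {
    const char *fmt;
    int arg;
};

struct msg queue[8];
unsigned rd, wr;

/* Returns true while more entries remain to be processed. */
bool log_process_one(void (*backend)(const struct msg *))
{
    if (rd == wr) {
        return false;                 /* nothing pending */
    }
    backend(&queue[rd++ % 8]);        /* hand one entry to the backend */
    return rd != wr;
}

/* Counting backend used for illustration. */
int rendered;
void count_backend(const struct msg *m)
{
    (void)m;
    rendered++;
}
```

In the idle thread this would run in a loop such as `while (log_process_one(render))`, yielding whenever something more important becomes runnable.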
Usually, at the first stage of processing, the backend converts the data received from the Logger Core into a human-readable form: the printf()-like expression is evaluated and converted to a string, which is prepended with metadata and terminated by an end-of-line sequence. Additional presentation tasks, for example applying colours to warning and error messages, are also performed at this stage. Then, the generated strings are sent to the host:
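The rendering stage might look roughly as follows; the `[timestamp] <level> module:` prefix is an illustrative format choice, not one the proposal specifies:

```c
#include <stdarg.h>
#include <stdio.h>

/* Sketch of a backend's rendering stage: evaluate the printf-like
 * expression, prepend metadata, append an end-of-line sequence. */
int log_render(char *out, size_t len, unsigned timestamp,
               const char *level, const char *module,
               const char *fmt, ...)
{
    va_list ap;
    int n = snprintf(out, len, "[%08u] <%s> %s: ", timestamp, level, module);

    va_start(ap, fmt);
    n += vsnprintf(out + n, len - n, fmt, ap);
    va_end(ap);

    n += snprintf(out + n, len - n, "\r\n");
    return n;   /* number of characters produced */
}
```

Colour handling would fit naturally here too, e.g. wrapping the string in ANSI escape codes when the level is warning or error.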
The backend could also perform any other action on the received logs. This includes storing them in flash or forwarding them to the host without rendering.

### Basic Features

#### Log Filtering

Log filtering is done separately for each backend, based on the log level and the source of the message (module/instance id). Such an approach allows debugging one backend using another one, as well as forwarding selected messages (like errors and warnings) to a dedicated backend (for example flash-based storage). The filtering is done in two places in order to minimize the amount of CPU time needed to collect and process logs. A coarse check is done in the Logger Frontend: the LOG_* macros do not perform any action if there is no backend waiting for messages of the given level from the given module. If the message passes this check, the Logger Core does the final filtering by routing the message only to the backends requesting such information.

#### Object Dumping

The delayed processing of log messages might be a problem when a developer would like to log some strings (in general: any data) which are only available at the time of LOG_* macro execution. To address this problem, an object dumping interface is provided. The developer might use a dedicated macro (LOG_HEXDUMP()) to insert a blob of data into the log buffer. Such a blob is then presented to the user as a hexdump of the stored data. Unfortunately this feature has a significant cost, as the log buffer must be large enough to hold the whole dumped object. A similar mechanism might be used as a compatibility layer in order to provide an equivalent of the existing printk() interface.

#### Panic Mode

As the proposed logger buffers messages, an implicit flush is required when an unrecoverable error is detected. Also, some backends (for example network-based ones) are usually not able to work properly after a critical error. In order to allow logging during faults, a panic mode is introduced. When the Logger is informed about a system fault it:
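The two-stage filtering could be sketched as below, with a per-module aggregate level answering the cheap frontend question "does anyone want this?" and per-backend levels doing the final routing in the core. The data layout, level encoding (higher number = more verbose), and names are illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_MODULES  4
#define NUM_BACKENDS 2

uint8_t backend_level[NUM_BACKENDS][NUM_MODULES]; /* max level each backend accepts */
uint8_t module_max[NUM_MODULES];                  /* max over all backends */

/* Frontend coarse check: LOG_* macros bail out here if nobody listens. */
bool log_wanted(unsigned module, uint8_t level)
{
    return level <= module_max[module];
}

/* Core fine routing: bitmask of backends that get this message. */
unsigned log_route(unsigned module, uint8_t level)
{
    unsigned mask = 0;

    for (unsigned b = 0; b < NUM_BACKENDS; b++) {
        if (level <= backend_level[b][module]) {
            mask |= 1u << b;
        }
    }
    return mask;
}

/* Runtime filter update; recomputes the aggregate used by the frontend. */
void log_set_level(unsigned backend, unsigned module, uint8_t level)
{
    uint8_t m = 0;

    backend_level[backend][module] = level;
    for (unsigned b = 0; b < NUM_BACKENDS; b++) {
        if (backend_level[b][module] > m) {
            m = backend_level[b][module];
        }
    }
    module_max[module] = m;
}
```

With this split, a message filtered out everywhere costs the frontend only one comparison.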
Would you consider trying to shorten the API names a bit? E.g. LOG_WARN, LOG_INFO, LOG_ERR and LOG_DBG? Part of the parameter list is often a fairly long string, and having to split it up is something to be avoided.
@jhedberg: Good idea!
I think it would be nice if Zephyr could also provide macros that mimic the *printf() interface.
@skordal: Of course the *printf() interface could be provided, but in my opinion we should allow that only during a transition period, as there are essential differences between the logger and the behaviour expected from *printf() calls. The proposed logger is lossy: if there is no time to process logs, they will be dropped in order to maintain system operation. Such loss is not expected from printf(). Moreover, direct usage of printf() is more costly, as you are losing the benefit of delayed log processing.
That's fine. What I mean is, it could be useful to have the possibility to pass a va_list to the logging functions, to make it possible to forward logging messages from other libraries/frameworks (such as libmetal) which implement their own logging frameworks. The problem here is that I want to forward logging messages from a function that receives its arguments as a va_list. If possible, it would be nice to have some additional logging macros which allow us to call the Zephyr logging API in such situations, to remove the overhead of having to format the log messages in the framework before passing the strings on to the Zephyr logging API.
@skordal: I see your problem now.
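One way such a va_list entry point could look is sketched below. All names are illustrative; and as noted earlier in the thread, formatting eagerly like this gives up the deferred-processing benefit, since the va_list arguments only live for the duration of the call:

```c
#include <stdarg.h>
#include <stdio.h>

/* Sketch of a va_list variant that a foreign logging framework
 * (e.g. a libmetal log handler) could call to forward its messages.
 * Rendering happens immediately because the arguments cannot be
 * stored for later. */
int log_vprintf(int level, char *out, size_t len,
                const char *fmt, va_list ap)
{
    int n = snprintf(out, len, "<%d> ", level);

    n += vsnprintf(out + n, len - n, fmt, ap);
    return n;
}

/* Variadic wrapper, showing how the two entry points relate. */
int log_printf(int level, char *out, size_t len, const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = log_vprintf(level, out, len, fmt, ap);
    va_end(ap);
    return n;
}
```

A foreign framework's handler would simply pass its own va_list straight through to `log_vprintf()`.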
As the principles of the logger seem to be accepted by everyone, it is time to discuss some implementation details.

### Implementation considerations

The data structures used in the Logger Core are the most critical part of the design, as they have a direct impact on the key logger parameters (especially performance). At first glance, a plain circular buffer satisfies all needs: saving data in the buffer is fast, the data blocks can have different sizes, and memory usage is very efficient. However, there are some corner cases which require complex handling when this type of log storage is used.

The first, obvious limitation of the plain circular buffer is the requirement of memory copying shown in the picture above. The data must be copied out of the circular buffer and placed into dedicated backend queues. If the same message is directed to several backends, the same data is copied multiple times, occupying a significant amount of memory.

The other problem, presented in the picture below, is visible when logging is performed from code running at different priority levels: in the example presented above, a thread producing log entries is preempted by an interrupt which also logs some messages. The preemption occurred while the thread was writing data to the log buffer, making Log Entry #3 incomplete. As a result, Log Entry #3 is stuck in the buffer until execution of the interrupted thread is resumed, which has several implications:
All the issues mentioned above could be resolved by using the slightly modified data structures presented below: in the proposed approach, fixed-size log entries are allocated from a dedicated pool, filled in, and then placed into a circular buffer, eliminating the stuck-entry issue described earlier. As the entries are of fixed size, the buffer pool might use an efficient block allocator (like mem_slab) to handle the most common cases. In order to satisfy larger requests, like data dumping, the entries can be chained together, as seen in the figure above. Further data handling also requires minimal overhead, because the log entries can be shared by multiple backends thanks to the reference counter present in the log entry header. After processing, each backend decrements the reference counter, and the log entry returns to the pool when it is no longer used.
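A minimal sketch of such pool-allocated, refcounted entries (pool size, payload size, and all names are illustrative assumptions, not the actual Zephyr structures):

```c
#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE 8

/* Fixed-size entry: `next` chains blocks for oversized payloads such as
 * hexdumps; `refs` lets several backends share one entry. */
struct pool_entry {
    struct pool_entry *next;
    uint8_t refs;
    uint8_t data[16];        /* fixed-size payload slot */
};

struct pool_entry pool[POOL_SIZE];
struct pool_entry *free_list;

void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

/* Allocate one block; refcount is set once, before any backend sees it. */
struct pool_entry *entry_alloc(uint8_t backends)
{
    struct pool_entry *e = free_list;

    if (e != NULL) {
        free_list = e->next;
        e->next = NULL;
        e->refs = backends;
    }
    return e;
}

/* Called by each backend when done; the last reference frees the block. */
void entry_unref(struct pool_entry *e)
{
    if (--e->refs == 0) {
        e->next = free_list;
        free_list = e;
    }
}
```

In a real multi-priority system the free-list and refcount manipulation would of course need atomic operations or interrupt locking, which this sketch omits.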
I think it's worth mentioning the limitations of this approach:
Closing; any enhancements need to be tracked individually.
@nashif @carlescufi What is the state of the FCB backend? Some months ago I stumbled on someone who had a PR for this but lost track of it. We can implement this feature if you are willing to accept a PR.