added max_kernel_size argument to core driver #1612
Conversation
Signed-off-by: Leon Riesebos <[email protected]>
If you're hitting problems when loading large kernels, I think a solution would be to increase the memory allocated to the comms CPU. It is 4MB only for historical reasons, and we no longer support boards with such a limited amount of RAM.
Wouldn't it be more appropriate (less complex, less need for tuning parameters) to make the core device fail cleanly on kernels that are too large? (I assume the issue is that it just OOM-panics right now?)
Increasing the memory allocated to the comms CPU sounds like a fair start. I do not think I am the right person to make that change, so I would have to ask one of you to do so. A clean failure for too-large kernels would definitely be a better solution than this one. What we currently experience with too-large kernels is that the Kasli crashes in an unrecoverable way and we need to power cycle it. It would be great if that did not happen and an appropriate error message were shown instead, but again I am not the person who would be able to make that change. Feel free to close this PR in favor of the two proposed solutions, which seem more useful and robust.
I assume there is a runtime panic at OOM? One of the things on my todo list is to make a controller that forwards UART messages to the main log. That would at least help debugging.
You can use
It's not so easy, because Rust did not have a good way to handle memory allocation errors; it just panics.
Should we then just pre-allocate a buffer to write the kernel into? This would effectively hard-code the limit into the kernel CPU instead (i.e. possibly require a manual update if the memory assignment is changed), but that still seems preferable to adding a heuristic limit in a completely different place (the core device driver). We could also read the kernel into a container that supports failing allocations (by interfacing directly with the allocator) instead of |
Seems like https://doc.rust-lang.org/alloc/vec/struct.Vec.html#method.try_reserve (unstable) or |
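(Aside: `try_reserve`/`try_reserve_exact` have since been stabilized. A minimal sketch of what a failable allocation for the kernel buffer could look like — the function name `load_kernel` and the error reporting are illustrative only, not actual ARTIQ firmware API:)

```rust
use std::collections::TryReserveError;

// Hypothetical sketch: copy a kernel image into a heap buffer, turning an
// allocation failure into a recoverable error instead of an OOM panic.
fn load_kernel(image: &[u8]) -> Result<Vec<u8>, TryReserveError> {
    let mut buffer = Vec::new();
    // try_reserve_exact returns Err(TryReserveError) if the allocator cannot
    // satisfy the request, so the comms CPU could report "kernel too large"
    // to the host instead of crashing.
    buffer.try_reserve_exact(image.len())?;
    buffer.extend_from_slice(image);
    Ok(buffer)
}

fn main() {
    let kernel = vec![0u8; 1024];
    let loaded = load_kernel(&kernel).expect("allocation failed");
    assert_eq!(loaded.len(), 1024);
}
```

In the firmware (`no_std` with a custom allocator) the same pattern would apply through the allocator interface rather than `std`, but the key point is that the error surfaces as a `Result` the session code can report, rather than a panic.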
Just FYI: below is the UART log output when uploading a kernel that is too large.
See discussion in #1612.
That may work; can you write a PR? I don't think more manual assignment of addresses would be required.
Yes.
ARTIQ Pull Request
Description of Changes
Add an option to the core driver to limit the maximum (static) kernel size before the kernel is loaded onto the core device.
I do not know if there is any interest in such a feature, feel free to close this PR if there is not.
Related Issue
A way to prevent loading kernels that are too large while waiting for #544.
Type of Changes
Steps (Choose relevant, delete irrelevant before submitting)

All Pull Requests
- Sign off your patches (git commit --signoff, see copyright).

Code Changes
- Run flake8 to check code style (follow PEP-8 style). flake8 has issues with parsing Migen/gateware code, ignore as necessary.

Documentation Changes
- Build documentation (cd doc/manual/; make html) to ensure no errors.

Git Logistics
- Split commits into logically separate changes (git rebase --interactive). Merge/squash/fixup commits that just fix or amend previous commits. Remove unintended changes & cleanup. See tutorial.
- Review each commit for messages (git show).

Licensing
See copyright & licensing for more info.
ARTIQ files that do not contain a license header are copyrighted by M-Labs Limited and are licensed under LGPLv3+.