shared memory with iceoryx appears to use non-compatible topics #1326
@Aposhian What happens currently when SHM is enabled (also with …)?
When using SHM, the history capacity for the iceoryx publisher is derived from the DDS writer's QoS.

From the log output, it seems that the publisher is using transient-local durability, and because of this you see 1 as the history capacity. I am not sure where 100 is coming from; probably it is the history depth requested when the publisher was created. I think the default number of chunks an application is allowed to hold simultaneously is 256; if the application is holding more samples than that, you get the "too many chunks held in parallel" error.
Thank you for explaining those points. And I am excited to learn that dynamically sized types are supported. Maybe this section should be updated in the docs?
Could you help me understand what it means to hold chunks? Also, here is the full output of …
@Aposhian I did not look at the details of your report yet. As for holding a chunk when using shared memory: when a sample is sent, it ends up in the reader history cache.

The reader history cache then owns the sample and will keep it until it is taken by the consumer client or evicted in favor of more recent samples (depending on the reader history cache size). Note that when the sample is taken by the user, it will in some cases perform a copy and return the chunk, or it may hold on to the chunk, depending on the API (in which case the chunk is now owned by the client). Once the client is done with the sample, it has to return it.

Now, for technical reasons, iceoryx has a limit on the shared-memory chunks a client can hold from one reader of a particular topic, which is currently 256. If we try to get more, we get an internal error in the log (unfortunately not very visible) that the particular subscriber cannot provide more chunks.

The reason for this limit is artificial: it limits the problems if a consumer takes samples but does not return them, which would otherwise exhaust the chunks available for that topic. We would like to get rid of this limitation, or at least mitigate it somewhat, but it is not trivial with the current design.

Let me know whether this explains some of the problems. I think the first step is to make the iceoryx configuration more transparent, but this has downsides as well. Ideally a middleware user should not be concerned about iceoryx when using shared memory (which is already not the case, as they need a memory configuration and RouDi).
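To make the chunk-holding part concrete, here is a minimal sketch using the plain iceoryx C++ subscriber API rather than the Cyclone DDS code path (the runtime name, service description, and payload type below are made up for illustration): the chunk backing a received sample stays held until the Sample object is destroyed.

```cpp
#include "iceoryx_posh/popo/subscriber.hpp"
#include "iceoryx_posh/runtime/posh_runtime.hpp"

#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

// Made-up fixed-size payload type, standing in for a real topic.
struct CounterTopic
{
    uint64_t counter{0};
};

int main()
{
    // Register with the RouDi daemon under a made-up runtime name.
    iox::runtime::PoshRuntime::initRuntime("example-chunk-holder");

    // Made-up service description (roughly analogous to a DDS topic).
    iox::popo::Subscriber<CounterTopic> subscriber({"Example", "Counter", "Data"});

    while (true)
    {
        subscriber.take()
            .and_then([](auto& sample) {
                // 'sample' owns a shared-memory chunk for as long as it is alive.
                std::cout << "received: " << sample->counter << std::endl;
                // When 'sample' goes out of scope at the end of this lambda, the
                // chunk is released back to iceoryx. Storing the sample somewhere
                // instead keeps the chunk held and counts against the
                // per-subscriber limit.
            })
            .or_else([](auto& result) {
                // Hitting the limit surfaces as TOO_MANY_CHUNKS_HELD_IN_PARALLEL.
                if (result == iox::popo::ChunkReceiveResult::TOO_MANY_CHUNKS_HELD_IN_PARALLEL)
                {
                    std::cout << "too many chunks held in parallel" << std::endl;
                }
            });

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return 0;
}
```

In the Cyclone DDS case the equivalent take-and-release happens inside the middleware, so hitting the limit typically shows up only as the not-very-visible log message mentioned above.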
Does this mean that non-fixed-size data can be transferred using iceoryx (zero-copy)?
Thanks @sumanth-nirmal and @MatthiasKillat for checking this:
Somehow the 256 sample limit escaped my attention, and it would have taken me a lot more time to find out that detail than it took you. What does worry me in this particular case is that I would not expect:
@Aposhian, any chance you could find out whether there are a great many parameter updates and/or they take forever to be applied?
Non-fixed-size data can be transferred via iceoryx, but, as @sumanth-nirmal remarked, it won't be zero-copy. The fixed-size requirement arises from the mapping of sequences and strings to C++:
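As a rough illustration (not the exact generated code; the field names and the Parameter placeholder are made up), an IDL string or sequence becomes a heap-backed C++ container, so the in-memory sample is not a single contiguous, fixed-size block that can be handed over as one shared-memory chunk:

```cpp
#include <string>
#include <vector>

// Placeholder for a nested IDL type.
struct Parameter
{
    std::string name;
    double value{0.0};
};

// IDL along the lines of:
//   struct ParameterEvent {
//     string node;
//     sequence<Parameter> changed_parameters;
//   };
// maps, roughly, to:
struct ParameterEvent
{
    std::string node;                           // character data lives on the heap
    std::vector<Parameter> changed_parameters;  // elements live on the heap
};

// Because the actual payload sits behind pointers, the sample has no fixed
// size and cannot be copied into a shared-memory chunk as-is; it has to be
// serialized first, which is why zero-copy does not apply to such types.
```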
Cyclone is built to support alternative representations, but we currently don't have support for transparently shovelling protobuf data around. If you're willing to abuse interfaces a bit and can tolerate a …
@eboasson Thanks, I have tried to write test code for protobuf over DDS. In fact, zero-copy support for non-fixed-size data may be more important.
Looking at the …
EDIT: Nav2 does not set …; I now see that the messages received were for parameters that were declared inside the nodes, but weren't present in the YAML.
@eboasson What that means is that this example should be typical of a large ROS 2 system, with large numbers of publishes to /parameter_events.
Related: ros2/rclcpp#1970
This thread has significantly branched from its original purpose, with quite a few details and maybe-problems that are not all in Cyclone. Therefore you should not interpret this close as "everything is fine" but as "this ticket cannot be dealt with in this state". Please open separate issues for actual bugs and questions about Cyclone DDS.
Looking at the output of iox-introspection-client, it looks like iceoryx is trying to transfer topics that are not of fixed width, or that have non-compatible history depths. Primarily: /parameter_events.

Here is the reproduction: https://github.com/fireflyautomatix/nav2-compose/tree/iceoryx

I see the following output:

And then I can look at the introspection client and I see this under publishers:

and this under subscribers:

But shouldn't iceoryx ignore parameter_events, which is of a non-fixed-size type? The log also says that a history depth of 100 is being requested, but it is actually 1000 on parameter events. Unless that log print is for another topic? And why does it say its capacity is 1, when the default for publishers is 16 (https://github.com/eclipse-iceoryx/iceoryx/blob/master/iceoryx_posh/cmake/IceoryxPoshDeployment.cmake#L56)?
I'm posting this here since I am using Eclipse CycloneDDS from ros2, and I'm assuming that selecting candidate topics for shared mem belongs with CycloneDDS, but maybe I'm incorrect.