
Support sync_send_request in Service Client #1533

Closed
fujitatomoya opened this issue Jan 28, 2021 · 14 comments
@fujitatomoya
Collaborator

Feature request

Feature description

Currently the service client does not support a synchronous send_request, only an asynchronous one. Considering affinity with ROS 1 and SyncParametersClient, it would be nice to support sync_send_request.

See https://github.com/ros2/demos/blob/11e00ecf7eec25320f950227531119940496d615/demo_nodes_cpp/src/services/add_two_ints_client.cpp#L24; this is actually tagged with a TODO.

Note

https://discourse.ros.org/t/my-ros2-minimal-examples-difficulties-coming-from-ros1/18705

@fujitatomoya fujitatomoya added the enhancement New feature or request label Jan 28, 2021
@ros-discourse

This issue has been mentioned on ROS Discourse. There might be relevant details there:

https://discourse.ros.org/t/my-ros2-minimal-examples-difficulties-coming-from-ros1/18705/4

@clalancette
Contributor

I've said this before elsewhere, but I think this is actually a bad idea. It is just way too easy for a user to misuse. You can see a full thread of my thoughts on it over at ros2/ros2_documentation#901 (comment) .

Additionally, if a user really knows what they are doing, and really wants it to be synchronous, it is trivial to do so with a future.wait().

Thus, my perspective is that we should not add it to rclcpp, and actually should go further and remove it from rclpy. But I'll leave this open for now if others want to chime in.

@fujitatomoya
Collaborator Author

@clalancette thanks for the guidance 👍 it makes sense, and I agree with your reasoning on this. I will keep this open for a week; if nothing comes up, I will close it.

@fujitatomoya fujitatomoya self-assigned this Jan 29, 2021
@matemat13

@clalancette Why not give the user as many useful tools as he might need and let him decide what he wants instead of making decisions for him and assuming that he's too stupid to use the provided tools properly? This is not Java after all, this is C++, so why not stick with C++'s philosophy?

In the other thread that you mention (if I'm reading it correctly) you mostly argue that calling a service synchronously from a callback could block the callback and "starve" the system. Well so does calling e.g. sleep. Should sleep usage then be prohibited/disabled? Maybe a better way would be documenting the provided tools well and noting potential bad usage in the docs and examples, instead of taking functionality away from the user.

TL;DR: please give us send_request() in Service Client that is synchronous and let us decide if and how we want to use it :D

@clalancette
Contributor

In the other thread that you mention (if I'm reading it correctly) you mostly argue that calling a service synchronously from a callback could block the callback and "starve" the system.

That's not the current problem. The current problem is that if you call a synchronous service from within a callback you deadlock forever. My further argument then was that even if we could fix that (and it is unclear how we would do it), then you have the problem of inducing latency in downstream callbacks in some subtle ways.

My general philosophy is that we should not add APIs that are very easy to misuse. That is the case with synchronous service calls, in my opinion. If you really know what you are doing, there are already ways to make synchronous calls. As I pointed out above, it is a one-liner to do future.wait_for(), and then you have to deal with how to make the threading work. I have some examples of how to make the threading work in https://github.com/clalancette/executor_examples .

So in short, you already have all of the tools you need to do synchronous callbacks. Adding it to the main API is just syntactic sugar, and in my opinion, dangerous syntactic sugar. So that's why I don't think we should add it.
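
For reference, a minimal sketch of that future.wait_for() approach (assumed names; this is not code from clalancette/executor_examples, and it assumes the example_interfaces AddTwoInts service): the executor spins in a background thread, so the main thread can block on the future without starving the callbacks.

#include <chrono>
#include <future>
#include <memory>
#include <thread>

#include "example_interfaces/srv/add_two_ints.hpp"
#include "rclcpp/rclcpp.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("add_two_ints_client");
  auto client = node->create_client<example_interfaces::srv::AddTwoInts>("add_two_ints");

  // Spin in a separate thread so the response can be processed while we block below.
  rclcpp::executors::SingleThreadedExecutor executor;
  executor.add_node(node);
  std::thread spinner([&executor]() {executor.spin();});

  auto request = std::make_shared<example_interfaces::srv::AddTwoInts::Request>();
  request->a = 2;
  request->b = 3;

  auto future = client->async_send_request(request);
  // Blocking here is only safe because this is not an executor callback.
  if (future.wait_for(std::chrono::seconds(5)) == std::future_status::ready) {
    RCLCPP_INFO(node->get_logger(), "sum: %ld", static_cast<long>(future.get()->sum));
  } else {
    RCLCPP_ERROR(node->get_logger(), "service call timed out");
  }

  executor.cancel();
  spinner.join();
  rclcpp::shutdown();
  return 0;
}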

@klaxalk

klaxalk commented Jan 29, 2021

Hey @clalancette, I would gladly use future.wait_for(); however, it seems to block the whole program, so the service call never finishes. Maybe I am doing something wrong there. I made a minimalistic example here (one of the examples that spawned this issue) with other alternatives on how to make the mechanism synchronous. E.g., some demos show rclcpp::spin_until_future_complete(...), which does not work in components.

We would appreciate it if you could help us with this minimalistic example. We can certainly implement the synchronicity ourselves if it is doable. But I do agree with @matemat13 that basic functionality for this would be welcome. It is, after all, the default and only behavior in ROS 1, and I bet many people will ask for it when they migrate.
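
For readers unfamiliar with the demo pattern being referenced, it looks roughly like this (a sketch with assumed names, using the example_interfaces AddTwoInts service). It works when called from main(), but spin_until_future_complete spins the node itself, so it cannot be used inside a callback that is already being spun, which is why it fails inside components.

#include <memory>

#include "example_interfaces/srv/add_two_ints.hpp"
#include "rclcpp/rclcpp.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("add_two_ints_client");
  auto client = node->create_client<example_interfaces::srv::AddTwoInts>("add_two_ints");

  auto request = std::make_shared<example_interfaces::srv::AddTwoInts::Request>();
  request->a = 2;
  request->b = 3;

  auto future = client->async_send_request(request);
  // Spins the node until the response arrives; only valid outside of callbacks.
  if (rclcpp::spin_until_future_complete(node, future) == rclcpp::FutureReturnCode::SUCCESS) {
    RCLCPP_INFO(node->get_logger(), "sum: %ld", static_cast<long>(future.get()->sum));
  } else {
    RCLCPP_ERROR(node->get_logger(), "service call failed");
  }

  rclcpp::shutdown();
  return 0;
}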

@matemat13

@clalancette

The current problem is that if you call a synchronous service from within a callback you deadlock forever.

OK, I did not catch this information from that thread (admittedly, I've only skimmed through it). So are you talking about a situation where you only have a single thread processing callbacks in the ROS2 program? Then I can see how blocking this thread until a callback is processed can be a problem (since there is no free thread to process it).

However, although I can now better see your reservations about implementing a blocking send_request(), this is still only a problem in the case of a single-threaded node/nodelet, right? And you can create a similar problem with e.g. std::condition_variable in a single-threaded program, and yet surely you wouldn't want to remove it from the C++ standard library, right?

Or am I still missing something? If so, then I apologize - my brain still lives in the ROS1 world :D

@wjwwood
Member

wjwwood commented Jan 30, 2021

So are you talking about a situation where you only have a single thread processing callbacks in the ROS2 program? Then I can see how blocking this thread until a callback is processed can be a problem (since there is no free thread to process it).

Right, in a single threaded executor it will deadlock because you need the future to be completed, which will not happen if you're not spinning, which you cannot do from within a callback that was executed as the result of spinning (e.g. another service callback, a subscription callback, or a timer callback).
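
To make that concrete, a minimal sketch (hypothetical node, assumed example_interfaces AddTwoInts service, not code from this thread): under the default single-threaded executor, the wait() below never returns, because the only thread that could deliver the response is the one that is blocking.

#include <chrono>
#include <memory>

#include "example_interfaces/srv/add_two_ints.hpp"
#include "rclcpp/rclcpp.hpp"

class DeadlockDemo : public rclcpp::Node
{
public:
  DeadlockDemo() : Node("deadlock_demo")
  {
    client_ = create_client<example_interfaces::srv::AddTwoInts>("add_two_ints");
    timer_ = create_wall_timer(std::chrono::seconds(1), [this]() {
      auto request = std::make_shared<example_interfaces::srv::AddTwoInts::Request>();
      auto future = client_->async_send_request(request);
      future.wait();  // deadlock: spin() cannot deliver the response while this callback blocks
      RCLCPP_INFO(get_logger(), "never reached");
    });
  }

private:
  rclcpp::Client<example_interfaces::srv::AddTwoInts>::SharedPtr client_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<DeadlockDemo>());  // single-threaded executor
  rclcpp::shutdown();
  return 0;
}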

A multi-threaded executor could allow you to wait for the future (the one created by an async send_request call) to be completed while another thread of the executor does the work needed to complete it. But in the meantime you're consuming one entire thread of a usually very finite thread pool, and you can still get into deadlock if you fill all available threads with code waiting on service request futures, leaving none to do the work needed to complete them.

You can say, well that's my own problem let me do it, but the thing is... you can. The kind of thing you're asking for is like this:

ResponseT sync_send_request(RequestT request) {
  auto future = async_send_request(request);
  return future.get();  // blocks until the response arrives
}

But @clalancette's point (and I totally agree with him) is that, in our opinion, in most cases this will just not work, because the user is calling from a subscription or timer callback and either has no control over the executor or is using a single-threaded executor by default.

So we'd just be replacing this issue with other user issues reporting that send_request doesn't work.


I know there's a lot to read in order to be up to speed on this issue, but we've already discussed it at length (including these points) here:

ros2/ros2_documentation#901 (comment)

I think the most likely solution (and the one that looks most like other asynchronous C++ frameworks) is this style:

void
send_request(
  RequestT request,
  std::function<void (ResponseT)> on_success,
  std::function<void (FailureReasonT)> on_failure);

And calling it might look something like this:

void
my_on_timer_callback()
{
  // ... do some stuff to prepare the request
  my_client->send_request(
    request,
    [](auto response) {
      // use the response, call another method, whatever...
    },
    [this](auto failure_reason) {
      // conditional failure logic
      RCLCPP_ERROR(this->get_logger(), "failure message ...");
    });
}

This style fixes the resource hogging issue and the single-threaded deadlock issue, by returning control to the system once the request is made and only consuming execution resources again once the response is received or something like a timeout happens.

And as I spoke about in that other thread, doing better than that would require some language assistance, like async/await frameworks in other languages like Python's asyncio or Rust's async/await syntax.

@stertingen

And as I spoke about in that other thread, doing better than that would require some language assistance, like async/await frameworks in other languages like Python's asyncio or Rust's async/await syntax.

What about C++20 coroutines? (https://en.cppreference.com/w/cpp/language/coroutines)

@UsatiyNyan

And as I spoke about in that other thread, doing better than that would require some language assistance, like async/await frameworks in other languages like Python's asyncio or Rust's async/await syntax.

What about C++20 coroutines? (https://en.cppreference.com/w/cpp/language/coroutines)

I also thought about that, did a lot of research on my own, and got something pretty similar to what is in rclpy working in C++. Meaning I have some kind of task pool with the call stack of each async task. It promises to be compatible with the current Executor implementation but still needs work. If you are interested, I can share my repo next week.

@wjwwood
Member

wjwwood commented Feb 2, 2021

The new coroutines could be something we use in the future, but we only support C++17 right now.

Also, many of these frameworks require that you use their event loop to wait for events, and since we have to have rmw_wait which under the hood will call something like DDS's wait function, it's not possible for us to integrate this with existing event loops most of the time. Sometimes these are called event loops (Python's asyncio) or io_service (Boost ASIO) or executor or awaiter, etc...

So even when the language provides this mechanism, integrating it isn't usually trivial, and it would require the user to ensure a specific kind of event loop based on our rmw_wait function is used.

An example of this is that in Python, for asyncio.subprocess to work on Windows you need to ensure the event loop's type is asyncio.ProactorEventLoop, which is based on a Windows specific event loop mechanism. Implementing these is usually not trivial...

Anyway, it's a maybe, but we don't even support C++20 yet, so it's at least one or two versions down the road.

@dimaqq

dimaqq commented Jan 17, 2022

I too am interested in C++20 coroutines.
Coming from the Python world, async/await seems like a perfect match for the ROS 2 (ROS 2.1?) runtime.
Re event loop/reactor/executor: how about picking a good one of the existing ones?

@ros-discourse

This issue has been mentioned on ROS Discourse. There might be relevant details there:

https://discourse.ros.org/t/deferrable-canceleable-lifecycle-transitions/32318/3

@fujitatomoya
Collaborator Author

Based on the discussion above, we are moving away from a synchronous API to avoid possible deadlocks via callbacks. #1533 (comment) summarizes the point well, so I will go ahead and close this one for now.

For the new coroutines feature request, it would probably be best to open another issue.
