Back pressure in Process API #581
Based on a chat with @trustmaster today, the way forward would be to allow graph designers to add a …

Depends on #594
Actually, it would be much nicer to properly implement back pressure in NoFlo than to do “load” ports. Buffer size can be handled via edge metadata (see “connection capabilities” in noflo/noflo-ui#370). Quick ideas:
Use backpressure to prevent new input from being processed until the current conversation is done #1042

Please review this guide; I'm not sure whether ChatGPT knows your implementation well.
There are many situations where you want to throttle a component's activity level. Examples could be rate-limited API calls, or the number of files you can open simultaneously.
In the legacy AsyncComponent implementation we had support for a `load` outport, where the component would send its current load. You could then use a component like flow/Throttle in front of it to prevent new jobs from being triggered before the component had finished the previous ones, wired roughly like the sketch below.
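Here is a minimal sketch of that wiring using noflo's Graph API. The Jobs and Worker component names are placeholders, and the lowercase port names on flow/Throttle are assumptions rather than its documented interface.

```javascript
const { Graph } = require('noflo');

// Jobs emits work items, Worker is a legacy AsyncComponent with a "load"
// outport, and flow/Throttle sits in between holding back new packets.
const graph = new Graph('throttled-jobs');
graph.addNode('Jobs', 'SomeJobSource');
graph.addNode('Throttle', 'flow/Throttle');
graph.addNode('Worker', 'SomeAsyncWorker');

graph.addEdge('Jobs', 'out', 'Throttle', 'in');
graph.addEdge('Throttle', 'out', 'Worker', 'in');
// Feedback edge: Worker reports its current load so Throttle can hold packets
// until the previous jobs have finished.
graph.addEdge('Worker', 'load', 'Throttle', 'load');
```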
Another concept that has been used in various FBP systems is "back pressure", where a downstream component can tell its upstream component "don't send me more data".
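Independent of NoFlo's own APIs, the idea can be illustrated with a bounded queue between the two sides: the producer has to wait for free space, which is how the downstream "stop sending" signal propagates upstream. This is a generic sketch only; none of these names exist in NoFlo.

```javascript
// Generic back pressure sketch: the downstream side owns a bounded buffer,
// and the upstream side must wait for room before sending more data.
class BoundedQueue {
  constructor(capacity) {
    this.capacity = capacity;
    this.items = [];
    this.waiters = [];
  }

  // Called by the upstream side; resolves only once the queue has room,
  // so an awaiting producer automatically slows to the consumer's pace.
  async push(item) {
    while (this.items.length >= this.capacity) {
      await new Promise((resolve) => this.waiters.push(resolve));
    }
    this.items.push(item);
  }

  // Called by the downstream side when it finishes a job; frees one slot
  // and wakes up one waiting producer.
  shift() {
    const item = this.items.shift();
    const waiter = this.waiters.shift();
    if (waiter) waiter();
    return item;
  }
}
```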
With back pressure, one open question is how the maximum number of jobs for a component would be configured at the graph level. We have several mechanisms that could be used, like control inports or node/edge metadata.
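Both mechanisms can already be expressed in graph data, even though nothing enforces them yet; in the sketch below the `buffer` metadata key and the `maxjobs` port are hypothetical names, not existing NoFlo features.

```javascript
const { Graph } = require('noflo');

// Same hypothetical Jobs and Worker nodes as in the earlier sketch.
const graph = new Graph('throttled-jobs');
graph.addNode('Jobs', 'SomeJobSource');
graph.addNode('Worker', 'SomeAsyncWorker');

// Option 1: edge metadata carrying a per-connection capacity
// (the "buffer" key is made up; nothing consumes it today).
graph.addEdge('Jobs', 'out', 'Worker', 'in', { buffer: 5 });

// Option 2: a control inport on the component, set via an IIP
// ("maxjobs" is a made-up port name such a component could expose).
graph.addInitial(3, 'Worker', 'maxjobs');
```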