
gRPC outputs: fanout queuing mechanism #857

Closed
fntlnz opened this issue Sep 25, 2019 · 23 comments
@fntlnz
Contributor

fntlnz commented Sep 25, 2019

What would you like to be added:

Currently, the gRPC outputs support only a "round-robin" mechanism, in which all clients subscribe to the same queue and each client receives different alerts.

                                  +--------------------+
                                  |    gRPC client     |
           +---------------------->    subscribed      |
           |                      |                    |
           |                      |    (stream queue 1)|
           |                      +--------------------+
           |
           |
+----------+-----------+
|     gRPC outputs     |          +--------------------+
|                      |          |    gRPC client     |
|                      +---------->    subscribed      |
|     subscribe()      |          |                    |
|                      |          |    (stream queue 1)|
+----------+-----------+          +--------------------+
           |
           |
           |
           |                      +--------------------+
           |                      |    gRPC client     |
           |                      |    subscribed      |
           +---------------------->                    |
                                  |    (stream queue 1)|
                                  +--------------------+
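The round-robin behavior in the diagram above can be sketched as follows. This is a hypothetical illustration in Python, not Falco's actual implementation; the client and alert names are made up.

```python
import queue

# One shared queue models "stream queue 1" from the diagram: every
# subscriber pops from it, so each alert is consumed by exactly one
# client and the alerts are spread across the subscribers.
shared = queue.Queue()
for alert in ("alert-1", "alert-2", "alert-3"):
    shared.put(alert)

# Each client takes one alert from the same queue; no client sees them all.
deliveries = {client: shared.get()
              for client in ("client-A", "client-B", "client-C")}
```

After this runs, the three alerts are distributed one-per-client and the shared queue is empty, which is exactly the property fanout is meant to change.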

We want to add a new "fanout" mechanism, in which each client subscribes to its own queue and all clients receive the same alerts.

                                  +--------------------+
                                  |    gRPC client     |
           +---------------------->    subscribed      |
           |                      |                    |
           |                      |    (stream queue 1)|
           |                      +--------------------+
           |
           |
+----------+-----------+
|     gRPC outputs     |          +--------------------+
|                      |          |    gRPC client     |
|                      +---------->    subscribed      |
|     subscribe()      |          |                    |
|                      |          |    (stream queue 2)|
+----------+-----------+          +--------------------+
           |
           |
           |
           |                      +--------------------+
           |                      |    gRPC client     |
           |                      |    subscribed      |
           +---------------------->                    |
                                  |    (stream queue 3)|
                                  +--------------------+

To do this, our internal output mechanism will need to support sending the same message to multiple queues so that clients can read from different streams independently.
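The fanout variant can be sketched the same way. Again this is only an illustrative Python model (the `publish` helper and the names are invented, not part of Falco's API): each subscriber gets its own queue, and publishing copies the alert into every queue.

```python
import queue

# Per-subscriber queues model "stream queue 1/2/3" from the diagram.
subscribers = {name: queue.Queue()
               for name in ("client-A", "client-B", "client-C")}

def publish(alert):
    # Fanout: enqueue a copy of the alert for every subscriber.
    for q in subscribers.values():
        q.put(alert)

for alert in ("alert-1", "alert-2"):
    publish(alert)

# Each client drains only its own queue, independently of the others,
# and therefore sees every alert.
received = {name: [q.get() for _ in range(q.qsize())]
            for name, q in subscribers.items()}
```

The key design difference from round-robin is where the queue lives: one queue shared by all consumers yields work distribution, while one queue per consumer yields broadcast.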

Why is this needed:

Some users have already expressed the need to read the same alerts from multiple clients.

@leodido
Member

leodido commented Sep 25, 2019

Connecting to #822

@krisnova krisnova added this to the 0.19.0 milestone Sep 25, 2019
@stale

stale bot commented Nov 24, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Nov 24, 2019
@leodido
Member

leodido commented Nov 25, 2019

Don't close this bot!

@stale stale bot removed the wontfix label Nov 25, 2019
@leodido leodido removed this from the 0.19.0 milestone Dec 20, 2019
@stale

stale bot commented Feb 18, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Feb 18, 2020
@leodido
Member

leodido commented Feb 18, 2020 via email

@stale stale bot removed the wontfix label Feb 18, 2020
@stale

stale bot commented Apr 18, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Apr 18, 2020
@leodido
Member

leodido commented Apr 18, 2020 via email

@stale

stale bot commented Jun 17, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Jun 17, 2020
@leogr
Member

leogr commented Jun 18, 2020

I believe we still need this.
@leodido @fntlnz

@stale stale bot closed this as completed Jun 25, 2020
@leogr
Member

leogr commented Jul 1, 2020

keep

@leogr leogr reopened this Jul 1, 2020
@stale stale bot removed the wontfix label Jul 1, 2020
@stale

stale bot commented Aug 30, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Issues labeled "cncf", "roadmap" and "help wanted" will not be automatically closed. Please refer to a maintainer to get such label added if you think this should be kept open.

@stale stale bot added the wontfix label Aug 30, 2020
@leogr
Member

leogr commented Aug 31, 2020

I believe we should document the current state since it might not be so obvious for newcomers.
/help

@poiana
Contributor

poiana commented Aug 31, 2020

@leogr:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

I believe we should document the current state since it might not be so obvious for newcomers.
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@stale stale bot removed the wontfix label Aug 31, 2020
@poiana
Contributor

poiana commented Nov 29, 2020

Issues go stale after 90d of inactivity.

Mark the issue as fresh with /remove-lifecycle stale.

Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle stale

@poiana
Contributor

poiana commented Dec 30, 2020

Stale issues rot after 30d of inactivity.

Mark the issue as fresh with /remove-lifecycle rotten.

Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle rotten

@poiana
Contributor

poiana commented Jan 29, 2021

Rotten issues close after 30d of inactivity.

Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Provide feedback via https://github.com/falcosecurity/community.
/close

@poiana
Contributor

poiana commented Jan 29, 2021

@poiana: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Provide feedback via https://github.com/falcosecurity/community.
/close


@poiana poiana closed this as completed Jan 29, 2021
@leogr
Member

leogr commented Feb 1, 2021

/remove-lifecycle rotten
/reopen

@poiana
Contributor

poiana commented Feb 1, 2021

@leogr: Reopened this issue.

In response to this:

/remove-lifecycle rotten
/reopen


@poiana poiana reopened this Feb 1, 2021
@leogr leogr added this to the 1.0.0 milestone Feb 1, 2021
@poiana
Contributor

poiana commented May 2, 2021

Issues go stale after 90d of inactivity.

Mark the issue as fresh with /remove-lifecycle stale.

Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle stale

@poiana
Contributor

poiana commented Jun 1, 2021

Stale issues rot after 30d of inactivity.

Mark the issue as fresh with /remove-lifecycle rotten.

Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle rotten

@poiana
Contributor

poiana commented Jul 1, 2021

Rotten issues close after 30d of inactivity.

Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Provide feedback via https://github.com/falcosecurity/community.
/close

@poiana
Contributor

poiana commented Jul 1, 2021

@poiana: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Provide feedback via https://github.com/falcosecurity/community.
/close


@poiana poiana closed this as completed Jul 1, 2021