
Add to agent exponential backoff for fetching new tasks #3206

Closed
wants to merge 9 commits

Conversation

@6543 6543 (Member) commented Jan 15, 2024

Otherwise we DoS the server.


Sponsored by Kithara Software GmbH

@6543 6543 added the agent and enhancement (improve existing features) labels Jan 15, 2024
@@ -59,10 +98,10 @@ func (r *Runner) Run(runnerCtx context.Context) error {
// get the next workflow from the queue
work, err := r.client.Next(runnerCtx, r.filter)
@anbraten anbraten (Member) commented Jan 15, 2024

The server is blocking Next requests until it has a job, so there should be no need to back off here.

@6543 6543 (Member, Author)

Well, it does not ...

}
}

func (r *Runner) Run(runnerCtx context.Context) error {
retry := backoff.NewExponentialBackOff()
@anbraten anbraten (Member)

Why do you want to back off exponentially instead of linearly here?

@6543 6543 (Member, Author)

Because if you fetched a task last time, it's more likely that there are more to come.

Otherwise, in the worst case, it waits 10s.
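
A minimal sketch of how such a loop could look, assuming the cenkalti/backoff package (which provides NewExponentialBackOff, NextBackOff, and Reset) plus the ~10s cap mentioned above; the field settings and loop structure here are illustrative, not the PR's exact code:

func (r *Runner) Run(runnerCtx context.Context) error {
	retry := backoff.NewExponentialBackOff()
	retry.MaxInterval = 10 * time.Second // assumed worst-case wait from the discussion above
	retry.MaxElapsedTime = 0             // never stop retrying

	for {
		// get the next workflow from the queue
		work, err := r.client.Next(runnerCtx, r.filter)
		if err != nil || work == nil {
			// empty poll or transient error: wait before asking the server again
			select {
			case <-runnerCtx.Done():
				return runnerCtx.Err()
			case <-time.After(retry.NextBackOff()):
			}
			continue
		}

		// a task was fetched, so more are likely queued: reset the delay
		retry.Reset()

		// ... execute the workflow ...
	}
}

The key point is the Reset() after a successful fetch: consecutive tasks are picked up immediately, while an idle or unreachable server is only polled every few seconds up to the cap.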

@6543 6543 (Member, Author) commented Jan 15, 2024

$ du -h agent_docker-wsl2.log
605G    agent_docker-wsl2.log

We also had a 1.2 TB log the other day ...

It just did not kill the whole server because we had ~2 TB of free storage.

@6543 6543 (Member, Author) commented Jan 15, 2024

With this, in the worst case we have ~1.7 MB of logs per day.
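
As a rough back-of-the-envelope check (the per-line size is an assumption, not taken from the PR): with the backoff capped at roughly 10s, an idle agent issues at most 86,400 / 10 = 8,640 polls per day; at ~200 bytes of log output per poll that is about 1.7 MB per day, compared to the hundreds of gigabytes produced by the uncapped loop shown above.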

@anbraten anbraten (Member)

> With this, in the worst case we have ~1.7 MB of logs per day.

Did you enable debug logs?

@6543 6543 (Member, Author) commented Jan 15, 2024

No ... then it would not be a bug.

[screenshot]

@6543 6543 (Member, Author) commented Jan 15, 2024

☝️ The mentioned bug should already be fixed ... that's why I labeled it enhancement, not bug.

@6543 6543 (Member, Author) commented Jan 15, 2024

And no, the server should not and does not block the agent on a Next() call.

So just enable debug logging and then disable the agent on the server ... you can see that the agent tries to fetch in a loop nonstop.
Why should we waste CPU resources on constant queries if we can have a backoff?

@anbraten anbraten mentioned this pull request Feb 12, 2024
@qwerty287 qwerty287 (Contributor)

Superseded by #3378

@qwerty287 qwerty287 closed this Feb 17, 2024
Labels
agent, enhancement (improve existing features)
3 participants