
Check that clients for all scalers are correctly closed #1096

Closed
zroubalik opened this issue Sep 7, 2020 · 5 comments · Fixed by #1129

Comments

@zroubalik
Member

zroubalik commented Sep 7, 2020

Check all scalers and inspect the parts of the code where the client connects to the target event source; this is usually in GetMetrics(). Verify that the client opened to fetch the metrics and data is properly closed, either in the same part of the code where it was opened or, preferably, in the Close() call.

For reference check this PR for GCP PubSub that is closing a client: #1087

As part of this issue, we should check all scalers and confirm that their clients are properly closed. If any are not, a PR with the fix should be opened.
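To illustrate the preferred pattern, here is a minimal sketch (not actual KEDA code; the scaler and client names are hypothetical): the client is created once when the scaler is built, reused by GetMetrics(), and released in Close() rather than inside GetMetrics():

```go
package main

import "fmt"

// eventSourceClient is a hypothetical stand-in for a real event-source
// client (e.g. a GCP Pub/Sub client as in #1087).
type eventSourceClient struct {
	closed bool
}

// Close releases the connection; here it just records that it was called.
func (c *eventSourceClient) Close() error {
	c.closed = true
	return nil
}

// exampleScaler mirrors the shape described above: it holds one client
// for its whole lifetime instead of opening one per GetMetrics() call.
type exampleScaler struct {
	client *eventSourceClient
}

// GetMetrics reuses the existing client; it does NOT open a new one.
func (s *exampleScaler) GetMetrics() (int64, error) {
	return 42, nil
}

// Close releases the underlying client when the scaler is discarded.
func (s *exampleScaler) Close() error {
	return s.client.Close()
}

func main() {
	s := &exampleScaler{client: &eventSourceClient{}}
	if _, err := s.GetMetrics(); err != nil {
		panic(err)
	}
	if err := s.Close(); err != nil {
		panic(err)
	}
	fmt.Println(s.client.closed) // true
}
```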

@aman-bansal
Contributor

Hi @zroubalik, I would like to take this issue. It seems interesting and a good place to start with development. Also, while looking at the code, I found other possible bug areas related to clients. For example, in keda/pkg/scalers/artemis_scaler.go:166 we are creating a new http.Client again and again, which is not standard practice. As the http.Client documentation itself says: "The Client's Transport typically has internal state (cached TCP connections), so Clients should be reused instead of created as needed. Clients are safe for concurrent use by multiple goroutines." This problem seems to have spread to all scalers' clients. I would like to take that on as well. Since I am new to this codebase, I might be missing a thing or two; do let me know if I am moving in the wrong direction.

@zroubalik
Member Author

zroubalik commented Sep 9, 2020

@aman-bansal hi, thanks, that's great. Yeah, I am aware of the problem of recreating the clients and that it is not ideal. But as you have mentioned, this applies to all scalers and it was originally designed this way. Changing it requires a bigger code change and a broader discussion, as it is not that simple.

I'd recommend splitting those efforts into separate issues. So please go ahead with this issue, and I will create another one where we can discuss the next approach.

@aman-bansal
Contributor

Sounds good and reasonable. Thank you so much 👍 . I will start right away.

@aman-bansal
Contributor

Hi @zroubalik, I have raised the PR. There were two scalers whose client connections were not closed properly; I have fixed that. To keep things consistent, I took inspiration from other scaler implementations such as redis_streams_scaler. This PR, along with #1087, should fix the scalers' connection-close issue. Please review it.

@redyamsirisha

How can we trigger a single workflow after all other workflows are completed in GitHub Actions?
