rabbitmq trigger with invalid queueName creates connections that persist #6283
Hello
Interesting... In theory, on each scaler failure KEDA calls Close() on the scaler before refreshing it, and that closes all the connections: keda/pkg/scalers/rabbitmq_scaler.go Lines 522 to 534 in baec715
Are you willing to take a look?
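For context, Close() on the rabbitmq scaler is expected to tear down the AMQP channel and the underlying connection. The sketch below is a simplified illustration of that idea, assuming amqp091-go and illustrative field names; it is not the exact code at the lines referenced above.

package scalersketch

import (
	"context"

	amqp "github.com/rabbitmq/amqp091-go"
)

// rabbitMQScalerSketch mirrors only the fields relevant to Close();
// the field names are assumptions for illustration, not KEDA's struct.
type rabbitMQScalerSketch struct {
	connection *amqp.Connection
	channel    *amqp.Channel
}

// Close tears down the channel and the underlying TCP connection so that
// refreshing the scaler does not leave orphaned connections behind.
func (s *rabbitMQScalerSketch) Close(_ context.Context) error {
	if s.channel != nil {
		if err := s.channel.Close(); err != nil {
			return err
		}
	}
	if s.connection != nil {
		return s.connection.Close()
	}
	return nil
}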
I think the issue is in this piece of code:
This call returns an error indicating that the queue does not exist. When that happens, the error is returned immediately and the Close() method is never called, so the connection remains open. A new connection is then created on the next attempt. I think this explains why the number of connections keeps growing when a non-existent queue is configured.
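To make that concrete, here is a hedged sketch of the pattern described above, assuming amqp091-go; the helper name getQueueDepth and its dial-per-call shape are illustrative only, not KEDA's actual structure. The point is the early return when the queue is missing, which leaves the just-opened connection behind.

package scalersketch

import (
	"fmt"

	amqp "github.com/rabbitmq/amqp091-go"
)

// getQueueDepth illustrates the leak: if QueueInspect fails because the
// queue does not exist, we return early and the freshly dialed connection
// and channel are never closed. (Hypothetical helper, not KEDA's code.)
func getQueueDepth(uri, queueName string) (int, error) {
	conn, err := amqp.Dial(uri)
	if err != nil {
		return 0, err
	}
	ch, err := conn.Channel()
	if err != nil {
		return 0, fmt.Errorf("opening channel: %w", err) // conn leaks here too
	}
	q, err := ch.QueueInspect(queueName)
	if err != nil {
		// Early return: neither ch nor conn is closed, so the TCP
		// connection to RabbitMQ stays open until something else
		// (e.g. the scaler's Close()) tears it down.
		return 0, fmt.Errorf("inspecting queue %q: %w", queueName, err)
	}
	_ = conn.Close() // happy path cleans up
	return q.Messages, nil
}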
The call to keda/pkg/scaling/cache/scalers_cache.go Lines 125 to 142 in 9980181
Specifically, it's
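For reference, the refresh path under discussion is roughly of this shape. This is a simplified, assumed sketch (the closer interface and refreshScaler helper are hypothetical, not the actual scalers_cache code); it shows why closing the old scaler before replacing it is what keeps the connection count flat.

package scalersketch

import "context"

// closer is the minimal interface a scaler needs for this sketch.
type closer interface {
	Close(ctx context.Context) error
}

// refreshScaler illustrates the pattern discussed above: if the old scaler
// is not closed before being replaced, its RabbitMQ connection is orphaned
// and a brand-new connection is opened for the replacement.
// (Hypothetical helper, not the actual scalers_cache implementation.)
func refreshScaler(ctx context.Context, old closer, build func() (closer, error)) (closer, error) {
	if old != nil {
		// Even if Close returns an error, the attempt matters: it is what
		// releases the old AMQP connection before the scaler is replaced.
		_ = old.Close(ctx)
	}
	return build()
}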
I still observe this behavior in more than one cluster. I have been trying to put together better steps to reproduce the problem, but I haven't been able to so far. I will post them when I manage.
I managed to get connections that stay open locally. The way I did it is not the same as what we have in our Kubernetes cluster, though the issue could be related. Anyway, with KEDA v2.16.0, released a few hours ago, the same issue doesn't seem to happen, so it could be that this is already fixed. I will close this issue. Just for reference, here is how I reproduced it locally:
# Cluster
kind create cluster --name keda-cluster --image kindest/node:v1.30.0

# RabbitMQ
echo 'apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3.12.10-management
        ports:
        - containerPort: 5672
        - containerPort: 15672
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  namespace: default
spec:
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  - name: management
    port: 15672
    targetPort: 15672
  selector:
    app: rabbitmq
' > rabbitmq-deployment.yaml
kubectl apply -f rabbitmq-deployment.yaml
kubectl port-forward -n default $(kubectl get pods -n default | grep rabbitmq | awk '{print $1}') 15672:15672 &
# Deployment
echo 'apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
' > my-deployment.yaml
kubectl apply -f my-deployment.yaml
# KEDA
kubectl create namespace keda
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --version 2.14.0

echo "apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  pollingInterval: 30
  cooldownPeriod: 180
  minReplicaCount: 2
  maxReplicaCount: 5
  triggers:
  - type: rabbitmq
    metadata:
      protocol: amqp
      queueName: queue-that-does-not-exist
      mode: QueueLength
      value: \"20\"
      host: amqp://guest:guest@rabbitmq.default.svc.cluster.local:5672/
" > scaledobject.yaml
kubectl apply -f scaledobject.yaml
Let me know if what I did is overkill and there are easier ways to reproduce bugs.
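As a side note (not part of the original steps), one way to watch the connection count grow is to poll the RabbitMQ management API that was port-forwarded to localhost:15672 above. The small Go program below counts the open connections, assuming the image's default guest/guest credentials.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Counts open connections reported by the RabbitMQ management API
// (port-forwarded to localhost:15672 in the steps above).
func main() {
	req, err := http.NewRequest(http.MethodGet, "http://localhost:15672/api/connections", nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("guest", "guest") // default credentials of the rabbitmq image
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var conns []json.RawMessage
	if err := json.NewDecoder(resp.Body).Decode(&conns); err != nil {
		panic(err)
	}
	fmt.Printf("open connections to RabbitMQ: %d\n", len(conns))
}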
Report
When a ScaledObject is created with a rabbitmq trigger whose queueName points to a queue that doesn't exist in that RabbitMQ instance, the keda-operator pod keeps creating new connections that persist. Eventually a limit can be reached and new connections to RabbitMQ are rejected.
Expected Behavior
When a RabbitMQ queue doesn't exist, the ScaledObject should fail and close any opened connections to RabbitMQ.
Actual Behavior
The ScaledObject fails periodically, and the number of connections to RabbitMQ keeps increasing until creating new connections fails.
Steps to Reproduce the Problem
Logs from KEDA operator
KEDA Version
2.14.0
Kubernetes Version
1.30
Platform
Microsoft Azure
Scaler Details
rabbitmq
Anything else?
No response