prometheus-server CrashLoopBackOff #15742
I have the same issue. Here is the describe output for the pod:
Name:    prometheus-server-55479c9d54-6gh9t
Normal   Scheduled   47s   default-scheduler   Successfully assigned monitoring/prometheus-server-55479c9d54-6gh9t to phx3187268
Seeing a very similar kind of behaviour on my dask-scheduler pod when deploying stable/dask; I opened a ticket for it: #15979
Here is the log:
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Having the same issue. Events:
Normal   Scheduled   7m21s   default-scheduler   Successfully assigned monitoring/prometheus-server-75959db9-5v6dm to docker02
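For reference, a minimal sketch of how the same information can be gathered (namespace and pod name taken from the events above; the container name is an assumption based on the chart's usual naming):
kubectl -n monitoring describe pod prometheus-server-75959db9-5v6dm
# container name assumed from the chart naming; adjust if it differs
kubectl -n monitoring logs prometheus-server-75959db9-5v6dm -c prometheus-server --previous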
Also seeing this when simply running the install.
Helm Version: v2.14.3
I'm seeing the same problem. Is there a solution or workaround?
Same problem here, using:
Tried using "server.skipTSDBLock=true". It bypasses that step, but fails at the next one:
Then tried server.persistentVolume.mountPath=/tmp as a test, and it also fails:
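For anyone trying the same thing, a hedged sketch of how those overrides can be applied with --set; the release name "prometheus" is taken from the install command in the original report, and note that these flags only bypass or relocate the failure rather than fix it:
# bypass the TSDB lock-file check
helm upgrade prometheus stable/prometheus --set server.skipTSDBLock=true
# separate test only: relocate the data mount path
helm upgrade prometheus stable/prometheus --set server.persistentVolume.mountPath=/tmp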
I solved this problem in the following way:
from:
to:
Honestly, I am not sure this change won't cause another problem, but it works now.
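The original "from" and "to" snippets are not included above, so here is only a hedged sketch of the securityContext override that later comments in this thread point to; the 65534 ("nobody") UID/GID values are the commonly cited ones and are assumptions that must match the ownership of your persistent volume:
# assumed values: 65534 is the "nobody" user/group; fsGroup makes Kubernetes
# set group ownership on the data volume so Prometheus can write its TSDB
helm upgrade prometheus stable/prometheus \
  --set server.securityContext.runAsUser=65534 \
  --set server.securityContext.runAsNonRoot=true \
  --set server.securityContext.runAsGroup=65534 \
  --set server.securityContext.fsGroup=65534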
I am having the same issue; changing the securityContext does not fix it.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity. |
Nice, it worked after setting securityContext:
Setting securityContext: fixed my issue as well. So is there some other part of the setup/config that is expected to be done ahead of time that's missing?
Worked for me (using Rancher; edited the prometheus-server deployment YAML file).
That worked for me. Thank you very much.
I did the installation via Helm, edited the values file, and put in these values.
Describe the bug
After installing the stable/prometheus chart, the prometheus-server pod stays in CrashLoopBackOff.
Version of Helm and Kubernetes:
Helm Version: v2.14.2
Kubernetes Version: v1.15.0
Which chart:
stable/prometheus
What happened:
The pod named prometheus-server-66fbdff99b-z4vbj is always in the CrashLoopBackOff state.
What you expected to happen:
The prometheus-server pod is supposed to start and keep running.
How to reproduce it (as minimally and precisely as possible):
helm install stable/prometheus --name prometheus --namespace prometheus --set server.global.scrape_interval=5s,server.global.evaluation_interval=5s
Anything else we need to know: