Default password and username #589
Comments
@itsecforu I just spent several hours debugging the same issue. If you have the same issue I had, I can help you. Can you do a test for me and see if you are getting a 405 in your network response when you click login? (in your browser debugger)
@thedewpoint How can I test it? Do you mean Chrome? I have limited rights in the working environment.
@itsecforu Yes, in Chrome. Just open the developer tools (F12) and click on the Network tab. Then try to log on and check the response code. If it's a 405, you have the same issue I did and I can walk you through the solution.
I'm going away for Memorial Day, so I'll post what I did in case you have the same issue. After installing the Helm chart I (incorrectly) changed the service type of harbor-portal from ClusterIP to LoadBalancer to expose it. Even though I was able to access the web app, login requests were incorrectly routed to the portal service when they should have been routed to the core service. I changed the portal service back to ClusterIP and then made sure it was running on port 80. This chart sets up the ingress for Harbor if you run …
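For reference, a minimal sketch of the check and revert described above, assuming the default namespace and a portal service named my-release-harbor-portal (both assumptions; use kubectl get svc to find yours):

kubectl get svc my-release-harbor-portal -o wide                                  # confirm the current type and port
kubectl patch svc my-release-harbor-portal -p '{"spec":{"type":"ClusterIP"}}'     # revert LoadBalancer back to ClusterIP
# on some cluster versions you may also need to clear leftover nodePort fields, e.g. via kubectl edit svc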
I'm having this issue when deploying with an Ingress. I get a 405 from an nginx server (I use haproxy-ingress, so it's definitely one from the Harbor deployment).
@thedewpoint Thanks for the reply. I got a 502 error.
@0pendev Sounds like you are having the same problem as me. You should be able to troubleshoot with my steps above (see the command sketch below):
1. Check that the domain you are accessing Harbor on matches the one in your ingress.
2. Check that your pods are listening on the correct ports.
3. Check the access logs inside the core pod and make sure the logon request is being routed to core.
@itsecforu Unfortunately that's a different problem. I need more detail from the logs to figure out what's going on.
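Roughly, those three checks as commands (the release name my-release is an assumption; substitute your own):

kubectl get ingress -A                                        # 1. the host here must match the domain you browse to
kubectl get svc -A | grep harbor                              # 2. portal/core should expose the expected ports
kubectl logs deploy/my-release-harbor-core | grep -i login    # 3. the logon request should appear in the core pod's logs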
@thedewpoint Seems so, 'cause I used type NodePort. I also tried changing the type to LoadBalancer and got this situation:
@itsecforu You can see that the Harbor core pod is down, among some other pods; they have crashed. I would look at the logs to determine why, for that pod and the others. Also, can you inspect your ingress for Harbor and post it here?
I haven't got an ingress, as you can see.
Can you try running kubectl get ingress?
I just read your other issue; can you get logs from the postgres container?
ingress:
Postgres does not seem OK:
It looks OK. I have to double check, but I'm pretty sure my logs look like that as well.
@itsecforu Can you get the pod logs for the "core" Harbor pod?
Hmm, I have got 2 core pods. The 1st has nothing in its logs; the 2nd:
That's problem #1: we need to figure out why it can't connect to the database. Are there no database logs corresponding to that same timestamp? Also, you shouldn't have 2 core containers; can you delete the one that has no logs? Make sure your service for core is correctly routing to the pod that is running.
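As a sketch of those two follow-ups (label selector, release name, and the example pod name are assumptions; substitute the core pod that has no logs):

kubectl get pods -l app=harbor,component=core                  # list both core pods (labels are an assumption)
kubectl delete pod my-release-harbor-core-565b9db589-c52qw     # example name: delete the pod with no logs; the Deployment recreates it
kubectl get endpoints my-release-harbor-core                   # the core Service should point at the single running pod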
How do I solve problem #1?
I notice my Helm version is 2.14.3; maybe that is the root of all evil?
Logs:
harbor-registry:
harbor-clair:
harbor-core:
notary-signer:
harbor-jobservice:
harbor-nginx:
harbor-notary-server:
harbor-database:
Here is exactly what I changed in values.yaml (highlighted in bold):
Can somebody help? :-(
Did you already try upgrading Helm? This is my helm output:
I updated to 2.8 |
Why not 3.0.2, just so we are on the same version?
I can't update this way; I need to install version 3 in parallel.
The same "Readiness probe failed" with Helm 3.0.2 :-(
From my database pod:
Maybe I need to start the DB inside the pod?
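A quick manual check of the in-pod database, assuming the default pod name and the postgres user (both assumptions; pg_isready ships with PostgreSQL-based images):

kubectl exec -it my-release-harbor-database-0 -- pg_isready -U postgres    # should report "accepting connections"
kubectl logs my-release-harbor-database-0 --tail=50                        # look for startup or authentication errors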
I was trying to install with: and:
I put together a minimal values file:
but the situation is similar: "Readiness probe failed" on many pods. Help, folks! :-(
Same root cause as #585: the network is not working as expected in your k8s cluster, and the name of the service can't be resolved.
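A minimal in-cluster DNS check for that (the release name my-release and the busybox image tag are assumptions; run it in the same namespace as Harbor):

kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-release-harbor-database
# repeat for my-release-harbor-core, my-release-harbor-redis, etc.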
Helm (2.17) with the same problem; type nodePort, persistence.enabled=false.
Same issue.
Add repo:
Get:
Logs
Tried to log in with the login/password admin:admin.
I'm facing this issue and --set harborAdminPassword=admin is not helping me :(
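As I understand it, harborAdminPassword only seeds the admin password on the first install; after that the password lives in Harbor's database, so re-running helm with --set won't change it. One way to see what password the chart actually rendered (the secret name and key are assumptions based on the goharbor chart's naming; the value is base64-encoded):

kubectl get secret my-release-harbor-core -o jsonpath='{.data.HARBOR_ADMIN_PASSWORD}' | base64 -d; echo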
Same issue
❯ kubectl --kubeconfig /var/tmp/default.kubeconfig get pod
NAME READY STATUS RESTARTS AGE
my-release-harbor-chartmuseum-59c694665-sgbnk 1/1 Running 0 7m33s
my-release-harbor-core-565b9db589-c52qw 1/1 Running 0 7m33s
my-release-harbor-database-0 1/1 Running 0 7m33s
my-release-harbor-jobservice-588578f86f-2l6dx 1/1 Running 0 7m33s
my-release-harbor-nginx-6cbcdbd4db-dp9qt 1/1 Running 0 7m33s
my-release-harbor-notary-server-6bc6b9bfbf-2bww2 1/1 Running 0 7m33s
my-release-harbor-notary-signer-5bf68b9455-n5sfn 1/1 Running 0 7m33s
my-release-harbor-portal-7fb85d5598-fpvgk 1/1 Running 0 7m33s
my-release-harbor-redis-0 1/1 Running 0 7m33s
my-release-harbor-registry-67d799947-h94b9 2/2 Running 0 7m33s
my-release-harbor-trivy-0 1/1 Running 0 7m33s
❯ kubectl --kubeconfig /var/tmp/default.kubeconfig get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
harbor NodePort 10.97.236.126 <none> 80:30002/TCP,4443:30004/TCP 7m34s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 38m
my-release-harbor-chartmuseum ClusterIP 10.97.32.163 <none> 80/TCP 7m35s
my-release-harbor-core ClusterIP 10.98.69.192 <none> 80/TCP 7m35s
my-release-harbor-database ClusterIP 10.110.48.139 <none> 5432/TCP 7m35s
my-release-harbor-jobservice ClusterIP 10.111.175.111 <none> 80/TCP 7m35s
my-release-harbor-notary-server ClusterIP 10.96.75.175 <none> 4443/TCP 7m34s
my-release-harbor-notary-signer ClusterIP 10.109.127.153 <none> 7899/TCP 7m34s
my-release-harbor-portal ClusterIP 10.104.87.126 <none> 80/TCP 7m34s
my-release-harbor-redis ClusterIP 10.96.229.212 <none> 6379/TCP 7m34s
my-release-harbor-registry ClusterIP 10.104.250.250 <none> 5000/TCP,8080/TCP 7m34s
my-release-harbor-trivy ClusterIP 10.103.124.203 <none> 8080/TCP 7m34s
Use admin/Harbor12345.
I actually found that it happens when you disable TLS in the Harbor Helm chart.
Hi there, I faced the same credential issue and answered here: #485 (comment). Hope it helps.
The default password is Harbor12345.
Same problem with Kubernetes 1.22.8 on DigitalOcean managed k8s. I installed the Helm chart for Harbor (not the Bitnami one), used …
This also happens when I enable TLS.
In my case the problem came from externalURL, which was not filled in (service type ClusterIP).
For context, on my side Harbor is behind Traefik via an IngressRoute.
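A sketch of filling in externalURL at install time under that kind of setup (the harbor/harbor repo alias, release name, hostname, and ClusterIP exposure are all assumptions; the Traefik IngressRoute itself is not shown):

helm install my-release harbor/harbor \
  --set expose.type=clusterIP \
  --set expose.tls.enabled=false \
  --set externalURL=https://harbor.example.com   # must be set and must include the scheme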
I solved it by port-forwarding the nginx pod, not the harbor-portal service or pod:
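Roughly, assuming a deployment named my-release-harbor-nginx and the non-TLS container port 8080 (both assumptions; check the pod's containerPort first):

kubectl port-forward deploy/my-release-harbor-nginx 8080:8080
# then browse to http://localhost:8080 and log in there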
For me, this comment helped fix it: #589 (comment)
Can someone please help? I'm also facing the same issue, but none of the above solutions worked. I want to deploy Harbor using NodePort as follows: However, I'm unable to log in at :30087 using admin and Harbor12345.
EDIT: OK, I fixed it. Thought I should let everyone know: you also need to add the protocol in the externalURL value:
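A sketch of that NodePort install with the scheme included in externalURL (the node IP is a placeholder, port 30087 comes from the comment above, and the harbor/harbor repo alias and flag names are assumptions):

helm install my-release harbor/harbor \
  --set expose.type=nodePort \
  --set expose.tls.enabled=false \
  --set expose.nodePort.ports.http.nodePort=30087 \
  --set externalURL=http://203.0.113.10:30087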
Hello everyone. Unable to log in with standard credentials.
Which pod should I describe?
Where can I find error information?