The relationship between components #585

Closed
itsecforu opened this issue May 14, 2020 · 2 comments
itsecforu commented May 14, 2020

Hello everyone!

I've run into a problem between the components of your tool.

As far as I can see, several pods are stuck in a crash loop, apparently waiting on one another, and it never recovers.

Helm status:


helm status harbor-registry
LAST DEPLOYED: Wed May  6 15:20:14 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                DATA  AGE
harbor-registry-harbor-chartmuseum  23    7d19h
harbor-registry-harbor-core         41    7d19h
harbor-registry-harbor-jobservice   1     7d19h
harbor-registry-harbor-nginx        1     7d19h
harbor-registry-harbor-registry     2     7d19h

==> v1/Deployment
NAME                                  READY  UP-TO-DATE  AVAILABLE  AGE
harbor-registry-harbor-chartmuseum    1/1    1           1          7d19h
harbor-registry-harbor-clair          0/1    1           0          7d19h
harbor-registry-harbor-core           0/1    1           0          7d19h
harbor-registry-harbor-jobservice     0/1    1           0          7d19h
harbor-registry-harbor-nginx          0/1    1           0          7d19h
harbor-registry-harbor-notary-server  1/1    1           1          7d19h
harbor-registry-harbor-notary-signer  1/1    1           1          7d19h
harbor-registry-harbor-portal         1/1    1           1          7d19h
harbor-registry-harbor-registry       1/1    1           1          7d19h

==> v1/PersistentVolumeClaim
NAME                                STATUS  VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS   AGE
harbor-registry-harbor-chartmuseum  Bound   pv8     10Gi      RWO           local-storage  7d19h
harbor-registry-harbor-jobservice   Bound   pv5     10Gi      RWO           local-storage  7d19h
harbor-registry-harbor-registry     Bound   pv9     10Gi      RWO           local-storage  7d19h

==> v1/Pod(related)
NAME                                                   READY  STATUS            RESTARTS  AGE
harbor-registry-harbor-chartmuseum-74f66599c8-rg6gl    1/1    Running           0         7d19h
harbor-registry-harbor-clair-d96cdfc94-xbtb2           0/2    CrashLoopBackOff  3751      7d19h
harbor-registry-harbor-core-78cf9569d5-6ljdx           0/1    CrashLoopBackOff  1768      7d19h
harbor-registry-harbor-database-0                      1/1    Running           0         7d19h
harbor-registry-harbor-jobservice-6b9f4fbb66-mtpws     0/1    CrashLoopBackOff  1282      7d19h
harbor-registry-harbor-nginx-6bf7b7f7df-444ng          0/1    CrashLoopBackOff  2065      7d19h
harbor-registry-harbor-notary-server-66f857b4c4-flss2  1/1    Running           0         7d19h
harbor-registry-harbor-notary-signer-77c7977bc6-8wvj4  1/1    Running           0         7d19h
harbor-registry-harbor-portal-77c5877b6f-cfw42         1/1    Running           0         7d19h
harbor-registry-harbor-redis-0                         0/1    Running           1873      7d19h
harbor-registry-harbor-registry-566c7d9fc9-rwlnc       2/2    Running           0         7d19h

==> v1/Secret
NAME                                  TYPE    DATA  AGE
harbor-registry-harbor-chartmuseum    Opaque  1     7d19h
harbor-registry-harbor-clair          Opaque  3     7d19h
harbor-registry-harbor-core           Opaque  7     7d19h
harbor-registry-harbor-database       Opaque  1     7d19h
harbor-registry-harbor-jobservice     Opaque  1     7d19h
harbor-registry-harbor-notary-server  Opaque  5     7d19h
harbor-registry-harbor-registry       Opaque  2     7d19h

==> v1/Service
NAME                                  TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                      AGE
harbor                                NodePort   10.233.30.18   <none>       80:30088/TCP,4443:30004/TCP  7d19h
harbor-registry-harbor-chartmuseum    ClusterIP  10.233.31.213  <none>       80/TCP                       7d19h
harbor-registry-harbor-clair          ClusterIP  10.233.24.63   <none>       8080/TCP                     7d19h
harbor-registry-harbor-core           ClusterIP  10.233.2.156   <none>       80/TCP                       7d19h
harbor-registry-harbor-database       ClusterIP  10.233.20.149  <none>       5432/TCP                     7d19h
harbor-registry-harbor-jobservice     ClusterIP  10.233.37.70   <none>       80/TCP                       7d19h
harbor-registry-harbor-notary-server  ClusterIP  10.233.36.118  <none>       4443/TCP                     7d19h
harbor-registry-harbor-notary-signer  ClusterIP  10.233.57.215  <none>       7899/TCP                     7d19h
harbor-registry-harbor-portal         ClusterIP  10.233.21.66   <none>       80/TCP                       7d19h
harbor-registry-harbor-redis          ClusterIP  10.233.38.44   <none>       6379/TCP                     7d19h
harbor-registry-harbor-registry       ClusterIP  10.233.45.8    <none>       5000/TCP,8080/TCP            7d19h

==> v1/StatefulSet
NAME                             READY  AGE
harbor-registry-harbor-database  1/1    7d19h
harbor-registry-harbor-redis     0/1    7d19h

[Screenshots attached: harb1 through harb5]
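
To dig into a crash-looping pod, the usual commands are as follows (a sketch; the pod name is taken from the listing above, and --previous shows the logs of the last crashed run):

kubectl describe pod harbor-registry-harbor-clair-d96cdfc94-xbtb2
kubectl logs harbor-registry-harbor-clair-d96cdfc94-xbtb2 --all-containers --previous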

Logs from my pod harbor-registry-harbor-clair:

ls: /harbor_cust_cert: No such file or directory 
{"Event":"pgsql: could not open database: dial tcp: lookup harbor-registry-harbor-database on 10.233.0.10:53: read udp 10.233.116.225:56363-\u003e10.233.0.10:53: i/o timeout","Level":"fatal","Location":"main.go:97","Time":"2020-05-14 11:44:58.374361"}

Logs from my pod harbor-registry-harbor-jobservice:

ls: /harbor_cust_cert: No such file or directory 
2020-05-14T11:46:21Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.index.v1+json registered 
2020-05-14T11:46:21Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.list.v2+json registered 
2020-05-14T11:46:21Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.v1+prettyjws registered 
2020-05-14T11:46:21Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.config.v1+json registered 
2020-05-14T11:46:21Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.container.image.v1+json registered 
2020-05-14T11:46:21Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cncf.helm.config.v1+json registered 
2020-05-14T11:46:21Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cnab.manifest.v1 registered 
2020-05-14T11:46:21Z [DEBUG] [/pkg/permission/evaluator/rbac/casbin_match.go:65]: Starting regexp store purge in 55m0s 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/native/adapter.go:36]: the factory for adapter docker-registry registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/harbor/adaper.go:31]: the factory for adapter harbor registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/dockerhub/adapter.go:25]: Factory for adapter docker-hub registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/huawei/huawei_adapter.go:27]: the factory of Huawei adapter was registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/googlegcr/adapter.go:29]: the factory for adapter google-gcr registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/awsecr/adapter.go:47]: the factory for adapter aws-ecr registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/azurecr/adapter.go:15]: Factory for adapter azure-acr registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/aliacr/adapter.go:31]: the factory for adapter ali-acr registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/jfrog/adapter.go:30]: the factory of jfrog artifactory adapter was registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/quayio/adapter.go:38]: the factory of Quay.io adapter was registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/helmhub/adapter.go:30]: the factory for adapter helm-hub registered 
2020-05-14T11:46:21Z [INFO] [/replication/adapter/gitlab/adapter.go:17]: the factory for adapter gitlab registered 
2020-05-14T11:46:21Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-registry-harbor-core/api/internal/configurations 
2020-05-14T11:46:21Z [ERROR] [/jobservice/logger/sweeper_controller.go:40]: sweep logs error in *sweeper.FileSweeper at 1589456781: getting outdated log files under '/var/log/jobs' failed with error: open /var/log/jobs: permission denied 
2020-05-14T11:46:51Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-registry-harbor-core/api/internal/configurations: dial tcp: i/o timeout, url:http://harbor-registry-harbor-core/api/internal/configurations 
2020-05-14T11:46:51Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config 
2020-05-14T11:46:51Z [INFO] [/jobservice/job/impl/context.go:78]: Retry in 9 seconds 
2020-05-14T11:47:00Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-registry-harbor-core/api/internal/configurations 
2020-05-14T11:47:30Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-registry-harbor-core/api/internal/configurations: dial tcp: i/o timeout, url:http://harbor-registry-harbor-core/api/internal/configurations 
2020-05-14T11:47:30Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config 
2020-05-14T11:47:30Z [INFO] [/jobservice/job/impl/context.go:78]: Retry in 13 seconds 
2020-05-14T11:47:43Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-registry-harbor-core/api/internal/configurations 
2020-05-14T11:48:13Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-registry-harbor-core/api/internal/configurations: dial tcp: i/o timeout, url:http://harbor-registry-harbor-core/api/internal/configurations 
2020-05-14T11:48:13Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config 
2020-05-14T11:48:13Z [INFO] [/jobservice/job/impl/context.go:78]: Retry in 19 seconds 
2020-05-14T11:48:32Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-registry-harbor-core/api/internal/configurations
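
The jobservice errors show the same pattern: the pod cannot reach harbor-registry-harbor-core over the service network at all. A quick connectivity check from inside the cluster (illustrative pod name and image; any HTTP response, even an error status, would prove the network path works, while a timeout points at DNS/CNI):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sS -m 10 -o /dev/null -w '%{http_code}\n' \
  http://harbor-registry-harbor-core/api/internal/configurations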

I would really appreciate any pointers toward a solution. I'm ready to provide more information.

reasonerjt commented Jun 24, 2020

In your failure, jobservice depends on core for the API to get configurations, and core has to query the DB to respond to requests to the configuration API.

At the same time, the log of clair shows that it cannot access the DB either: the DNS lookup for the database service times out.

It seems to me the networking of your k8s cluster is not set up correctly.
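
If the cluster network is suspect, a reasonable first check is the state of the DNS and proxy components; a sketch (the label selectors below are the common defaults and may differ per distribution):

kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide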

@itsecforu

@reasonerjt Thanks, I solved it!
