This repository has been archived by the owner on Nov 5, 2024. It is now read-only.

missing atomix requirement #6

Open
paleozogt opened this issue Apr 16, 2020 · 4 comments

Comments

@paleozogt

When I try to install I see an error:

helm install charts/onos
Error: found in requirements.yaml, but missing in charts/ directory: atomix

Looking at requirements.yaml I see:

cat charts/onos/requirements.yaml
dependencies:
- name: atomix
  version: 0.1.0
  repository: file://../../../atomix-k8s

The path to atomix-k8s points outside this project. Where does it come from?

@SamuAlfageme

@paleozogt I think https://github.com/atomix/atomix-helm is the one you're looking for - this post also helped me back in the day: https://blog.zufardhiyaulhaq.com/install-onos-cluster-in-kubernetes/

@paleozogt
Author

@SamuAlfageme thanks for the pointer

@ssks3092

Hi,

I have cloned the atomix-helm repo and changed the path in requirements.yaml to file://../../../atomix-helm/atomix .

After that I was able to follow the steps provided in https://blog.zufardhiyaulhaq.com/install-onos-cluster-in-kubernetes/.
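For reference, the edited requirements.yaml would end up looking roughly like this (the exact relative path is an assumption that depends on where you cloned atomix-helm):

```yaml
# charts/onos/requirements.yaml
# Assumes atomix-helm was cloned as a sibling of this repository,
# so the chart lives at ../../../atomix-helm/atomix relative to this file.
dependencies:
- name: atomix
  version: 0.1.0
  repository: file://../../../atomix-helm/atomix
```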

But after running helm install as described above, the atomix pods are stuck in Pending state while the onos pods start running.
After further evaluating the issue I see a dependency on a persistent volume claim: "Error: Pod not scheduled - pod has unbound immediate PersistentVolumeClaims".

After changing the minikube version to v1.12.0 this error disappeared, but now, even though all the atomix and onos pods are in Running state, the onos pods keep restarting after some time, and in the onos logs I see a storage timeout exception during leader election. It seems onos is not able to connect to the atomix cluster.

Please help me with this; I have been stuck on this issue for some time now!

@ahmddp

ahmddp commented Aug 17, 2020

@ssks3092 I hit the same issue today; here is my workaround if you're using minikube like me:

  1. In minikube, a dynamic provisioner is already there by default:
kubectl get storageclasses.storage.k8s.io                                                                                                                                              
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  68m
  2. Just set the persistent volume storage class to standard, and don't forget to disable pod anti-affinity since there's only a single Kubernetes worker node:
helm install charts/onos \
--name onos \
--set heap=2G \
--set image.tag=1.14.1 \
--set atomix.image.tag=3.0.6 \
--set replicas=1 \
--set atomix.replicas=1 \
--set apps={openflow} \
--set atomix.persistence.size=1Gi \
--set atomix.persistence.storageClass=standard \
--set atomix.podAntiAffinity.enabled=false
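Conceptually, the atomix.persistence.* values above translate into a PersistentVolumeClaim request along these lines (an illustrative sketch, not the chart's exact rendered template; the claim name is hypothetical):

```yaml
# Illustrative only -- the actual claim is rendered by the chart's templates.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-atomix-0        # hypothetical name for the first atomix replica
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard  # minikube's default hostpath provisioner
  resources:
    requests:
      storage: 1Gi            # from --set atomix.persistence.size=1Gi
```

With storageClassName pointing at minikube's default provisioner, the claim is bound immediately and the atomix pods can be scheduled.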
