This repo is considered beta status.
For Elasticsearch/Search Guard 5 please refer to: https://github.com/floragunncom/search-guard-helm/tree/5.x
Please report issues via the GitHub issue tracker or get in contact with us
- Kubernetes 1.10 or later (Minikube and AWS EKS are tested)
- Helm (tested with Helm v2.11.0)
- kubectl
- Optional: Docker, if you want to build and push customized images
If you use Minikube, make sure that the VM has enough memory and CPUs assigned. We recommend at least 8 GB and 4 CPUs. By default, we deploy 5 pods (including Kibana).
You need to have the AWS CLI installed and configured.
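If the AWS CLI is not set up yet, configuring and verifying your credentials might look like this (access key, secret and region are your own):
aws configure
# verify that the credentials can be used
aws sts get-caller-identity
Then create the cluster: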
./examples/sg_aws_kops.sh -c mytestcluster
Delete the cluster when you are finished testing Search Guard:
./examples/sg_aws_kops.sh -d mytestcluster
If you do not have a running Kubernetes cluster and just want to try out our Helm chart, go with Minikube.
If Minikube is not already configured or running:
Please refer to https://kubernetes.io/docs/setup/minikube/ and https://github.com/kubernetes/minikube
Install https://www.virtualbox.org/wiki/Downloads
brew install kubectl kubernetes-helm
brew cask install minikube
Install https://www.virtualbox.org/wiki/Downloads or https://www.linux-kvm.org/page/Main_Page
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo cp kubectl /usr/local/bin/ && rm kubectl
minikube config set memory 8192
minikube config set cpus 4
minikube delete
minikube start
If Minikube is already configured/running, make sure it has at least 8 GB and 4 CPUs assigned:
minikube config view
If not, execute the steps above (Warning: minikube delete will delete your Minikube VM).
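Before deploying anything you can quickly check that the Minikube cluster is up and that kubectl talks to it:
minikube status
kubectl get nodes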
If the Helm Tiller pod is not already running on your cluster:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --wait --service-account tiller --upgrade
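Once Tiller is up, helm version reports both a client and a server version; you can also check the Tiller pod directly:
helm version
kubectl get pods --namespace kube-system | grep tiller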
helm repo add sg-helm https://floragunncom.github.io/search-guard-helm
helm search "search guard"
helm install --name sg-elk sg-helm/sg-helm --version 6.5.4-24.0-17.0-beta3
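To verify that the release was created and to see the pods come up:
helm status sg-elk
kubectl get pods --namespace default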
Please refer to the Helm documentation on how to override the chart default settings. See sg-helm/values.yaml for the documented set of settings you can override. Optionally read the comments in sg-helm/values.yaml and customize them to suit your needs.
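As a sketch, defaults can be overridden with a custom values file or with --set; the key name in the second command is purely illustrative, the real keys are documented in sg-helm/values.yaml:
# install with your own values file
helm install --name sg-elk sg-helm/sg-helm --version 6.5.4-24.0-17.0-beta3 --values my-values.yaml
# or override a single value on an existing release (illustrative key only)
helm upgrade sg-elk sg-helm/sg-helm --set some.key=some-value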
$ git clone https://github.com/floragunncom/search-guard-helm.git
$ helm install search-guard-helm/sg-helm
Check the Minikube dashboard and wait until all pods are running and green (this can take up to 15 minutes).
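If you prefer the command line over the dashboard, you can watch the pods instead:
kubectl get pods --namespace default -w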
export POD_NAME=$(kubectl get pods --namespace default -l "component=sg-elk-sg-helm,role=kibana" -o jsonpath="{.items[0].metadata.name}")
echo "Visit https://127.0.0.1:5601 and login with admin/admin to use Kibana"
kubectl port-forward --namespace default $POD_NAME 5601:5601
Passwords for the admin users, the Kibana user, the Kibana server and the Kibana cookie are generated randomly on initial deployment. They are stored in a secret named passwd-secret. All TLS certificates, including a root CA, are also generated randomly. You can find the root CA in a secret named root-ca-secret, the admin certificate in admin-cert-secret and the node certificates in nodes-cert-secret.
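To inspect one of the generated passwords you can read it from the secret. The exact key names depend on the chart, so list them first; the key in the second command is only a hypothetical example:
# list the keys stored in the secret
kubectl get secret passwd-secret --namespace default -o jsonpath='{.data}'
# decode a single entry, e.g. a hypothetical key named SG_KIBANA_PASSWORD
kubectl get secret passwd-secret --namespace default -o jsonpath='{.data.SG_KIBANA_PASSWORD}' | base64 --decode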
Whenever a node pod restarts we create a new certificate and remove the old one from nodes-cert-secret.
- The nodes are initially automatically initialized and configured
- To change the configuration, edit sg-helm/values.yaml and run helm upgrade, or run helm upgrade --values or helm upgrade --set. The pods will be reconfigured or restarted if necessary
- Alternatively you can exec into the sgadmin pod and run low-level sgadmin commands (experts only):
WARNING(!): You currently cannot update sg_internal_users.yml because of the random passwords. If you do this anyway you may lock yourself out of the cluster.
$ kubectl exec -it sg-elk-sg-helm-sgadmin-555b5f7df-9sqrm bash
[root@sg-elk-sg-helm-sgadmin-555b5f7df-9sqrm ~]# /root/sgadmin/tools/sgadmin.sh -h $DISCOVERY_SERVICE -si -icl -key /root/sgcerts/key.pem -cert /root/sgcerts/crt.pem -cacert /root/sgcerts/root-ca.pem
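The pod name above contains a random suffix. Assuming the sgadmin pod is labeled analogously to the Kibana pod (role=sgadmin is an assumption, not verified against the chart), you could look it up instead of copying the name:
# assumption: the sgadmin pod carries role=sgadmin, analogous to the Kibana label used above
export SGADMIN_POD=$(kubectl get pods --namespace default -l "component=sg-elk-sg-helm,role=sgadmin" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $SGADMIN_POD bash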
In that case, refer to the documentation of update_sgconfig_on_change in sg-helm/values.yaml so that your changes will not be overridden accidentally.
- https://github.com/lalamove/helm-elasticsearch
- https://github.com/pires/kubernetes-elasticsearch-cluster
- https://github.com/kubernetes/charts/tree/master/incubator/elasticsearch
- https://github.com/clockworksoul/helm-elasticsearch
Copyright 2018 floragunn GmbH
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.