Commit
Merge branch 'main' into dual-cluster-relay
jeromy-cannon authored Feb 14, 2025
2 parents 46c75a9 + 370f57b commit fe1cd5c
Showing 3 changed files with 224 additions and 4 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/script/gcs_test.sh
@@ -26,7 +26,7 @@ else
fi

if [ -z "${STORAGE_TYPE}" ]; then
storageType="aws_only"
storageType="minio_only"
else
storageType=${STORAGE_TYPE}
fi
@@ -88,7 +88,7 @@ npm run solo-test -- init
npm run solo-test -- cluster setup \
-s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
npm run solo-test -- node keys --gossip-keys --tls-keys -i node1
npm run solo-test -- deployment create -n "${SOLO_NAMESPACE}" --context kind-"${SOLO_CLUSTER_NAME}" --email [email protected] --deployment-clusters kind-"${SOLO_CLUSTER_NAME}" --deployment "${SOLO_DEPLOYMENT}"
npm run solo-test -- deployment create -i node1 -n "${SOLO_NAMESPACE}" --context kind-"${SOLO_CLUSTER_NAME}" --email [email protected] --deployment-clusters kind-"${SOLO_CLUSTER_NAME}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --deployment "${SOLO_DEPLOYMENT}"
npm run solo-test -- network deploy -i node1 --deployment "${SOLO_DEPLOYMENT}" \
--storage-type "${storageType}" \
"${STORAGE_OPTIONS[@]}" \
17 changes: 15 additions & 2 deletions Taskfile.helper.yml
@@ -69,7 +69,9 @@ tasks:
- echo "CONSENSUS_NODE_VERSION=${CONSENSUS_NODE_VERSION}"
- echo "SOLO_NAMESPACE=${SOLO_NAMESPACE}"
- echo "SOLO_DEPLOYMENT=${SOLO_DEPLOYMENT}"
- echo "CLUSTER_REF=${CLUSTER_REF}"
- echo "SOLO_CLUSTER_RELEASE_NAME=${SOLO_CLUSTER_RELEASE_NAME}"
- echo "CONTEXT=${CONTEXT}"
- echo "nodes={{ .nodes }}"
- echo "node_identifiers={{ .node_identifiers }}"
- echo "use_port_forwards={{ .use_port_forwards }}"
@@ -165,7 +167,18 @@ tasks:
deps:
- task: "init"
cmds:
- SOLO_HOME_DIR=${SOLO_HOME_DIR} npm run solo -- deployment create -n {{ .SOLO_NAMESPACE }} --context kind-${SOLO_CLUSTER_NAME} --email {{ .SOLO_EMAIL }} --deployment-clusters kind-${SOLO_CLUSTER_NAME} --cluster-ref kind-${SOLO_CLUSTER_NAME} --deployment "${SOLO_DEPLOYMENT}" --node-aliases {{.node_identifiers}} --dev
- |
if [[ "${CONTEXT}" != "" ]]; then
echo "CONTEXT=${CONTEXT}"
else
export CONTEXT="kind-${SOLO_CLUSTER_NAME}"
fi
if [[ "${CLUSTER_REF}" != "" ]]; then
echo "CLUSTER_REF=${CLUSTER_REF}"
else
export CLUSTER_REF="kind-${SOLO_CLUSTER_NAME}"
fi
SOLO_HOME_DIR=${SOLO_HOME_DIR} npm run solo -- deployment create -n {{ .SOLO_NAMESPACE }} --context ${CONTEXT} --email {{ .SOLO_EMAIL }} --deployment-clusters ${CLUSTER_REF} --cluster-ref ${CLUSTER_REF} --deployment "${SOLO_DEPLOYMENT}" --node-aliases {{.node_identifiers}} --dev
solo:keys:
silent: true
@@ -197,7 +210,7 @@ tasks:
export CONSENSUS_NODE_FLAG='--release-tag {{.CONSENSUS_NODE_VERSION}}'
fi
if [[ "${SOLO_CHART_VERSION}" != "" ]]; then
export SOLO_CHART_FLAG='--solo-chart-version ${SOLO_CHART_VERSION}'
export SOLO_CHART_FLAG="--solo-chart-version ${SOLO_CHART_VERSION}"
fi
SOLO_HOME_DIR=${SOLO_HOME_DIR} npm run solo -- network deploy --deployment "${SOLO_DEPLOYMENT}" --node-aliases {{.node_identifiers}} ${CONSENSUS_NODE_FLAG} ${SOLO_CHART_FLAG} ${VALUES_FLAG} ${SETTINGS_FLAG} ${LOG4J2_FLAG} ${APPLICATION_PROPERTIES_FLAG} ${GENESIS_THROTTLES_FLAG} ${DEBUG_NODE_FLAG} ${SOLO_CHARTS_DIR_FLAG} ${LOAD_BALANCER_FLAG} ${NETWORK_DEPLOY_EXTRA_FLAGS} -q --dev
- task: "solo:node:setup"
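The shell snippet added to `Taskfile.helper.yml` above falls back to `kind-${SOLO_CLUSTER_NAME}` when `CONTEXT` or `CLUSTER_REF` is unset. The same defaulting can be sketched more compactly with plain shell parameter expansion (a sketch only; the `solo-e2e` default is an example value, not taken from the Taskfile):

```bash
# Default CONTEXT and CLUSTER_REF to the local Kind cluster's context when unset.
# SOLO_CLUSTER_NAME="solo-e2e" is an assumed example value for illustration.
SOLO_CLUSTER_NAME="${SOLO_CLUSTER_NAME:-solo-e2e}"
CONTEXT="${CONTEXT:-kind-${SOLO_CLUSTER_NAME}}"
CLUSTER_REF="${CLUSTER_REF:-kind-${SOLO_CLUSTER_NAME}}"
echo "CONTEXT=${CONTEXT} CLUSTER_REF=${CLUSTER_REF}"
```

The `${VAR:-default}` form avoids the explicit `if [[ … ]]` / `export` dance while producing the same values.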
207 changes: 207 additions & 0 deletions test/e2e/dual-cluster/README.md
@@ -0,0 +1,207 @@
# Local Dual Cluster Testing
This document describes how to test the dual cluster setup locally.

## Prerequisites
- Make sure you give your Docker sufficient resources
- ? CPUs
- ? GB RAM
- ? GB Swap
- ? GB Disk Space
- If you are tight on resources, make sure that no other Kind clusters or other resource-heavy workloads are running on your machine.
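As a rough sanity check of the first point, you can ask the Docker daemon what it currently has allocated (a minimal sketch using standard `docker info` Go-template fields; the exact minimums for this test are not pinned down):

```bash
# Print the CPU and memory currently allocated to Docker; degrade gracefully
# when the CLI is missing or the daemon is not running.
if command -v docker >/dev/null 2>&1; then
  docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes' 2>/dev/null \
    || echo "docker daemon not reachable"
else
  echo "docker CLI not found"
fi
```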

## Calling
```bash
# from your Solo root directory run:
./test/e2e/dual-cluster/setup-dual-e2e.sh
```
Output:
```bash
SOLO_CHARTS_DIR:
Deleting cluster "solo-e2e-c1" ...
Deleting cluster "solo-e2e-c2" ...
1051ed73cb755a017c3d578e5c324eef1cae95c606164f97228781db126f80b6
"metrics-server" has been added to your repositories
"metallb" has been added to your repositories
Creating cluster "solo-e2e-c1" ...
✓ Ensuring node image (kindest/node:v1.31.4) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-solo-e2e-c1"
You can now use your cluster with:

kubectl cluster-info --context kind-solo-e2e-c1

Thanks for using kind! 😊
Release "metrics-server" does not exist. Installing it now.
NAME: metrics-server
LAST DEPLOYED: Fri Feb 14 16:04:15 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* Metrics Server *
***********************************************************************
Chart version: 3.12.2
App version: 0.7.2
Image tag: registry.k8s.io/metrics-server/metrics-server:v0.7.2
***********************************************************************
Release "metallb" does not exist. Installing it now.
NAME: metallb
LAST DEPLOYED: Fri Feb 14 16:04:16 2025
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
ipaddresspool.metallb.io/local created
l2advertisement.metallb.io/local created
namespace/cluster-diagnostics created
configmap/cluster-diagnostics-cm created
service/cluster-diagnostics-svc created
deployment.apps/cluster-diagnostics created
Creating cluster "solo-e2e-c2" ...
✓ Ensuring node image (kindest/node:v1.31.4) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-solo-e2e-c2"
You can now use your cluster with:

kubectl cluster-info --context kind-solo-e2e-c2

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
Release "metrics-server" does not exist. Installing it now.
NAME: metrics-server
LAST DEPLOYED: Fri Feb 14 16:05:07 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* Metrics Server *
***********************************************************************
Chart version: 3.12.2
App version: 0.7.2
Image tag: registry.k8s.io/metrics-server/metrics-server:v0.7.2
***********************************************************************
Release "metallb" does not exist. Installing it now.
NAME: metallb
LAST DEPLOYED: Fri Feb 14 16:05:08 2025
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
ipaddresspool.metallb.io/local created
l2advertisement.metallb.io/local created
namespace/cluster-diagnostics created
configmap/cluster-diagnostics-cm created
service/cluster-diagnostics-svc created
deployment.apps/cluster-diagnostics created

> @hashgraph/solo@0.34.0 build
> rm -Rf dist && tsc && node resources/post-build-script.js


> @hashgraph/solo@0.34.0 solo
> node --no-deprecation --no-warnings dist/solo.js init


******************************* Solo *********************************************
Version : 0.34.0
Kubernetes Context : kind-solo-e2e-c2
Kubernetes Cluster : kind-solo-e2e-c2
Current Command : init
**********************************************************************************
✔ Setup home directory and cache
✔ Check dependencies
✔ Check dependency: helm [OS: darwin, Release: 23.6.0, Arch: arm64]
✔ Setup chart manager [1s]
✔ Copy templates in '/Users/user/.solo/cache'


***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /Users/user/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
Switched to context "kind-solo-e2e-c1".

> @hashgraph/solo@0.34.0 solo
> node --no-deprecation --no-warnings dist/solo.js cluster setup -s solo-setup


******************************* Solo *********************************************
Version : 0.34.0
Kubernetes Context : kind-solo-e2e-c1
Kubernetes Cluster : kind-solo-e2e-c1
Current Command : cluster setup
**********************************************************************************
✔ Initialize
✔ Prepare chart values
✔ Install 'solo-cluster-setup' chart [2s]
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
metallb metallb-system 1 2025-02-14 16:04:16.785411 +0000 UTC deployed metallb-0.14.9 v0.14.9
metrics-server kube-system 1 2025-02-14 16:04:15.593138 +0000 UTC deployed metrics-server-3.12.2 0.7.2
solo-cluster-setup solo-setup 1 2025-02-14 16:05:54.334181 +0000 UTC deployed solo-cluster-setup-0.44.0 0.44.0
Switched to context "kind-solo-e2e-c2".

> @hashgraph/solo@0.34.0 solo
> node --no-deprecation --no-warnings dist/solo.js cluster setup -s solo-setup


******************************* Solo *********************************************
Version : 0.34.0
Kubernetes Context : kind-solo-e2e-c2
Kubernetes Cluster : kind-solo-e2e-c2
Current Command : cluster setup
**********************************************************************************
✔ Initialize
✔ Prepare chart values
✔ Install 'solo-cluster-setup' chart [2s]
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
metallb metallb-system 1 2025-02-14 16:05:08.226466 +0000 UTC deployed metallb-0.14.9 v0.14.9
metrics-server kube-system 1 2025-02-14 16:05:07.217358 +0000 UTC deployed metrics-server-3.12.2 0.7.2
solo-cluster-setup solo-setup 1 2025-02-14 16:05:58.114619 +0000 UTC deployed solo-cluster-setup-0.44.0 0.44.0
Switched to context "kind-solo-e2e-c1".
```
## Diagnostics
The `./diagnostics/cluster/deploy.sh` script deploys a `cluster-diagnostics` deployment (and its pod) with a service whose external IP is exposed. It is deployed to both clusters, runs Ubuntu, and has most common diagnostic tools installed. Once it is running, you can shell into the pod and run your own troubleshooting commands, for example to verify network connectivity or DNS resolution between the two clusters.

Calling
```bash
# from your Solo root directory run:
./test/e2e/dual-cluster/diagnostics/cluster/deploy.sh
```
Output:
```bash
namespace/cluster-diagnostics unchanged
configmap/cluster-diagnostics-cm unchanged
service/cluster-diagnostics-svc unchanged
deployment.apps/cluster-diagnostics unchanged
```
## Cleanup
Calling
```bash
# from your Solo root directory run:
kind delete clusters solo-e2e-c1 solo-e2e-c2
```
Output:
```bash
Deleted clusters: ["solo-e2e-c1" "solo-e2e-c2"]
```
