Conformance results for v1.25/kubeasz #2183

Merged: 1 commit, Sep 21, 2022
9 changes: 9 additions & 0 deletions v1.25/kubeasz/PRODUCT.yaml
@@ -0,0 +1,9 @@
vendor: kubeasz
name: kubeasz
version: 3.4.0
website_url: https://github.com/easzlab/kubeasz
repo_url: https://github.com/easzlab/kubeasz
documentation_url: https://github.com/easzlab/kubeasz/tree/master/docs
product_logo_url: https://gitee.com/easzlab/kubeasz/raw/master/pics/kubeasz.svg
type: installer
description: 'Deploy a Production Ready Kubernetes Cluster with ansible playbooks'
90 changes: 90 additions & 0 deletions v1.25/kubeasz/README.md
@@ -0,0 +1,90 @@
# Conformance tests for a kubeasz Kubernetes cluster

## Node Provisioning

Provision 3 nodes for your cluster (OS: Ubuntu 20.04):

- 1 master node (4 vCPUs, 16 GB RAM)
- 2 worker nodes (4 vCPUs, 16 GB RAM)

For a high-availability Kubernetes cluster, read [more](https://github.com/easzlab/kubeasz/blob/master/docs/setup/00-planning_and_overall_intro.md).
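
Before installing, you can confirm that each node runs Ubuntu 20.04 and matches the sizing above. This is a generic sanity check, not part of the kubeasz procedure itself:

```
lsb_release -ds   # expect an Ubuntu 20.04.x release string
nproc             # expect 4 cores
free -g           # expect roughly 16 GB of total memory
```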

## Install the cluster

(1) Download the kubeasz code, binaries, and offline images

```
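# fetch the ezdown script for the chosen release, then download the kubeasz code, binaries, and offline images (-D)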
export release=3.4.0
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown
./ezdown -D -m standard
```
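
By default ezdown places the downloaded code, binaries, and images under `/etc/kubeasz` (assumed here from the kubeasz docs), so a quick check that this step finished is:

```
ls /etc/kubeasz
```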

(2) Install an all-in-one cluster

```
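# -S starts the kubeasz control container; ezctl then creates a single-node (all-in-one) cluster inside it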
./ezdown -S
docker exec -it kubeasz ezctl start-aio
```
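
After `start-aio` completes, you can verify the single-node cluster from the same host (assuming kubeasz has configured `kubectl` there, as it normally does):

```
kubectl get nodes
kubectl get pods -A
```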

(3) Add two worker nodes

```
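# for each worker: enable passwordless SSH, create a python symlink for Ansible, then register the node with ezctl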
ssh-copy-id ${worker1_ip}
ssh ${worker1_ip} ln -s /usr/bin/python3 /usr/bin/python
docker exec -it kubeasz ezctl add-node default ${worker1_ip}
ssh-copy-id ${worker2_ip}
ssh ${worker2_ip} ln -s /usr/bin/python3 /usr/bin/python
docker exec -it kubeasz ezctl add-node default ${worker2_ip}
```
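
Once both workers have been added, a quick check that all three nodes have joined and are `Ready` (again assuming `kubectl` is configured on the master/deploy host):

```
kubectl get nodes -o wide
```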

## Run Conformance Test

The standard tool for running these tests is
[Sonobuoy](https://github.com/heptio/sonobuoy). Sonobuoy is
regularly built and kept up to date to execute against all
currently supported versions of Kubernetes.

Download a [binary release](https://github.com/heptio/sonobuoy/releases) of the CLI.

Deploy a Sonobuoy pod to your cluster with:

```
$ sonobuoy run --mode=certified-conformance
```

**NOTE:** You can run the command synchronously by adding the flag `--wait`, but be aware that running the Conformance tests can take an hour or more.
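
For example, a synchronous run looks like this (the command blocks until the tests finish):

```
$ sonobuoy run --mode=certified-conformance --wait
```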

View actively running pods:

```
$ sonobuoy status
```

To inspect the logs:

```
$ sonobuoy logs
```

Once `sonobuoy status` shows the run as `completed`, copy the output directory from the main Sonobuoy pod to a local directory:

```
$ outfile=$(sonobuoy retrieve)
```

This copies a single `.tar.gz` snapshot from the Sonobuoy pod into your local
`.` directory. Extract the contents into `./results` with:

```
mkdir ./results; tar xzf $outfile -C ./results
```

**NOTE:** The two files required for submission are located in the tarball under **plugins/e2e/results/{e2e.log,junit.xml}**.
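
To confirm both files are present after extraction, list that directory (path relative to `./results` as extracted above):

```
ls ./results/plugins/e2e/results/
```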

To clean up Kubernetes objects created by Sonobuoy, run:

```
sonobuoy delete
```