Merge pull request #917 from fluxcd/ingress-v1
Upgrade Ingress to networking.k8s.io/v1
stefanprodan authored Jun 1, 2021
2 parents ded658f + e5fdc7a commit e5b8afc
Showing 12 changed files with 131 additions and 99 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/e2e.yaml
@@ -28,6 +28,9 @@ jobs:
        uses: actions/checkout@v2
      - name: Setup Kubernetes
        uses: engineerd/[email protected]
+       with:
+         version: "v0.11.0"
+         image: kindest/node:v1.21.1@sha256:fae9a58f17f18f06aeac9772ca8b5ac680ebbed985e266f711d936e91d113bad
      - name: Build container image
        run: |
          docker build -t test/flagger:latest .
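The pinned `kindest/node:v1.21.1` image matters here: `networking.k8s.io/v1` Ingress is served from Kubernetes v1.19 onwards and the v1beta1 version is removed in v1.22, so the e2e cluster must be new enough to serve the GA API. A minimal discovery-client sketch for checking that a cluster serves the group/version (illustrative only — it assumes a kubeconfig at the default path and is not part of this commit):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (assumes ~/.kube/config points at the kind cluster).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether it serves the GA Ingress group/version.
	if _, err := dc.ServerResourcesForGroupVersion("networking.k8s.io/v1"); err != nil {
		fmt.Println("networking.k8s.io/v1 not served:", err)
		return
	}
	fmt.Println("networking.k8s.io/v1 is served; the NetworkingV1 client will work")
}
```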
18 changes: 11 additions & 7 deletions docs/gitbook/tutorials/nginx-progressive-delivery.md
@@ -6,7 +6,7 @@ This guide shows you how to use the NGINX ingress controller and Flagger to auto

## Prerequisites

-Flagger requires a Kubernetes cluster **v1.16** or newer and NGINX ingress **v0.41** or newer.
+Flagger requires a Kubernetes cluster **v1.19** or newer and NGINX ingress **v0.46** or newer.

Install the NGINX ingress controller with Helm v3:

@@ -59,7 +59,7 @@ helm upgrade -i flagger-loadtester flagger/loadtester \
Create an ingress definition (replace `app.example.com` with your own domain):

```yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
@@ -70,12 +70,16 @@ metadata:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
-    - host: app.example.com
+    - host: "app.example.com"
      http:
        paths:
-          - backend:
-              serviceName: podinfo
-              servicePort: 80
+          - pathType: Prefix
+            path: "/"
+            backend:
+              service:
+                name: podinfo
+                port:
+                  number: 80
```
Save the above resource as podinfo-ingress.yaml and then apply it:
@@ -101,7 +105,7 @@ spec:
    name: podinfo
  # ingress reference
  ingressRef:
-    apiVersion: networking.k8s.io/v1beta1
+    apiVersion: networking.k8s.io/v1
    kind: Ingress
    name: podinfo
  # HPA reference (optional)
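A note on how the NGINX tutorial's traffic shifting works under the hood: Flagger sets the controller's documented canary annotations (`nginx.ingress.kubernetes.io/canary` and `nginx.ingress.kubernetes.io/canary-weight`) on the generated `podinfo-canary` ingress, and the controller splits traffic accordingly. A small sketch of reading that weight back with the v1 types — the `weightOf` helper is illustrative, not Flagger code:

```go
package main

import (
	"fmt"
	"strconv"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const canaryWeightAn = "nginx.ingress.kubernetes.io/canary-weight"

// weightOf reads the canary weight honored by the NGINX controller,
// returning 0 when the annotation is absent or malformed.
func weightOf(ing *netv1.Ingress) int {
	w, err := strconv.Atoi(ing.Annotations[canaryWeightAn])
	if err != nil {
		return 0
	}
	return w
}

func main() {
	ing := &netv1.Ingress{ObjectMeta: metav1.ObjectMeta{
		Name: "podinfo-canary",
		Annotations: map[string]string{
			"nginx.ingress.kubernetes.io/canary": "true",
			canaryWeightAn:                       "10",
		},
	}}
	fmt.Println(weightOf(ing)) // 10
}
```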
35 changes: 22 additions & 13 deletions docs/gitbook/tutorials/skipper-progressive-delivery.md
@@ -6,7 +6,7 @@ This guide shows you how to use the [Skipper ingress controller](https://opensou

## Prerequisites

-Flagger requires a Kubernetes cluster **v1.16** or newer and Skipper ingress **0.11.40** or newer.
+Flagger requires a Kubernetes cluster **v1.19** or newer and Skipper ingress **v0.13** or newer.

Install Skipper ingress-controller using [upstream definition](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/#install-skipper-as-ingress-controller).

@@ -36,7 +36,9 @@ kustomize build https://github.com/fluxcd/flagger/kustomize/kubernetes | kubectl

## Bootstrap

-Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler \(HPA\), then creates a series of objects \(Kubernetes deployments, ClusterIP services and canary ingress\). These objects expose the application outside the cluster and drive the canary analysis and promotion.
+Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
+then creates a series of objects (Kubernetes deployments, ClusterIP services and canary ingress).
+These objects expose the application outside the cluster and drive the canary analysis and promotion.

Create a test namespace:

@@ -60,7 +62,7 @@ helm upgrade -i flagger-loadtester flagger/loadtester \
Create an ingress definition \(replace `app.example.com` with your own domain\):

```yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
@@ -71,12 +73,16 @@ metadata:
    kubernetes.io/ingress.class: "skipper"
spec:
  rules:
-    - host: app.example.com
+    - host: "app.example.com"
      http:
        paths:
-          - backend:
-              serviceName: podinfo
-              servicePort: 80
+          - pathType: Prefix
+            path: "/"
+            backend:
+              service:
+                name: podinfo
+                port:
+                  number: 80
```
Save the above resource as podinfo-ingress.yaml and then apply it:
@@ -85,7 +91,7 @@ Save the above resource as podinfo-ingress.yaml and then apply it:
kubectl apply -f ./podinfo-ingress.yaml
```

-Create a canary custom resource \(replace `app.example.com` with your own domain\):
+Create a canary custom resource (replace `app.example.com` with your own domain):

```yaml
apiVersion: flagger.app/v1beta1
@@ -102,7 +108,7 @@ spec:
    name: podinfo
  # ingress reference
  ingressRef:
-    apiVersion: networking.k8s.io/v1beta1
+    apiVersion: networking.k8s.io/v1
    kind: Ingress
    name: podinfo
  # HPA reference (optional)
@@ -190,7 +196,9 @@ ingress.networking.k8s.io/podinfo-canary

## Automated canary promotion

-Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams.
+Flagger implements a control loop that gradually shifts traffic to the canary while measuring
+key performance indicators like HTTP requests success rate, requests average duration and pod health.
+Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams.
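The loop is easiest to picture as pseudocode. A minimal Go sketch of weight stepping with a failed-check budget — the names (`canaryHealthy`, the fixed weights) are invented for illustration and are not Flagger's actual implementation:

```go
package main

import "fmt"

// canaryHealthy stands in for Flagger's metric analysis; the real controller
// queries a metrics provider for request success rate and average duration.
func canaryHealthy(weight int) bool {
	return weight <= 30 // pretend KPIs degrade once the canary takes over 30% of traffic
}

func main() {
	const (
		stepWeight = 10 // traffic shifted per analysis interval
		maxWeight  = 50 // weight at which the canary gets promoted
		threshold  = 2  // failed checks tolerated before rollback
	)
	failures := 0
	for weight := stepWeight; weight <= maxWeight; weight += stepWeight {
		fmt.Printf("canary weight %d%%\n", weight)
		if !canaryHealthy(weight) {
			failures++
			fmt.Printf("metric check failed (%d/%d)\n", failures, threshold)
			if failures >= threshold {
				fmt.Println("abort: route all traffic back to primary, scale canary to zero")
				return
			}
			weight -= stepWeight // hold the current weight instead of advancing
		}
	}
	fmt.Println("promote: copy canary spec over primary, reset weights")
}
```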

![Flagger Canary Stages](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-canary-steps.png)

@@ -271,7 +279,8 @@ Generate latency:
watch -n 1 curl http://app.example.com/delay/1
```

-When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed.
+When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary,
+the canary is scaled to zero and the rollout is marked as failed.

```text
kubectl -n flagger-system logs deploy/flagger -f | jq .msg
@@ -333,7 +342,8 @@ Edit the canary analysis and add the latency check:
      interval: 1m
```
-The threshold is set to 500ms so if the average request duration in the last minute goes over half a second then the analysis will fail and the canary will not be promoted.
+The threshold is set to 500ms so if the average request duration in the last minute goes over half a second
+then the analysis will fail and the canary will not be promoted.
Trigger a canary deployment by updating the container image:
@@ -367,4 +377,3 @@ Canary failed! Scaling down podinfo.test
```

If you have alerting configured, Flagger will send a notification with the reason why the canary failed.

22 changes: 11 additions & 11 deletions pkg/router/ingress.go
@@ -24,7 +24,7 @@ import (

	"github.com/google/go-cmp/cmp"
	"go.uber.org/zap"
-	"k8s.io/api/networking/v1beta1"
+	netv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
@@ -48,7 +48,7 @@ func (i *IngressRouter) Reconcile(canary *flaggerv1.Canary) error {
	canaryName := fmt.Sprintf("%s-canary", apexName)
	canaryIngressName := fmt.Sprintf("%s-canary", canary.Spec.IngressRef.Name)

-	ingress, err := i.kubeClient.NetworkingV1beta1().Ingresses(canary.Namespace).Get(context.TODO(), canary.Spec.IngressRef.Name, metav1.GetOptions{})
+	ingress, err := i.kubeClient.NetworkingV1().Ingresses(canary.Namespace).Get(context.TODO(), canary.Spec.IngressRef.Name, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("ingress %s.%s get query error: %w", canary.Spec.IngressRef.Name, canary.Namespace, err)
	}
@@ -59,8 +59,8 @@ func (i *IngressRouter) Reconcile(canary *flaggerv1.Canary) error {
	backendExists := false
	for k, v := range ingressClone.Spec.Rules {
		for x, y := range v.HTTP.Paths {
-			if y.Backend.ServiceName == apexName {
-				ingressClone.Spec.Rules[k].HTTP.Paths[x].Backend.ServiceName = canaryName
+			if y.Backend.Service != nil && y.Backend.Service.Name == apexName {
+				ingressClone.Spec.Rules[k].HTTP.Paths[x].Backend.Service.Name = canaryName
				backendExists = true
			}
		}
@@ -70,10 +70,10 @@ func (i *IngressRouter) Reconcile(canary *flaggerv1.Canary) error {
		return fmt.Errorf("backend %s not found in ingress %s", apexName, canary.Spec.IngressRef.Name)
	}

-	canaryIngress, err := i.kubeClient.NetworkingV1beta1().Ingresses(canary.Namespace).Get(context.TODO(), canaryIngressName, metav1.GetOptions{})
+	canaryIngress, err := i.kubeClient.NetworkingV1().Ingresses(canary.Namespace).Get(context.TODO(), canaryIngressName, metav1.GetOptions{})

	if errors.IsNotFound(err) {
-		ing := &v1beta1.Ingress{
+		ing := &netv1.Ingress{
			ObjectMeta: metav1.ObjectMeta{
				Name:      canaryIngressName,
				Namespace: canary.Namespace,
@@ -90,7 +90,7 @@ func (i *IngressRouter) Reconcile(canary *flaggerv1.Canary) error {
			Spec: ingressClone.Spec,
		}

-		_, err := i.kubeClient.NetworkingV1beta1().Ingresses(canary.Namespace).Create(context.TODO(), ing, metav1.CreateOptions{})
+		_, err := i.kubeClient.NetworkingV1().Ingresses(canary.Namespace).Create(context.TODO(), ing, metav1.CreateOptions{})
		if err != nil {
			return fmt.Errorf("ingress %s.%s create error: %w", ing.Name, ing.Namespace, err)
		}
@@ -106,7 +106,7 @@ func (i *IngressRouter) Reconcile(canary *flaggerv1.Canary) error {
	iClone := canaryIngress.DeepCopy()
	iClone.Spec = ingressClone.Spec

-	_, err := i.kubeClient.NetworkingV1beta1().Ingresses(canary.Namespace).Update(context.TODO(), iClone, metav1.UpdateOptions{})
+	_, err := i.kubeClient.NetworkingV1().Ingresses(canary.Namespace).Update(context.TODO(), iClone, metav1.UpdateOptions{})
	if err != nil {
		return fmt.Errorf("ingress %s.%s update error: %w", canaryIngressName, iClone.Namespace, err)
	}
Expand All @@ -125,7 +125,7 @@ func (i *IngressRouter) GetRoutes(canary *flaggerv1.Canary) (
	err error,
) {
	canaryIngressName := fmt.Sprintf("%s-canary", canary.Spec.IngressRef.Name)
-	canaryIngress, err := i.kubeClient.NetworkingV1beta1().Ingresses(canary.Namespace).Get(context.TODO(), canaryIngressName, metav1.GetOptions{})
+	canaryIngress, err := i.kubeClient.NetworkingV1().Ingresses(canary.Namespace).Get(context.TODO(), canaryIngressName, metav1.GetOptions{})
	if err != nil {
		err = fmt.Errorf("ingress %s.%s get query error: %w", canaryIngressName, canary.Namespace, err)
		return
@@ -166,7 +166,7 @@ func (i *IngressRouter) SetRoutes(
	_ bool,
) error {
	canaryIngressName := fmt.Sprintf("%s-canary", canary.Spec.IngressRef.Name)
-	canaryIngress, err := i.kubeClient.NetworkingV1beta1().Ingresses(canary.Namespace).Get(context.TODO(), canaryIngressName, metav1.GetOptions{})
+	canaryIngress, err := i.kubeClient.NetworkingV1().Ingresses(canary.Namespace).Get(context.TODO(), canaryIngressName, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("ingress %s.%s get query error: %w", canaryIngressName, canary.Namespace, err)
	}
@@ -201,7 +201,7 @@ func (i *IngressRouter) SetRoutes(
		iClone.Annotations = i.makeAnnotations(iClone.Annotations)
	}

-	_, err = i.kubeClient.NetworkingV1beta1().Ingresses(canary.Namespace).Update(context.TODO(), iClone, metav1.UpdateOptions{})
+	_, err = i.kubeClient.NetworkingV1().Ingresses(canary.Namespace).Update(context.TODO(), iClone, metav1.UpdateOptions{})
	if err != nil {
		return fmt.Errorf("ingress %s.%s update error %v", iClone.Name, iClone.Namespace, err)
	}
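Every call site above follows the same recipe: fetch through `NetworkingV1()`, guard `Backend.Service` against nil (a v1 backend may be a `Resource` reference instead of a service), and repoint the service name. A self-contained sketch of that pattern against the client-go fake clientset — `repointBackend` is an illustrative helper, not the actual Flagger code:

```go
package main

import (
	"context"
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// repointBackend switches every path backed by the apex service to the
// canary service, mirroring the loop in IngressRouter.Reconcile.
func repointBackend(ctx context.Context, c kubernetes.Interface, ns, name, apex, canary string) error {
	ing, err := c.NetworkingV1().Ingresses(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	clone := ing.DeepCopy()
	found := false
	for i, rule := range clone.Spec.Rules {
		for j, path := range rule.HTTP.Paths {
			// Service is nil when the backend is a Resource reference, so guard it.
			if path.Backend.Service != nil && path.Backend.Service.Name == apex {
				clone.Spec.Rules[i].HTTP.Paths[j].Backend.Service.Name = canary
				found = true
			}
		}
	}
	if !found {
		return fmt.Errorf("backend %s not found in ingress %s", apex, name)
	}
	_, err = c.NetworkingV1().Ingresses(ns).Update(ctx, clone, metav1.UpdateOptions{})
	return err
}

func main() {
	pathType := netv1.PathTypePrefix
	ing := &netv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "podinfo", Namespace: "test"},
		Spec: netv1.IngressSpec{Rules: []netv1.IngressRule{{
			Host: "app.example.com",
			IngressRuleValue: netv1.IngressRuleValue{HTTP: &netv1.HTTPIngressRuleValue{
				Paths: []netv1.HTTPIngressPath{{
					Path:     "/",
					PathType: &pathType,
					Backend: netv1.IngressBackend{Service: &netv1.IngressServiceBackend{
						Name: "podinfo",
						Port: netv1.ServiceBackendPort{Number: 80},
					}},
				}},
			}},
		}}},
	}
	client := fake.NewSimpleClientset(ing)
	if err := repointBackend(context.TODO(), client, "test", "podinfo", "podinfo", "podinfo-canary"); err != nil {
		panic(err)
	}
	got, _ := client.NetworkingV1().Ingresses("test").Get(context.TODO(), "podinfo", metav1.GetOptions{})
	fmt.Println(got.Spec.Rules[0].HTTP.Paths[0].Backend.Service.Name) // podinfo-canary
}
```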
8 changes: 4 additions & 4 deletions pkg/router/ingress_test.go
@@ -45,7 +45,7 @@ func TestIngressRouter_Reconcile(t *testing.T) {
	canaryWeightAn := "custom.ingress.kubernetes.io/canary-weight"

	canaryName := fmt.Sprintf("%s-canary", mocks.ingressCanary.Spec.IngressRef.Name)
-	inCanary, err := router.kubeClient.NetworkingV1beta1().Ingresses("default").Get(context.TODO(), canaryName, metav1.GetOptions{})
+	inCanary, err := router.kubeClient.NetworkingV1().Ingresses("default").Get(context.TODO(), canaryName, metav1.GetOptions{})
	require.NoError(t, err)

	// test initialisation
@@ -78,7 +78,7 @@ func TestIngressRouter_GetSetRoutes(t *testing.T) {
	canaryWeightAn := "prefix1.nginx.ingress.kubernetes.io/canary-weight"

	canaryName := fmt.Sprintf("%s-canary", mocks.ingressCanary.Spec.IngressRef.Name)
-	inCanary, err := router.kubeClient.NetworkingV1beta1().Ingresses("default").Get(context.TODO(), canaryName, metav1.GetOptions{})
+	inCanary, err := router.kubeClient.NetworkingV1().Ingresses("default").Get(context.TODO(), canaryName, metav1.GetOptions{})
	require.NoError(t, err)

	// test rollout
@@ -92,7 +92,7 @@ func TestIngressRouter_GetSetRoutes(t *testing.T) {
	err = router.SetRoutes(mocks.ingressCanary, p, c, m)
	require.NoError(t, err)

-	inCanary, err = router.kubeClient.NetworkingV1beta1().Ingresses("default").Get(context.TODO(), canaryName, metav1.GetOptions{})
+	inCanary, err = router.kubeClient.NetworkingV1().Ingresses("default").Get(context.TODO(), canaryName, metav1.GetOptions{})
	require.NoError(t, err)

	// test promotion
@@ -175,7 +175,7 @@ func TestIngressRouter_ABTest(t *testing.T) {
	canaryAn := router.GetAnnotationWithPrefix("canary")

	canaryName := fmt.Sprintf("%s-canary", table.makeCanary().Spec.IngressRef.Name)
-	inCanary, err := router.kubeClient.NetworkingV1beta1().Ingresses("default").Get(context.TODO(), canaryName, metav1.GetOptions{})
+	inCanary, err := router.kubeClient.NetworkingV1().Ingresses("default").Get(context.TODO(), canaryName, metav1.GetOptions{})
	require.NoError(t, err)

	// test initialisation
30 changes: 17 additions & 13 deletions pkg/router/router_test.go
@@ -20,7 +20,7 @@ import (
	"go.uber.org/zap"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
-	"k8s.io/api/networking/v1beta1"
+	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
@@ -415,7 +415,7 @@ func newTestCanaryIngress() *flaggerv1.Canary {
		},
		IngressRef: &flaggerv1.CrossNamespaceObjectReference{
			Name:       "podinfo",
-			APIVersion: "extensions/v1beta1",
+			APIVersion: "networking.k8s.io/v1",
			Kind:       "Ingress",
		},
		Service: flaggerv1.CanaryService{
@@ -437,28 +437,32 @@ func newTestCanaryIngress() *flaggerv1.Canary {
	return cd
}

-func newTestIngress() *v1beta1.Ingress {
-	return &v1beta1.Ingress{
-		TypeMeta: metav1.TypeMeta{APIVersion: v1beta1.SchemeGroupVersion.String()},
+func newTestIngress() *netv1.Ingress {
+	return &netv1.Ingress{
+		TypeMeta: metav1.TypeMeta{APIVersion: netv1.SchemeGroupVersion.String()},
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "default",
			Name:      "podinfo",
			Annotations: map[string]string{
				"kubernetes.io/ingress.class": "nginx",
			},
		},
-		Spec: v1beta1.IngressSpec{
-			Rules: []v1beta1.IngressRule{
+		Spec: netv1.IngressSpec{
+			Rules: []netv1.IngressRule{
				{
					Host: "app.example.com",
-					IngressRuleValue: v1beta1.IngressRuleValue{
-						HTTP: &v1beta1.HTTPIngressRuleValue{
-							Paths: []v1beta1.HTTPIngressPath{
+					IngressRuleValue: netv1.IngressRuleValue{
+						HTTP: &netv1.HTTPIngressRuleValue{
+							Paths: []netv1.HTTPIngressPath{
								{
									Path: "/",
-									Backend: v1beta1.IngressBackend{
-										ServiceName: "podinfo",
-										ServicePort: intstr.FromInt(9898),
+									Backend: netv1.IngressBackend{
+										Service: &netv1.IngressServiceBackend{
+											Name: "podinfo",
+											Port: netv1.ServiceBackendPort{
+												Number: 9898,
+											},
+										},
									},
								},
							},