diff --git a/.travis/build.sh b/.travis/build.sh
index 8851adbd..f8df35b7 100755
--- a/.travis/build.sh
+++ b/.travis/build.sh
@@ -26,6 +26,7 @@ mvn spotbugs:check
 # Run testsuite with java 8 only
 if [ ${JAVA_MAJOR_VERSION} -eq 1 ] ; then
+  docker pull oryd/hydra:v1.0.0
   mvn test-compile spotbugs:check -e -V -B -f testsuite
   set +e
   mvn -e -V -B install -f testsuite
diff --git a/HACKING.md b/HACKING.md
new file mode 100644
index 00000000..d6d1b326
--- /dev/null
+++ b/HACKING.md
@@ -0,0 +1,501 @@
+Building and deploying Strimzi Kafka OAuth
+==========================================
+
+You only need Java 8 and Maven to build this project.
+
+However, you may want to rebuild [Strimzi Kafka Operator](https://github.com/strimzi/strimzi-kafka-operator) project components and images to try your changes on Kubernetes.
+Setting up a build environment for that is not trivial, so we have prepared a Docker image with all the necessary build tools.
+
+We call it the `Strimzi Dev CLI Image`. You can find instructions for how to use it [here](https://github.com/mstruk/strimzi-kafka-operator/blob/hacking/HACKING-cli-image.md).
+
+In order to build `Strimzi Kafka Operator` images you need a running Docker daemon.
+If you also want to try them out, deploying the Kafka Cluster Operator, and running a test cluster on Kubernetes, you need access to a Kubernetes API server.
+
+There are several options for running Kubernetes locally: [Minikube](https://github.com/kubernetes/minikube), [Minishift](https://github.com/minishift/minishift), [Kubernetes Kind](https://github.com/kubernetes-sigs/kind), and possibly others.
+
+You can read more about quickly setting up the local Kubernetes cluster of your choice in our [Quickstarts](https://strimzi.io/quickstarts/).
+
+However, if you're starting completely from scratch on Ubuntu 18.04 LTS or MacOS, you may prefer a thorough step-by-step procedure for installing Docker and Kubernetes Kind, and for using a Strimzi Dev CLI shell session to run a Strimzi-managed Kafka cluster with the latest source build of Strimzi Kafka OAuth.
+
+In that case, follow the instructions that apply to your environment in the following chapter.
+
+
+
+- [Preparing the host environment](#preparing-the-host-environment)
+  - [Ubuntu 18.04 LTS](#ubuntu-1804-lts)
+  - [Docker Desktop for Mac](#docker-desktop-for-mac)
+- [Starting up the environment](#starting-up-the-environment)
+  - [Deploying and validating Docker Registry](#deploying-and-validating-docker-registry)
+  - [Creating and validating the Kind Kubernetes cluster](#creating-and-validating-the-kind-kubernetes-cluster)
+  - [Starting and validating Strimzi Dev CLI](#starting-and-validating-strimzi-dev-cli)
+- [Building Strimzi Kafka OAuth](#building-strimzi-kafka-oauth)
+- [Deploying development builds with Strimzi Kafka Operator](#deploying-development-builds-with-strimzi-kafka-operator)
+  - [Building Strimzi Kafka images with SNAPSHOT version of Strimzi Kafka OAuth](#building-strimzi-kafka-images-with-snapshot-version-of-strimzi-kafka-oauth)
+  - [Building a custom Strimzi Kafka 'override' image based on existing one](#building-a-custom-strimzi-kafka-override-image-based-on-existing-one)
+  - [Configuring Kubernetes permissions](#configuring-kubernetes-permissions)
+  - [Deploying Kafka operator and Kafka cluster](#deploying-kafka-operator-and-kafka-cluster)
+  - [Deploying a Kafka cluster configured with OAuth 2 authentication](#deploying-a-kafka-cluster-configured-with-oauth-2-authentication)
+  - [Exploring the Kafka container](#exploring-the-kafka-container)
+- [Troubleshooting](#troubleshooting)
+
+
+
+Preparing the host environment
+------------------------------
+
+### Ubuntu 18.04 LTS
+
+#### Installing and configuring Docker daemon
+
+Run the following commands to install the Docker package from the Ubuntu repository:
+
+    sudo apt-get update
+    sudo apt-get remove docker docker-engine docker.io
+    sudo apt install docker.io
+
+
+Run the following to configure the Docker daemon to trust a local Docker Registry listening on port 5000 (note the use of `sudo tee` - a plain `sudo cat << EOF > file` would not work, because the redirection itself would run without root privileges):
+
+```
+export REGISTRY_IP=$(ifconfig docker0 | grep 'inet ' | awk '{print $2}') && echo $REGISTRY_IP
+
+cat << EOF | sudo tee /etc/docker/daemon.json
+{
+  "debug": true,
+  "experimental": false,
+  "insecure-registries": [
+    "${REGISTRY_IP}:5000"
+  ]
+}
+EOF
+```
+
+Start or restart the daemon:
+
+    sudo systemctl restart docker
+
+Enable the daemon so it's automatically started when the system boots up:
+
+    sudo systemctl enable docker
+
+Fix the 'Permission denied' issue when running the `docker` client (log out and back in for the group change to take effect):
+
+    sudo groupadd docker
+    sudo usermod -aG docker $USER
+
+Test that everything works:
+
+    docker ps
+
+
+#### Installing `kubectl`
+
+Ubuntu supports `snaps`, which are the easiest way to install `kubectl`:
+
+    snap install kubectl --classic
+    kubectl version
+
+
+#### Installing Kubernetes Kind
+
+[Kubernetes Kind](https://github.com/kubernetes-sigs/kind) is a Kubernetes implementation that runs on Docker. That makes it simple to install, and convenient to use.
+
+You can install it by running the following:
+
+    curl -Lo ./kind "https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64"
+    chmod +x ./kind
+    sudo mv ./kind /usr/local/bin/kind
+
+
+### Docker Desktop for Mac
+
+On MacOS the most convenient option for Docker is to use [Docker Desktop](https://www.docker.com/products/docker-desktop).
+
+
+#### Configuring Docker daemon
+
+Using Docker Desktop for Mac, first make sure to assign enough memory.
+Double-click on the Docker icon in the toolbar and select 'Preferences', then 'Resources'.
+
+The actual resource requirements depend on what exactly you'll be doing, but the following configuration should be enough if you want to do small cluster deployments with Strimzi.
+
+Under Memory select 5 GB. For swap size select at least 2 GB. For CPUs select at least 2 (that's quite important).
+
+Click the `Apply & Restart` button.
+
+We'll set up Kind to use a local Docker Registry deployed as a Docker container.
+In order to allow non-TLS connectivity between the Docker daemon and the Docker Registry we need to configure the Docker daemon.
+
+Open a Terminal and type:
+
+    export REGISTRY_IP=$(ifconfig en0 | grep 'inet ' | awk '{print $2}') \
+      && echo $REGISTRY_IP
+
+Move on to the 'Preferences' / 'Docker Engine' tab.
+
+There is a Docker Engine configuration file. Its current content typically looks something like:
+
+```
+{
+  "debug": true,
+  "experimental": false
+}
+```
+
+Add another array attribute called `insecure-registries` that contains your $REGISTRY_IP and port.
+For example (replace `192.168.1.10` with your REGISTRY_IP):
+```
+{
+  "debug": true,
+  "experimental": false,
+  "insecure-registries": [
+    "192.168.1.10:5000"
+  ]
+}
+```
+
+Click the `Apply & Restart` button again.
+
+
+#### Installing `kubectl`
+
+The simplest way to install `kubectl` on MacOS is to use Homebrew:
+
+    brew install kubectl
+    kubectl version
+
+
+#### Installing Kubernetes Kind
+
+[Kubernetes Kind](https://github.com/kubernetes-sigs/kind) is a Kubernetes implementation that runs on Docker. That makes it simple to install, and convenient to use.
+
+On MacOS the most convenient way to install it is to use Homebrew:
+
+    brew install kind
+
+
+Starting up the environment
+---------------------------
+
+The rest of what we do is platform independent. All we need is a working `docker`, `kind`, and `kubectl`.
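You can sanity-check that all three tools are on the PATH with a quick loop. This is a throwaway sketch; the `check_tools` helper name is our own invention:

```shell
# Print OK/MISSING for each required command-line tool.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" > /dev/null; then
      echo "$tool: OK"
    else
      echo "$tool: MISSING"
    fi
  done
}

# Usage:
# check_tools docker kind kubectl
```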
+
+Every time you start a new Terminal shell, make sure to set the following ENV variables:
+
+```
+# Use the REGISTRY_IP value determined earlier
+export REGISTRY_IP=
+export KIND_CLUSTER_NAME=kind
+export REGISTRY_NAME=docker-registry
+export REGISTRY_PORT=5000
+```
+
+### Deploying and validating Docker Registry
+
+Execute the following:
+
+    docker run -d --restart=always -p "$REGISTRY_PORT:$REGISTRY_PORT" --name "$REGISTRY_NAME" registry:2
+
+The registry should be up and running within a few seconds.
+
+Let's make sure that we can push images to the registry using the $REGISTRY_IP:
+
+```
+docker pull gcr.io/google-samples/hello-app:1.0
+docker tag gcr.io/google-samples/hello-app:1.0 $REGISTRY_IP:$REGISTRY_PORT/hello-app:1.0
+docker push $REGISTRY_IP:$REGISTRY_PORT/hello-app:1.0
+```
+
+### Creating and validating the Kind Kubernetes cluster
+
+When starting Kind we need to pass some extra configuration to allow the Kubernetes instance to connect to the insecure Docker Registry from the previous step.
+
+```
+cat << EOF | kind create cluster --name "${KIND_CLUSTER_NAME}" --config=-
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+containerdConfigPatches:
+- |-
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."$REGISTRY_IP:$REGISTRY_PORT"]
+    endpoint = ["http://$REGISTRY_IP:$REGISTRY_PORT"]
+EOF
+```
+
+Note how we use `http` in the `endpoint` value, which does the trick.
+
+Let's make sure we can deploy a Kubernetes Pod using an image from the local Docker Registry:
+
+    docker tag gcr.io/google-samples/hello-app:1.0 $REGISTRY_IP:$REGISTRY_PORT/hello-app:1.0
+    kubectl create deployment hello-server --image=$REGISTRY_IP:$REGISTRY_PORT/hello-app:1.0
+    kubectl get pod
+
+By repeating the last command we should, after a few seconds, see the status of the `hello-server-*` pod turn to `Running`.
+If there is an error status, see the [Troubleshooting](#troubleshooting) chapter.
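Rather than re-running `kubectl get pod` by hand, the wait can be scripted with `kubectl wait`. A minimal sketch - the `wait_for_ready` helper name is our own, and it relies on `kubectl create deployment` labeling the pod with `app=hello-server`:

```shell
# Hypothetical helper: block until pods matching a label selector are Ready.
wait_for_ready() {
  selector=$1
  timeout=${2:-60}
  kubectl wait --for=condition=Ready pod -l "$selector" --timeout="${timeout}s"
}

# Example usage (after the 'kubectl create deployment hello-server ...' step):
# wait_for_ready app=hello-server 120
```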
+
+You can now remove the deployment:
+
+    kubectl delete deployment hello-server
+
+One important thing to do before deploying Strimzi Kafka Operator on Kind is to give the `strimzi-cluster-operator` service account 'cluster-admin' permissions:
+
+    kubectl create clusterrolebinding strimzi-cluster-operator-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:strimzi-cluster-operator
+
+### Starting and validating Strimzi Dev CLI
+
+In a new Terminal shell execute the following:
+
+```
+# Get internal configuration for access from within the container:
+kind get kubeconfig --internal > ~/.kube/internal-kubeconfig
+
+# Make sure to use the latest version of the image
+docker pull quay.io/mstruk/strimzi-dev-cli
+
+# Set DEV_DIR to a directory where you have your cloned git repositories.
+# You'll be able to access this directory from within the Strimzi Dev CLI container.
+export DEV_DIR=$HOME/devel
+
+# Now run the container
+docker run -ti --name strimzi-dev-cli -v /var/run/docker.sock:/var/run/docker.sock -v $HOME/.kube:/root/.kube -v $DEV_DIR:/root/devel -v $HOME/.m2:/root/.m2:cached quay.io/mstruk/strimzi-dev-cli /bin/sh
+```
+
+Note: If you exit the container or it gets shut down, you can reattach and continue your interactive session as long as the container hasn't been manually deleted:
+
+    docker start strimzi-dev-cli
+    docker attach strimzi-dev-cli
+
+Having started the interactive session, you are now in the development environment with all the necessary tools: `docker`, `kind`, `kubectl`, `git`, `mvn`, and everything else you need to build Strimzi Kafka Operator components.
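If you find yourself repeating the start/attach pair often, you could wrap it in a small shell function. This is just a convenience sketch; the `dev_cli` name is our invention:

```shell
# Hypothetical helper: attach to the strimzi-dev-cli container, starting it
# first if it is not currently running.
dev_cli() {
  if ! docker ps --format '{{.Names}}' | grep -q '^strimzi-dev-cli$'; then
    docker start strimzi-dev-cli || return 1
  fi
  docker attach strimzi-dev-cli
}
```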
+
+Let's make sure that `docker` and `kubectl` work:
+
+```
+export KUBECONFIG=~/.kube/internal-kubeconfig
+kubectl get ns
+docker ps
+```
+
+Also, let's make sure that we can push to the local Docker Registry from Strimzi Dev CLI:
+
+```
+# Set REGISTRY_IP to the same value it has in the other Terminal session
+export REGISTRY_IP=
+
+export REGISTRY_PORT=5000
+
+# Test docker push to the local repository
+docker tag gcr.io/google-samples/hello-app:1.0 $REGISTRY_IP:$REGISTRY_PORT/hello-app:1.0
+docker push $REGISTRY_IP:$REGISTRY_PORT/hello-app:1.0
+
+```
+
+Building Strimzi Kafka OAuth
+----------------------------
+
+If you have not yet cloned the source repository, it's time to do it now.
+It's best to fork the project and clone your fork, but you can also just clone the upstream repository.
+
+```
+cd /root/devel
+git clone https://github.com/strimzi/strimzi-kafka-oauth.git
+cd strimzi-kafka-oauth
+
+# Build it
+mvn clean spotbugs:check install
+
+# Make sure it's PR ready
+# Sorry, unfortunately the testsuite doesn't seem to be working inside strimzi-dev-cli
+# .travis/build.sh
+```
+
+Deploying development builds with Strimzi Kafka Operator
+--------------------------------------------------------
+
+After successfully building the Strimzi Kafka OAuth artifacts you can include them in Strimzi Kafka images by following either of these two approaches:
+
+* Rebuild Strimzi Kafka Operator project images from source, referring to SNAPSHOT (non-released) Strimzi Kafka OAuth artifacts
+* Build custom Strimzi component images based on existing ones
+
+
+### Building Strimzi Kafka images with SNAPSHOT version of Strimzi Kafka OAuth
+
+Let's clone the upstream repository:
+
+```
+cd /root/devel
+git clone https://github.com/strimzi/strimzi-kafka-operator.git
+cd strimzi-kafka-operator
+```
+
+We have to update the oauth library dependency version:
+
+    sed -Ei 's#<strimzi-oauth.version>[0-9a-zA-Z.-]+</strimzi-oauth.version>#<strimzi-oauth.version>1.0.0-SNAPSHOT</strimzi-oauth.version>#g' \
+      pom.xml \
+      docker-images/kafka/kafka-thirdparty-libs/2.3.x/pom.xml \
+      docker-images/kafka/kafka-thirdparty-libs/2.4.x/pom.xml
+
+This makes sure the latest strimzi-kafka-oauth library that we built previously is included in the Kafka images that we'll build next.
+We can check the change:
+
+    git diff
+
+We're ready to build a SNAPSHOT version of strimzi-kafka-operator:
+
+    MVN_ARGS=-DskipTests make clean docker_build
+
+Build Strimzi Docker images containing Kafka with Strimzi OAuth support:
+
+    export DOCKER_REG=$REGISTRY_IP:$REGISTRY_PORT
+    DOCKER_REGISTRY=$DOCKER_REG DOCKER_ORG=strimzi make docker_push
+
+If everything went right, we should have the built images tagged for our local Docker Registry:
+
+    docker images | grep $REGISTRY_IP:$REGISTRY_PORT
+
+Note that if you make changes to Strimzi Kafka OAuth and have to rebuild the images with them, you can just do the following instead of running the whole `docker_build` again:
+
+    make -C docker-images clean build
+    DOCKER_REGISTRY=$DOCKER_REG DOCKER_ORG=strimzi make docker_push
+
+Let's make sure the SNAPSHOT Strimzi OAuth libraries are included:
+
+    docker run --rm -ti $DOCKER_REG/strimzi/kafka:latest-kafka-2.4.0 /bin/sh -c 'ls -la /opt/kafka/libs/kafka-oauth*'
+
+This executes an `ls` command inside a new Kafka container, which is removed afterwards.
+The deployed version should be 1.0.0-SNAPSHOT.
+
+
+### Building a custom Strimzi Kafka 'override' image based on existing one
+
+Instead of rebuilding the whole Strimzi Kafka Operator project to produce the initial Kafka images, we can simply adjust an existing image so that our newly built libraries are used instead of the ones already present in the image.
+
+We can do that with a `docker build` command and a custom Dockerfile. This is very convenient for quick iterative development: any step you can shorten can cumulatively save you a lot of time, and building the whole Strimzi Kafka Operator project, for example, takes quite some time.
+
+You can follow the instructions in the previous chapter to build the initial local version of the Strimzi Kafka Operator and Strimzi Kafka images. The 'override' images can then be based on the ones you built from source.
+
+Alternatively, you can avoid cloning and building the Strimzi Kafka Operator project altogether by basing the 'override' image on an existing, publicly available, Strimzi Kafka image.
+
+In `examples/docker/strimzi-kafka-image` there is a build project that takes the latest strimzi/kafka image and adds another layer to it, copying the latest SNAPSHOT kafka-oauth libraries into the image and prepending the directory containing them to the CLASSPATH, thus making sure they override the previously packaged versions and their dependencies.
+
+See [README.md](examples/docker/strimzi-kafka-image/README.md) for instructions on how to build and use the 'override' Kafka image.
+
+
+### Configuring Kubernetes permissions
+
+Make sure to give the `strimzi-cluster-operator` service account the necessary permissions.
+How to achieve that depends on the Kubernetes implementation you're using.
+
+Some permissions issues may be due to a mismatch between the `namespace` values in the `install/cluster-operator/*RoleBinding*` files and the namespace used when deploying the Kafka operator.
+You can address the namespace mismatch by editing the `*RoleBinding*` files, by deploying into a different namespace using `kubectl apply -n NAMESPACE ...`, or both.
+
+For example, on `Minikube` and `Kind` the simplest approach is to change the namespace to `default` and keep deploying to the `default` namespace:
+
+    sed -Ei -e 's/namespace: .*/namespace: default/' install/cluster-operator/*RoleBinding*.yaml
+
+Or you can grant sweeping permissions to the `strimzi-cluster-operator` service account:
+
+    kubectl create clusterrolebinding strimzi-cluster-operator-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:strimzi-cluster-operator
+
+Using `Minishift` you can run:
+
+    oc login -u system:admin
+    oc adm policy add-cluster-role-to-user cluster-admin developer
+    oc login -u developer
+
+### Deploying Kafka operator and Kafka cluster
+
+You can deploy Strimzi Cluster Operator the usual way:
+
+    kubectl apply -f install/cluster-operator
+
+Note: If you're running Kubernetes within a VM you need at least 2 CPUs.
+
+Using `kubectl get pod` you should see the status of the strimzi-cluster-operator pod become `Running` after a few seconds.
+
+If not, you can see what's going on by running:
+
+    kubectl describe pod $(kubectl get pod | grep strimzi-cluster-operator | awk '{print $1}')
+
+
+You can then deploy an example Kafka cluster:
+
+    kubectl apply -f examples/kafka/kafka-ephemeral-single.yaml
+
+Make sure the `kafka-oauth-*` libraries are present:
+
+    kubectl exec my-cluster-kafka-0 -- /bin/sh -c 'ls libs/oauth'
+
+You can follow the Kafka broker log:
+
+    kubectl logs my-cluster-kafka-0 -c kafka -f
+
+### Deploying a Kafka cluster configured with OAuth 2 authentication
+
+In order to test Strimzi Kafka OAuth you'll need a Kafka cluster with OAuth 2 authentication and / or authorization, rather than a basic cluster without any authentication.
+
+For examples of deploying such a cluster see [/examples/kubernetes/README.md](examples/kubernetes/README.md).
+
+### Exploring the Kafka container
+
+You can explore the Kafka container more by starting it in interactive mode:
+
+    docker run --rm -ti $DOCKER_REG/strimzi/kafka:latest-kafka-2.4.0 /bin/sh
+
+Here you've just started another interactive container from within the existing interactive container session.
+Pretty neat!
+
+Let's set a custom prompt so we don't get confused about which session we're in:
+
+    export PS1="strimzi-kafka\$ "
+
+Let's check the OAuth library versions:
+
+    ls -la /opt/kafka/libs/kafka-oauth*
+
+Once you're done exploring, leave the container by issuing:
+
+    exit
+
+The interactive container will automatically be deleted because we used the `--rm` option.
+
+
+Troubleshooting
+---------------
+
+### Error message: Server gave HTTP response to HTTPS client
+
+You get this when pushing to the Docker Registry, for example when running:
+
+    DOCKER_REGISTRY=$DOCKER_REG DOCKER_ORG=strimzi make docker_push
+
+The error looks like:
+
+    Get https://192.168.1.86:5000/v2/: http: server gave HTTP response to HTTPS client
+
+The reason is that the Docker daemon hasn't been configured to treat the Docker Registry with the specified IP as an insecure registry.
+
+If you are switching between WiFis, your local network IP keeps changing. If you are using Kind, the mirror configuration passed when creating the Kind cluster to allow access to the insecure registry over http may then be out of sync with your current local network IP.
+Removing the current Kind cluster and creating a new one should solve the issue.
+
+You may also have to update your Docker Desktop or Docker daemon configuration to add the new IP to `insecure-registries`.
+
+See [Configuring Docker daemon](#configuring-docker-daemon) for how to configure `insecure-registries`.
+
+
+### Error message: node(s) already exist for a cluster with the name "kind"
+
+When creating a new Kubernetes cluster with Kind you can get this error.
+It means that the cluster already exists, but you may not see it when you run:
+
+    docker ps
+
+Try the following:
+
+    docker ps -a | grep kind-control-plane
+
+The `kind-control-plane` container may simply be stopped, and you can restart it with:
+
+    docker start kind-control-plane
+
+If you're in an environment where your local network IP changes (moving around with a laptop, for example) it's safest to just remove the cluster and then create it from scratch:
+
+    kind delete cluster
diff --git a/RELEASE_NOTES.md b/RELEASE_NOTES.md
new file mode 100644
index 00000000..cdd7a91f
--- /dev/null
+++ b/RELEASE_NOTES.md
@@ -0,0 +1,71 @@
+Release Notes
+=============
+
+0.4.0
+-----
+
+### Deprecated configuration options
+
+The following configuration options have been deprecated:
+* `oauth.tokens.not.jwt` is now called `oauth.access.token.is.jwt` and has the opposite meaning.
+* `oauth.validation.skip.type.check` is now called `oauth.check.access.token.type` and has the opposite meaning.
+
+
+See: Align configuration with Kafka Operator PR ([#36](https://github.com/strimzi/strimzi-kafka-oauth/pull/36)).
+
+### Compatibility improvements
+
+The scope claim is no longer required in an access token. ([#30](https://github.com/strimzi/strimzi-kafka-oauth/pull/30))
+That improves compatibility with different authorization servers, since the attribute is neither required by the OAuth 2.0 specification nor used by the validation logic.
+
+### Updated dependencies
+
+The `jackson-core` and `jackson-databind` libraries have been updated to the latest versions. ([#33](https://github.com/strimzi/strimzi-kafka-oauth/pull/33))
+
+### Instructions for developers added
+
+Instructions for preparing the environment, and for building and deploying the latest version of the Strimzi Kafka OAuth library with Strimzi Kafka Operator, have been added.
+
+See: Hacking on OAuth and deploying with Strimzi Kafka Operator PR ([#34](https://github.com/strimzi/strimzi-kafka-oauth/pull/34))
+
+### Improvements to examples and documentation
+
+Fixed remote debugging mode that was left enabled in the example `compose-authz.yml` ([#36](https://github.com/strimzi/strimzi-kafka-oauth/pull/36))
+
+0.3.0
+-----
+
+### Token-based authorization with Keycloak Authorization Services
+
+It is now possible to use Keycloak Authorization Services to centrally manage access control to resources on Kafka brokers ([#36](https://github.com/strimzi/strimzi-kafka-oauth/pull/36)).
+See the [tutorial](examples/README-authz.md), which explains many concepts.
+For configuration details also see the [KeycloakRBACAuthorizer JavaDoc](oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/KeycloakRBACAuthorizer.java).
+
+### ECDSA signature verification support
+
+The JWTSignatureValidator now supports ECDSA signatures, but requires explicit enablement of the BouncyCastle security provider ([#25](https://github.com/strimzi/strimzi-kafka-oauth/pull/25)).
+To enable BouncyCastle, set `oauth.crypto.provider.bouncycastle` to `true`.
+Optionally, you may control the position at which the provider is installed by using `oauth.crypto.provider.bouncycastle.position` - by default it is installed at the end of the list of existing providers.
+
+0.2.0
+-----
+
+### Testsuite with integration tests
+
+A testsuite based on Arquillian Cube, using Docker containers, was added.
+
+### Examples improvements
+
+Added the Ory Hydra authorization server to the examples.
+
+0.1.0
+-----
+
+### Initial OAuth 2 authentication support for Kafka
+
+Support for token-based authentication that plugs into Kafka's SASL_OAUTHBEARER mechanism to provide:
+* Different ways of access token retrieval for Kafka clients (clientId + secret, refresh token, or direct access token)
+* Fast signature-checking token validation mechanism (using the authorization server's JWKS endpoint)
+* Introspection-based token validation mechanism (using the authorization server's introspection endpoint)
+
+See the [tutorial](examples/README.md).
\ No newline at end of file
diff --git a/examples/docker/strimzi-kafka-image/Dockerfile b/examples/docker/strimzi-kafka-image/Dockerfile
new file mode 100644
index 00000000..840c81a4
--- /dev/null
+++ b/examples/docker/strimzi-kafka-image/Dockerfile
@@ -0,0 +1,4 @@
+FROM strimzi/kafka:latest-kafka-2.4.0
+
+COPY target/libs/* /opt/kafka/libs/oauth/
+ENV CLASSPATH /opt/kafka/libs/oauth/*
diff --git a/examples/docker/strimzi-kafka-image/README.md b/examples/docker/strimzi-kafka-image/README.md
new file mode 100644
index 00000000..6979f923
--- /dev/null
+++ b/examples/docker/strimzi-kafka-image/README.md
@@ -0,0 +1,97 @@
+Strimzi Kafka Image with SNAPSHOT Strimzi Kafka OAuth
+=====================================================
+
+This is a build of a Docker image based on `strimzi/kafka:latest-kafka-2.4.0`, with the most recently built local SNAPSHOT version of the Strimzi Kafka OAuth libraries added.
+
+This image adds a `/opt/kafka/libs/oauth` directory and copies the latest jars for OAuth support into it.
+It then puts this directory first on the classpath.
+
+The result is that the most recent Strimzi Kafka OAuth jars and their dependencies are used, because they appear on the classpath before the ones that are part of `strimzi/kafka:latest-kafka-2.4.0`, which are located in the `/opt/kafka/libs` directory.
+
+
+Building
+--------
+
+Use `docker build` to build the image:
+
+    docker build -t strimzi/kafka:latest-kafka-2.4.0-oauth .
+
+You can choose a different tag if you want.
+
+Also, take a look at the Dockerfile:
+
+    less Dockerfile
+
+Note the `FROM` directive in the first line. It refers to the latest publicly available Strimzi Kafka 2.4.0 image.
+
+You may want to adjust this to a different public image, or to one you built manually earlier that is only available in your private Docker Registry.
+
+For example, if you want to base your image on Strimzi Kafka 2.3.1 use `FROM strimzi/kafka:latest-kafka-2.3.1`.
+
+
+Validating
+----------
+
+You can start an interactive shell container and confirm that the jars are there:
+
+    docker run --rm -ti strimzi/kafka:latest-kafka-2.4.0-oauth /bin/sh
+    ls -la libs/oauth/
+    echo "$CLASSPATH"
+
+If you want to play around more within the container you may need to make yourself `root`.
+
+You achieve that by running the docker session as the `root` user:
+
+    docker run --rm -ti --user root strimzi/kafka:latest-kafka-2.4.0-oauth /bin/sh
+
+
+
+Pushing the image to a Docker Registry
+--------------------------------------
+
+For Kubernetes to be able to use our image it needs to be pushed either to a public repository or to the private Docker Registry used by your Kubernetes distro.
+
+For example, if you are using Kubernetes Kind as described in [HACKING.md](../../../HACKING.md), then your Docker Registry is listening on port 5000 of your local ethernet IP.
+
+    # On MacOS
+    export REGISTRY_IP=$(ifconfig en0 | grep 'inet ' | awk '{print $2}') && echo $REGISTRY_IP
+
+    # On Linux
+    #export REGISTRY_IP=$(ifconfig docker0 | grep 'inet ' | awk '{print $2}') && echo $REGISTRY_IP
+
+    export DOCKER_REG=$REGISTRY_IP:5000
+
+You need to retag the built image so that you can push it to the Docker Registry:
+
+    docker tag strimzi/kafka:latest-kafka-2.4.0-oauth $DOCKER_REG/strimzi/kafka:latest-kafka-2.4.0-oauth
+    docker push $DOCKER_REG/strimzi/kafka:latest-kafka-2.4.0-oauth
+
+Actually, Kubernetes Kind supports an even simpler way to make an image available to Kubernetes:
+
+    kind load docker-image strimzi/kafka:latest-kafka-2.4.0-oauth
+
+Deploying
+---------
+
+In order for the operator to use your Kafka image, you have to replace the Kafka image coordinates in `install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml` in your `strimzi-kafka-operator` project.
+
+This image is based on `strimzi/kafka:latest-kafka-2.4.0`, so we need to replace all occurrences of that with the proper coordinates of our image:
+
+    sed -Ei 's#strimzi/kafka:latest-kafka-2.4.0#strimzi/kafka:latest-kafka-2.4.0-oauth#g' install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml
+
+You also have to push the image to the Docker Registry trusted by your Kubernetes cluster, and adjust `050-Deployment-strimzi-cluster-operator.yaml` to account for the changed coordinates.
+
+For example:
+```
+sed -Ei -e "s#(image|value): strimzi/([a-z0-9-]+):latest#\1: ${DOCKER_REG}/strimzi/\2:latest#" \
+    -e "s#([0-9.]+)=strimzi/([a-zA-Z0-9-]+:[a-zA-Z0-9.-]+-kafka-[0-9.]+)#\1=${DOCKER_REG}/strimzi/\2#" \
+    install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml
```
+
+It's best to check the `050-Deployment-strimzi-cluster-operator.yaml` file manually to make sure everything is in order:
+
+    less install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml
+
+
+You can now deploy Strimzi Kafka Operator following the instructions in [HACKING.md](../../../HACKING.md).
+
diff --git a/examples/docker/strimzi-kafka-image/pom.xml b/examples/docker/strimzi-kafka-image/pom.xml
new file mode 100644
index 00000000..4cb8fe6f
--- /dev/null
+++ b/examples/docker/strimzi-kafka-image/pom.xml
@@ -0,0 +1,71 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>io.strimzi.oauth.docker</groupId>
+        <artifactId>kafka-oauth-docker-parent</artifactId>
+        <relativePath>../../pom.xml</relativePath>
+        <version>1.0.0-SNAPSHOT</version>
+    </parent>
+
+    <groupId>org.example</groupId>
+    <artifactId>kafka-oauth-docker-strimzi-kafka</artifactId>
+    <version>1.0.0-SNAPSHOT</version>
+
+    <packaging>pom</packaging>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-dependency-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <id>copy</id>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>copy</goal>
+                        </goals>
+                    </execution>
+                </executions>
+                <configuration>
+                    <artifactItems>
+                        <artifactItem>
+                            <groupId>io.strimzi</groupId>
+                            <artifactId>kafka-oauth-client</artifactId>
+                        </artifactItem>
+                        <artifactItem>
+                            <groupId>io.strimzi</groupId>
+                            <artifactId>kafka-oauth-server</artifactId>
+                        </artifactItem>
+                        <artifactItem>
+                            <groupId>io.strimzi</groupId>
+                            <artifactId>kafka-oauth-common</artifactId>
+                        </artifactItem>
+                        <artifactItem>
+                            <groupId>io.strimzi</groupId>
+                            <artifactId>kafka-oauth-keycloak-authorizer</artifactId>
+                        </artifactItem>
+                        <artifactItem>
+                            <groupId>org.keycloak</groupId>
+                            <artifactId>keycloak-core</artifactId>
+                        </artifactItem>
+                        <artifactItem>
+                            <groupId>org.keycloak</groupId>
+                            <artifactId>keycloak-common</artifactId>
+                        </artifactItem>
+                        <artifactItem>
+                            <groupId>org.bouncycastle</groupId>
+                            <artifactId>bcprov-jdk15on</artifactId>
+                        </artifactItem>
+                    </artifactItems>
+                    <outputDirectory>target/libs</outputDirectory>
+                    <overWriteReleases>false</overWriteReleases>
+                    <overWriteSnapshots>true</overWriteSnapshots>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
diff --git a/examples/kubernetes/README.md b/examples/kubernetes/README.md
new file mode 100644
index 00000000..5910e4d0
--- /dev/null
+++ b/examples/kubernetes/README.md
@@ -0,0 +1,121 @@
+Examples of Strimzi Kafka Cluster with OAuth
+--------------------------------------------
+
+Here are several examples of Kafka Cluster definitions for deployment with Strimzi Cluster Operator.
+They assume Keycloak is used as an authorization server, with properly configured realms called 'demo' and 'kafka-authz'.
+
+* `keycloak.yaml`
+
+  A Keycloak pod you can use to start an ephemeral instance of Keycloak. Any changes to the realms will be lost when the pod shuts down. This is the first yaml file you'll want to deploy.
+
+* `kafka-oauth-single.yaml`
+
+  A single node Kafka cluster using Apache Kafka 2.3.1 with OAuth 2 authentication using the `demo` realm, and fast local signature validation (with keys loaded from the JWKS endpoint) for validating access tokens.
+
+* `kafka-oauth-single-2_4.yaml`
+
+  Same as `kafka-oauth-single.yaml` except using Apache Kafka 2.4.0.
+
+* `kafka-oauth-single-introspect.yaml`
+
+  A single node Kafka cluster using Apache Kafka 2.3.1 with OAuth 2 authentication using the `demo` realm, and the introspection endpoint for access token validation.
+
+* `kafka-oauth-single-authz.yaml`
+
+  A single node Kafka cluster using Apache Kafka 2.3.1 with OAuth 2 authentication using the `kafka-authz` realm, fast local signature validation, and Keycloak Authorization Services for token-based authorization.
+
+* `kafka-oauth-single-2_4-authz.yaml`
+
+  Same as `kafka-oauth-single-authz.yaml` except using Apache Kafka 2.4.0.
+
+### Deploying Keycloak and accessing the Keycloak Admin Console
+
+Before deploying any of the Kafka cluster definitions, you need to deploy a Keycloak instance and configure the realms with the necessary client definitions.
+
+Deploy the Keycloak server:
+
+    kubectl apply -f keycloak.yaml
+
+Wait for Keycloak to start up:
+
+    kubectl get pod
+    kubectl logs $(kubectl get pod | grep keycloak | awk '{print $1}')
+
+In order to connect to the Keycloak Admin Console you need an IP address and a port where it is listening. From the point of view of the Keycloak pod it is listening on port 8080 on all the interfaces.
The `NodePort` service also exposes a port on the Kubernetes Node's IP:
+
+    kubectl get svc | grep keycloak
+    KEYCLOAK_PORT=$(kubectl get svc | grep keycloak | awk -F '8080:' '{print $2}' | awk -F '/' '{print $1}')
+    echo Keycloak port: $KEYCLOAK_PORT
+
+The actual IP address and port to use in order to reach the Keycloak Admin Console from your host machine depend on your Kubernetes installation.
+
+
+#### Minishift
+
+    KEYCLOAK_HOST=$(minishift ip)
+    KEYCLOAK_PORT=$(kubectl get svc | grep keycloak | awk -F '8080:' '{print $2}' | awk -F '/' '{print $1}')
+    echo http://$KEYCLOAK_HOST:$KEYCLOAK_PORT/auth/admin
+
+You can then open the printed URL and log in with admin:admin.
+
+
+#### Minikube
+
+You can connect directly to the Kubernetes Node IP using a NodePort port:
+
+    KEYCLOAK_HOST=$(minikube ip)
+    KEYCLOAK_PORT=$(kubectl get svc | grep keycloak | awk -F '8080:' '{print $2}' | awk -F '/' '{print $1}')
+    echo http://$KEYCLOAK_HOST:$KEYCLOAK_PORT/auth/admin
+
+You can then open the printed URL and log in with admin:admin.
+
+
+#### Kubernetes Kind
+
+In order to connect to the Keycloak Admin Console you have to create a TCP tunnel:
+
+    kubectl port-forward svc/keycloak 8080:8080
+
+You can then open http://localhost:8080/auth/admin and log in with admin:admin.
+
+
+### Importing example realms
+
+This step depends on your development environment, because we have to build a custom Docker image and deploy it as a Kubernetes pod, which requires pushing it to a Docker Registry first.
+
+First we build the `keycloak-import` Docker image:
+
+    cd examples/docker/keycloak-import
+    docker build . -t strimzi/keycloak-import
+
+Then we tag and push it to the Docker Registry:
+
+    docker tag strimzi/keycloak-import $REGISTRY_IP:$REGISTRY_PORT/strimzi/keycloak-import
+    docker push $REGISTRY_IP:$REGISTRY_PORT/strimzi/keycloak-import
+
+Here we assume we know the IP address (`$REGISTRY_IP`) of the Docker Registry container and the port (`$REGISTRY_PORT`) it's listening on. We also assume that, if it is an insecure Docker Registry, the Docker daemon has been configured to trust it, and that you have authenticated to the registry if your environment requires that. And, very importantly, we assume that this is either a public Docker Registry accessible to your Kubernetes deployment or the internal Docker Registry used by your Kubernetes install.
+
+See [HACKING.md](../../HACKING.md) for more information on setting up the local development environment with all the pieces in place.
+
+
+Now deploy it as a Kubernetes pod:
+
+    kubectl run -ti --attach keycloak-import --image=$REGISTRY_IP:$REGISTRY_PORT/strimzi/keycloak-import
+
+The container will perform the imports of the realms into the Keycloak server, and exit. If you run `kubectl get pod` you'll see it in `CrashLoopBackOff` state, because as soon as it's done Kubernetes restarts the pod in the background, which tries to execute the same imports again, and fails. You'll also see errors in the Keycloak log, but as long as the initial realm import was successful, you can safely ignore them.
+
+Remove the `keycloak-import` deployment:
+
+    kubectl delete deployment keycloak-import
+
+
+### Deploying the Kafka cluster
+
+Assuming you have already installed Strimzi Kafka Operator, you can now simply deploy one of the `kafka-oauth-*` yaml files. All the examples are configured with OAuth 2 for authentication.
+ +For example: + + kubectl apply -f kafka-oauth-single-authz.yaml + + + diff --git a/examples/kubernetes/kafka-oauth-single-2_4-authz.yaml b/examples/kubernetes/kafka-oauth-single-2_4-authz.yaml new file mode 100644 index 00000000..4706d1a8 --- /dev/null +++ b/examples/kubernetes/kafka-oauth-single-2_4-authz.yaml @@ -0,0 +1,49 @@ +apiVersion: kafka.strimzi.io/v1beta1 +kind: Kafka +metadata: + name: my-cluster +spec: + kafka: + version: 2.4.0 + replicas: 1 + listeners: + plain: + authentication: + type: oauth + validIssuerUri: http://keycloak:8080/auth/realms/kafka-authz + jwksEndpointUri: http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/certs + userNameClaim: preferred_username + tls: {} + authorization: + type: keycloak + clientId: kafka + tokenEndpointUri: http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token + delegateToKafkaAcls: true + superUsers: + - User:service-account-kafka + logging: + type: inline + loggers: + log4j.logger.io.strimzi: "DEBUG" + config: + offsets.topic.replication.factor: 1 + transaction.state.log.replication.factor: 1 + transaction.state.log.min.isr: 1 + log.message.format.version: "2.4" + storage: + type: jbod + volumes: + - id: 0 + type: persistent-claim + size: 100Gi + deleteClaim: false + zookeeper: + replicas: 1 + storage: + type: persistent-claim + size: 100Gi + deleteClaim: false + entityOperator: + topicOperator: {} + userOperator: {} + diff --git a/examples/kubernetes/kafka-oauth-single-2_4.yaml b/examples/kubernetes/kafka-oauth-single-2_4.yaml new file mode 100644 index 00000000..e4a25066 --- /dev/null +++ b/examples/kubernetes/kafka-oauth-single-2_4.yaml @@ -0,0 +1,42 @@ +apiVersion: kafka.strimzi.io/v1beta1 +kind: Kafka +metadata: + name: my-cluster +spec: + kafka: + version: 2.4.0 + replicas: 1 + listeners: + plain: + authentication: + type: oauth + validIssuerUri: http://keycloak:8080/auth/realms/demo + jwksEndpointUri: 
http://keycloak:8080/auth/realms/demo/protocol/openid-connect/certs + userNameClaim: preferred_username + tls: {} + logging: + type: inline + loggers: + log4j.logger.io.strimzi: "DEBUG" + config: + offsets.topic.replication.factor: 1 + transaction.state.log.replication.factor: 1 + transaction.state.log.min.isr: 1 + log.message.format.version: "2.4" + storage: + type: jbod + volumes: + - id: 0 + type: persistent-claim + size: 100Gi + deleteClaim: false + zookeeper: + replicas: 1 + storage: + type: persistent-claim + size: 100Gi + deleteClaim: false + entityOperator: + topicOperator: {} + userOperator: {} + diff --git a/examples/kubernetes/kafka-oauth-single-authz.yaml b/examples/kubernetes/kafka-oauth-single-authz.yaml new file mode 100644 index 00000000..f9287c9e --- /dev/null +++ b/examples/kubernetes/kafka-oauth-single-authz.yaml @@ -0,0 +1,49 @@ +apiVersion: kafka.strimzi.io/v1beta1 +kind: Kafka +metadata: + name: my-cluster +spec: + kafka: + version: 2.3.1 + replicas: 1 + listeners: + plain: + authentication: + type: oauth + validIssuerUri: http://keycloak:8080/auth/realms/kafka-authz + jwksEndpointUri: http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/certs + userNameClaim: preferred_username + tls: {} + authorization: + type: keycloak + clientId: kafka + tokenEndpointUri: http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token + delegateToKafkaAcls: true + superUsers: + - User:service-account-kafka + logging: + type: inline + loggers: + log4j.logger.io.strimzi: "DEBUG" + config: + offsets.topic.replication.factor: 1 + transaction.state.log.replication.factor: 1 + transaction.state.log.min.isr: 1 + log.message.format.version: "2.3" + storage: + type: jbod + volumes: + - id: 0 + type: persistent-claim + size: 100Gi + deleteClaim: false + zookeeper: + replicas: 1 + storage: + type: persistent-claim + size: 100Gi + deleteClaim: false + entityOperator: + topicOperator: {} + userOperator: {} + diff --git 
a/examples/kubernetes/kafka-oauth-single-introspect.yaml b/examples/kubernetes/kafka-oauth-single-introspect.yaml new file mode 100644 index 00000000..bfd76d05 --- /dev/null +++ b/examples/kubernetes/kafka-oauth-single-introspect.yaml @@ -0,0 +1,46 @@ +apiVersion: kafka.strimzi.io/v1beta1 +kind: Kafka +metadata: + name: my-cluster +spec: + kafka: + version: 2.3.1 + replicas: 1 + listeners: + plain: + authentication: + type: oauth + validIssuerUri: http://keycloak:8080/auth/realms/demo + introspectionEndpointUri: http://keycloak:8080/auth/realms/demo/protocol/openid-connect/token/introspect + userNameClaim: preferred_username + clientId: kafka-broker + clientSecret: + secretName: my-cluster-oauth-client-secret + key: clientSecret + tls: {} + logging: + type: inline + loggers: + log4j.logger.io.strimzi: "DEBUG" + config: + offsets.topic.replication.factor: 1 + transaction.state.log.replication.factor: 1 + transaction.state.log.min.isr: 1 + log.message.format.version: "2.3" + storage: + type: jbod + volumes: + - id: 0 + type: persistent-claim + size: 100Gi + deleteClaim: false + zookeeper: + replicas: 1 + storage: + type: persistent-claim + size: 100Gi + deleteClaim: false + entityOperator: + topicOperator: {} + userOperator: {} + diff --git a/examples/kubernetes/kafka-oauth-single.yaml b/examples/kubernetes/kafka-oauth-single.yaml new file mode 100644 index 00000000..216555c6 --- /dev/null +++ b/examples/kubernetes/kafka-oauth-single.yaml @@ -0,0 +1,42 @@ +apiVersion: kafka.strimzi.io/v1beta1 +kind: Kafka +metadata: + name: my-cluster +spec: + kafka: + version: 2.3.1 + replicas: 1 + listeners: + plain: + authentication: + type: oauth + validIssuerUri: http://keycloak:8080/auth/realms/demo + jwksEndpointUri: http://keycloak:8080/auth/realms/demo/protocol/openid-connect/certs + userNameClaim: preferred_username + tls: {} + logging: + type: inline + loggers: + log4j.logger.io.strimzi: "DEBUG" + config: + offsets.topic.replication.factor: 1 + 
transaction.state.log.replication.factor: 1 + transaction.state.log.min.isr: 1 + log.message.format.version: "2.3" + storage: + type: jbod + volumes: + - id: 0 + type: persistent-claim + size: 100Gi + deleteClaim: false + zookeeper: + replicas: 1 + storage: + type: persistent-claim + size: 100Gi + deleteClaim: false + entityOperator: + topicOperator: {} + userOperator: {} + diff --git a/examples/kubernetes/keycloak.yaml b/examples/kubernetes/keycloak.yaml new file mode 100644 index 00000000..541183a5 --- /dev/null +++ b/examples/kubernetes/keycloak.yaml @@ -0,0 +1,56 @@ +apiVersion: v1 +kind: Service +metadata: + name: keycloak + labels: + app: keycloak +spec: + ports: + - name: http + port: 8080 + targetPort: 8080 + - name: https + port: 8443 + targetPort: 8443 + selector: + app: keycloak + type: NodePort + +--- + +apiVersion: apps/v1 +kind: Deployment +metadata: + name: keycloak +spec: + replicas: 1 + selector: + matchLabels: + app: keycloak + template: + metadata: + labels: + app: keycloak + spec: + containers: + - name: keycloak + image: jboss/keycloak + args: + - "-b 0.0.0.0" + - "-Dkeycloak.profile.feature.upload_scripts=enabled" + env: + - name: KEYCLOAK_USER + value: admin + - name: KEYCLOAK_PASSWORD + value: admin + - name: PROXY_ADDRESS_FORWARDING + value: "true" + ports: + - name: http + containerPort: 8080 + - name: https + containerPort: 8443 + readinessProbe: + httpGet: + path: /auth/realms/master + port: 8080
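
The `keycloak` Service above is of type `NodePort`, so Kubernetes allocates a node port for it, which the README extracts from `kubectl get svc` output with an `awk` pipeline. The pipeline can be checked offline against a sample service line; the line below is illustrative only (the node ports `31852` and `30944` are made up, not output from a real cluster):

```shell
# Illustrative line shaped like `kubectl get svc` output for the keycloak service
SVC_LINE='keycloak   NodePort   10.96.10.20   <none>   8080:31852/TCP,8443:30944/TCP   5m'

# Split on "8080:" to take everything after the in-cluster port,
# then split on "/" to keep just the allocated node port number
KEYCLOAK_PORT=$(echo "$SVC_LINE" | awk -F '8080:' '{print $2}' | awk -F '/' '{print $1}')

echo "Keycloak port: $KEYCLOAK_PORT"   # prints: Keycloak port: 31852
```

The same two-stage split works regardless of how many ports the service exposes, because only the mapping that follows `8080:` is selected by the first `awk`.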