Ap4k is a collection of Java annotations and processors for generating Kubernetes/OpenShift manifests at compile time.
It makes generating Kubernetes manifests as easy as adding @KubernetesApplication to your main class.
Stop wasting time editing xml, json and yml; customize the kubernetes manifests using annotations instead.
- Generates manifests via annotation processing
- Customize manifests using annotations
  - Kubernetes
    - labels
    - annotations
    - environment variables
    - mounts
    - ports and services
    - jvm options
    - init containers
    - sidecars
  - Annotationless mode for known frameworks
    - Spring Boot
  - OpenShift
    - image streams
    - build configurations
  - Prometheus
  - Service Catalog
    - service instances
    - inject bindings into pods
  - Istio
    - proxy injection
  - Component CRD
- Build tool independent (works with maven, gradle, bazel and so on)
- Rich framework integration
  - Port, Service and Probe auto configuration
    - Spring Boot
    - Thorntail
    - Micronaut
- Integration with external generators
- Rich set of examples
- Register hooks for triggering builds and deployment
  - Build hooks
    - Docker build hook
    - Source to image build hook
- junit5 integration testing extension
There are tons of tools out there for scaffolding / generating kubernetes manifests. Sooner or later these manifests will require customization. Handcrafting them is not an appealing option. Using external tools is often too generic. Using build tool extensions and adding configuration via xml, groovy etc. is a step forward, but still not optimal.
Annotation processing has quite a few advantages over external tools or build tool extensions:
- Configuration is validated by the compiler.
- Leverages tools like the IDE for writing type safe config (checking, completion etc).
- Works with all build tools.
- Can "react" to annotations provided by the framework.
To start using this project you just need to add one of the provided annotations to your code.
For example, @KubernetesApplication can be added to your main class like:
import io.ap4k.kubernetes.annotation.KubernetesApplication;
@KubernetesApplication
public class Main {
public static void main(String[] args) {
//Your application code goes here.
}
}
When the project gets compiled, the annotation will trigger the generation of a Deployment, in both json and yml, which will end up under 'target/classes/META-INF/apk'.
The annotation comes with a lot of parameters, which can be used in order to customize the Deployment and/or trigger the generation of additional resources, like Service and Ingress.
This module can be added to the project using:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>kubernetes-annotations</artifactId>
<version>${project.version}</version>
</dependency>
So where does the generated Deployment get its name, docker image etc from?
Everything can be customized via annotation parameters and system properties.
On top of that, lightweight integration with build tools is provided in order to reduce duplication.
Lightweight integration with build tools refers to reading information from the build tool config without bringing the build tool itself into the classpath. The information read from the build tool is limited to:
- name / artifactId
- version
- output file
For example, in the case of maven, it means parsing the pom.xml with DOM in order to fetch the artifactId and version.
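As an illustration of the approach (this is not ap4k's actual implementation; the class and method names below are made up), such a lightweight reader can be written with nothing but the JDK's built-in DOM parser:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class PomReaderSketch {

  public static void main(String[] args) throws Exception {
    // Parse pom.xml with the JDK DOM parser; no Maven classes on the classpath.
    Document doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder()
        .parse(new File("pom.xml"));
    Element project = doc.getDocumentElement();
    System.out.println(childText(project, "artifactId") + ":" + childText(project, "version"));
  }

  // Returns the text of a direct child element of <project>, so nested elements
  // (e.g. versions under <dependencies>) are not picked up by accident.
  // A real reader would also fall back to <parent><version> when the version is inherited.
  private static String childText(Element parent, String tag) {
    NodeList children = parent.getChildNodes();
    for (int i = 0; i < children.getLength(); i++) {
      Node node = children.item(i);
      if (node.getNodeType() == Node.ELEMENT_NODE && tag.equals(node.getNodeName())) {
        return node.getTextContent().trim();
      }
    }
    return null;
  }
}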
Supported build tools:
- maven
- gradle
- sbt
- bazel
For all other build tools, the name and version need to be provided via the core annotations:
@KubernetesApplication(name = "my-app", version="1.1.0.Final")
public class Main {
}
or
@OpenshiftApplication(name = "my-app", version="1.1.0.Final")
public class Main {
}
and so on...
The information read from the build tool is added to all resources as labels (name, version). It is also used to name images, containers, deployments, services etc.
For example, for a gradle app with the following gradle.properties:
name = my-gradle-app
version = 1.0.0
The following deployment will be generated:
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "my-gradle-app"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "my-gradle-app"
      version: "1.0.0"
      group: "default"
  template:
    metadata:
      labels:
        app: "my-gradle-app"
        version: "1.0.0"
        group: "default"
    spec:
      containers:
      - env:
        - name: "KUBERNETES_NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: "metadata.namespace"
        image: "default/my-gradle-app:1.0.0"
        imagePullPolicy: "IfNotPresent"
        name: "my-gradle-app"
In certain cases, the output file name may be used to set the value of JAVA_APP_JAR, an environment variable that points to the built jar.
To add extra ports to the container, you can add one or more @Port into your @KubernetesApplication:
import io.ap4k.kubernetes.annotation.Port;
import io.ap4k.kubernetes.annotation.KubernetesApplication;
@KubernetesApplication(ports = @Port(name = "web", containerPort = 8080))
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
This will trigger the addition of a container port to the Deployment, but it will also trigger the generation of a Service resource.
Note: This doesn't need to be done explicitly; if the application framework is detected and supported, ports can be extracted from there (see below).
To add extra environment variables to the container, you can add one or more @Env into your @KubernetesApplication:
import io.ap4k.kubernetes.annotation.Env;
import io.ap4k.kubernetes.annotation.KubernetesApplication;
@KubernetesApplication(envVars = @Env(name = "key1", value = "var1"))
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
Additional options are provided for adding environment variables from fields, config maps and secrets.
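A hedged sketch of what that could look like, assuming @Env also exposes field, configmap and secret parameters (check the annotation itself for the exact names):

import io.ap4k.kubernetes.annotation.Env;
import io.ap4k.kubernetes.annotation.KubernetesApplication;

// Sketch only: the field/configmap/secret parameter names are assumptions.
@KubernetesApplication(envVars = {
    @Env(name = "KUBERNETES_NAMESPACE", field = "metadata.namespace"), // downward API field
    @Env(name = "APP_CONFIG", configmap = "my-config"),                // from a ConfigMap
    @Env(name = "DB_PASSWORD", secret = "my-secret")                   // from a Secret
})
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}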
To define volumes and mounts for your application, you can use something like:
import io.ap4k.kubernetes.annotation.Port;
import io.ap4k.kubernetes.annotation.Mount;
import io.ap4k.kubernetes.annotation.PersistentVolumeClaimVolume;
import io.ap4k.kubernetes.annotation.KubernetesApplication;
@KubernetesApplication(ports = @Port(name = "http", containerPort = 8080),
pvcVolumes = @PersistentVolumeClaimVolume(volumeName = "mysql-volume", claimName = "mysql-pvc"),
mounts = @Mount(name = "mysql-volume", path = "/var/lib/mysql")
)
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
Currently, the supported annotations for specifying volumes are (see the sketch after this list):
- @PersistentVolumeClaimVolume
- @SecretVolume
- @ConfigMapVolume
- @AwsElasticBlockStoreVolume
- @AzureDiskVolume
- @AzureFileVolume
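For instance, mounting a secret might look like the sketch below; the volumeName and secretName parameter names are assumed by analogy with the @PersistentVolumeClaimVolume example above:

import io.ap4k.kubernetes.annotation.Mount;
import io.ap4k.kubernetes.annotation.SecretVolume;
import io.ap4k.kubernetes.annotation.KubernetesApplication;

// Sketch only: parameter names are assumed by analogy with @PersistentVolumeClaimVolume.
@KubernetesApplication(
    secretVolumes = @SecretVolume(volumeName = "ssl-volume", secretName = "ssl-secret"),
    mounts = @Mount(name = "ssl-volume", path = "/etc/ssl/certs")
)
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}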
It's common to pass JVM options in the manifests using the JAVA_OPTS environment variable of the application container.
This is error prone, as it is difficult to remember all the options by heart, and mistakes are often not noticed until it's TOO late.
Ap4k provides a way to manage those options using the @JvmOptions annotation, which is included in the option-annotations module.
import io.ap4k.options.annotation.JvmOptions;
import io.ap4k.options.annotation.GarbageCollector;
import io.ap4k.kubernetes.annotation.KubernetesApplication;
@KubernetesApplication
@JvmOptions(server=true, xmx=1024, preferIpv4Stack=true, gc=GarbageCollector.SerialGC)
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
This module can be added to the project using:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>option-annotations</artifactId>
<version>${project.version}</version>
</dependency>
Note: The module is included in all starters.
If for any reason the application requires the use of init containers, they can be easily defined using the initContainers property, as demonstrated below.
import io.ap4k.kubernetes.annotation.Container;
import io.ap4k.kubernetes.annotation.KubernetesApplication;
@KubernetesApplication(initContainers = @Container(image="foo/bar:latest", command="foo"))
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
The @Container supports the following fields (a combined sketch follows the list):
- Image
- Image Pull Policy
- Commands
- Arguments
- Environment Variables
- Mounts
- Probes
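As a sketch of how several of these fields can be combined (the envVars parameter name on @Container is an assumption by analogy with @KubernetesApplication; args is used the same way as in the sidecar example further down):

import io.ap4k.kubernetes.annotation.Container;
import io.ap4k.kubernetes.annotation.Env;
import io.ap4k.kubernetes.annotation.KubernetesApplication;

// Sketch only: an init container that waits for a database before the main container starts.
@KubernetesApplication(initContainers = @Container(
    image = "busybox",
    command = "sh",
    args = {"-c", "until nc -z $DB_HOST 3306; do sleep 1; done"},
    envVars = @Env(name = "DB_HOST", value = "mysql")))
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}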
Similarly to init containers, support for sidecars is also provided using the sidecars property. For example:
import io.ap4k.kubernetes.annotation.Container;
import io.ap4k.kubernetes.annotation.KubernetesApplication;
@KubernetesApplication(sidecars = @Container(image="jaegertracing/jaeger-agent",
args="--collector.host-port=jaeger-collector.jaeger-infra.svc:14267"))
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
As in the case of init containers, the @Container supports the following fields:
- Image
- Image Pull Policy
- Commands
- Arguments
- Environment Variables
- Mounts
- Probes
This module can be added to the project using:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>kubernetes-annotations</artifactId>
<version>${project.version}</version>
</dependency>
This module provides the following annotation:
- @OpenshiftApplication
@OpenshiftApplication works exactly like @KubernetesApplication, but will generate resources in a file named openshift.yml / openshift.json instead.
Also, instead of creating a Deployment, it will create a DeploymentConfig.
NOTE: A project can use both @KubernetesApplication and @OpenshiftApplication. If both the kubernetes and openshift annotation processors are present both kubernetes and openshift resources will be generated.
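For example, a class carrying both annotations (with both processors on the classpath) will get both the kubernetes and the openshift manifests generated:

import io.ap4k.kubernetes.annotation.KubernetesApplication;
import io.ap4k.openshift.annotation.OpenshiftApplication;

// With both annotation processors present, this single class produces both
// the kubernetes and the openshift resources under target/classes/META-INF/apk.
@KubernetesApplication
@OpenshiftApplication
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}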
This module can be added to the project using:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>openshift-annotations</artifactId>
<version>${project.version}</version>
</dependency>
Out of the box, resources for s2i will be generated:
- ImageStream
  - builder
  - target
- BuildConfig
Here's an example:
import io.ap4k.openshift.annotation.OpenshiftApplication;
@OpenshiftApplication(name = "doc-example")
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
The generated BuildConfig will be a binary config. The actual build can be triggered from the command line with something like:
oc start-build doc-example --from-dir=./target --follow
NOTE: In the example above we explicitly set a name for our application, and we referenced that name from the cli.
If the name was implicitly created, the user would have to figure the name out before triggering the build. This could be done either with oc get bc or by knowing the conventions used to read names from the build tool config (e.g. for maven, the artifactId).
- openshift example
- source to image example
- spring boot on openshift example
- spring boot with groovy on openshift example
- spring boot with gradle on openshift example
The prometheus annotation processor provides annotations for generating prometheus related resources. In particular, it can generate ServiceMonitor resources, which are used by the Prometheus Operator in order to configure prometheus to collect metrics from the target application.
This is done with the use of the @EnableServiceMonitor annotation.
Here's an example:
import io.ap4k.kubernetes.annotation.KubernetesApplication;
import io.ap4k.prometheus.annotation.EnableServiceMonitor;
@KubernetesApplication
@EnableServiceMonitor(port = "http", path="/prometheus", interval=20)
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
The annotation processor will automatically configure the required selector and generate the ServiceMonitor.
Note: Some of the framework integration modules may further decorate the ServiceMonitor with framework specific configuration. For example, the Spring Boot module will decorate the monitor with the Spring Boot specific path, which is /actuator/prometheus.
The jaeger annotation processor provides annotations for injecting the jaeger-agent into the application pod.
Most of the work is done with the use of the @EnableJaegerAgent annotation.
When the jaeger operator is available, you can set the operatorEnabled property to true.
The annotation processor will then automatically set the required annotations on the generated deployment, so that the jaeger operator can inject the jaeger-agent.
Here's an example:
import io.ap4k.kubernetes.annotation.KubernetesApplication;
import io.ap4k.jaeger.annotation.EnableJaegerAgent;
@KubernetesApplication
@EnableJaegerAgent(operatorEnabled=true)
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
For the cases where the operator is not present, you can use @EnableJaegerAgent to manually configure the sidecar.
import io.ap4k.kubernetes.annotation.KubernetesApplication;
import io.ap4k.jaeger.annotation.EnableJaegerAgent;
@KubernetesApplication
@EnableJaegerAgent
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
The service catalog annotation processor can be used to create service catalog resources for:
- creating services instances
- binding to services
- injecting binding info into the container
Here's an example:
import io.ap4k.kubernetes.annotation.KubernetesApplication;
import io.ap4k.servicecatalog.annotation.ServiceCatalogInstance;
import io.ap4k.servicecatalog.annotation.ServiceCatalog;
@KubernetesApplication
@ServiceCatalog(instances =
@ServiceCatalogInstance(name = "mysql-instance", serviceClass = "apb-mysql", servicePlan = "default")
)
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
The @ServiceCatalogInstance annotation will trigger the generation of a ServiceInstance and a ServiceBinding resource.
It will also decorate any Pod, Deployment, DeploymentConfig and so on with additional environment variables containing the binding information.
This module can be added to the project using:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>servicecatalog-annotations</artifactId>
<version>${project.version}</version>
</dependency>
The istio annotation processor can be used to automatically inject the istio sidecar to the generated resources. For example:
import io.ap4k.kubernetes.annotation.KubernetesApplication;
import io.ap4k.istio.annotation.Istio;
@Istio
@KubernetesApplication
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
This module can be added to the project using:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>istio-annotations</artifactId>
<version>${project.version}</version>
</dependency>
The component CRD aims at abstracting kubernetes/OpenShift resources and simplifying the configuration and design of an application. See the following project for more info about how the structure and syntax of a Component (runtime, services, links) is defined. To play with the Component CRD and its operator running on the cloud platform, able to generate the kubernetes resources or manage them, look at this project. This module provides limited/early support of the component operator.
By adding the @CompositeApplication annotation to the application, the generation of 'target/classes/META-INF/apk/component.yml' is triggered.
The content of the component descriptor will be determined by the existing config provided by annotations like:
- @KubernetesApplication
- @ServiceCatalog
- and more...
For example, the following code:
import io.ap4k.kubernetes.annotation.KubernetesApplication;
import io.ap4k.component.annotation.CompositeApplication;
import io.ap4k.servicecatalog.annotation.ServiceCatalog;
import io.ap4k.servicecatalog.annotation.ServiceCatalogInstance;
@KubernetesApplication
@ServiceCatalog(instances = @ServiceCatalogInstance(name = "mysql-instance", serviceClass = "apb-mysql", servicePlan = "default", secretName="mysql-secret"))
@CompositeApplication
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
Will trigger the creation of the following component:
apiVersion: "v1beta1"
kind: "Component"
metadata:
  name: ""
spec:
  deploymentMode: "innerloop"
  services:
  - name: "mysql-instance"
    class: "apb-mysql"
    plan: "default"
    secretName: "mysql-secret"
This module can be added to the project using:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>component-annotations</artifactId>
<version>${project.version}</version>
</dependency>
The @EnableApplicationResource annotation enables the generation of the Application custom resource, which is defined as part of https://github.com/kubernetes-sigs/application.
To use this annotation, one needs:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>application-annotations</artifactId>
<version>${project.version}</version>
</dependency>
Then it's just a matter of specifying:
import io.ap4k.kubernetes.annotation.KubernetesApplication;
import io.ap4k.application.annotation.EnableApplicationResource;
@KubernetesApplication
@EnableApplicationResource(icons=@Icon(src="url/to/icon"), owners=@Contact(name="John Doe", email="[email protected]"))
public class Main {
public static void main(String[] args) {
//Your code goes here
}
}
Along with the resources that ap4k usually generates, there will also be an Application custom resource.
Framework integration modules are provided so that ap4k can detect framework annotations and adapt to the framework (e.g. expose ports).
The frameworks supported so far:
- Spring Boot
- Thorntail (or any framework using jaxrs, jaxws annotations)
- Micronaut
With Spring Boot, it's suggested to start with one of the provided starters:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>kubernetes-spring-starter</artifactId>
<version>${project.version}</version>
</dependency>
Or if you are on openshift:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>openshift-spring-starter</artifactId>
<version>${project.version}</version>
</dependency>
For Spring Boot applications, all you need to do is add one of the starters to the classpath; no additional annotation is needed (see the sketch below). This provides the fastest way to get started using ap4k with Spring Boot.
Note: If you need to customize the generated manifests, you still have to use annotations.
In future releases, it should be possible to fully customize the manifests just by using application.properties.
Apart from the core feature, which is resource generation, there are a couple of experimental features that add to the developer experience.
These features have to do with things like building, deploying and testing.
Ap4k does not generate Dockerfiles, nor does it provide internal support for performing docker or s2i builds.
It does however allow the user to hook external tools (e.g. docker or oc) to trigger container image builds after the end of compilation.
So, at the moment, the following hooks are provided as an experimental feature:
- docker build hook (requires the docker binary, triggered with -Dap4k.build=true)
- docker push hook (requires the docker binary, triggered with -Dap4k.push=true)
- openshift s2i build hook (requires the oc binary, triggered with -Dap4k.deploy=true)
This hook will just trigger a docker build, using an existing Dockerfile at the root of the project. It will not generate or customize the docker build in any way.
To enable the docker build hook you need:
- a Dockerfile in the project/module root
- the docker binary configured to point to the docker daemon of your kubernetes environment
To trigger the hook, you need to pass -Dap4k.build=true as an argument to the build, for example:
mvn clean install -Dap4k.build=true
or if you are using gradle:
gradle build -Dap4k.build=true
When push is enabled, the registry can be specified as part of the annotation, or via system properties. Here's an example via annotation configuration:
@EnableDockerBuild(registry="quay.io")
public class Main {
}
And here's how it can be done via build properties (system properties):
mvn clean install -Dap4k.docker.registry=quay.io -Dap4k.push=true
Note: Ap4k will NOT push images on its own. It will delegate to the docker binary, so the user needs to make sure beforehand that they are logged in and have taken all necessary actions for a docker push to work.
This hook will just trigger an s2i binary build that passes the output folder as an input to the build.
To enable the s2i build hook you need:
- the openshift-annotations module (already included in all openshift starter modules)
- the oc binary configured to point to your OpenShift environment
Finally, to trigger the hook, you need to pass -Dap4k.build=true as an argument to the build, for example:
mvn clean install -Dap4k.build=true
or if you are using gradle:
gradle build -Dap4k.build=true
Ap4k provides two junit5 extensions for:
- Kubernetes
- OpenShift
These extensions are ap4k aware and can read the generated resources and configuration, in order to manage end-to-end tests for the annotated applications.
- Environment conditions
- Container builds
- Apply generated manifests to test environment
- Inject tests with:
  - client
  - application pod
The kubernetes extension can be used by adding the following dependency:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>kubernetes-junit</artifactId>
<version>${project.version}</version>
</dependency>
This dependency gives access to @KubernetesIntegrationTest, which is what enables the extension for your tests.
By adding the annotation to your test class, the following things will happen:
- The extension will check if a kubernetes cluster is available (if not, tests will be skipped).
- If @EnableDockerBuild is present in the project, a docker build will be triggered.
- All generated manifests will be applied.
- The extension will wait until the applied resources are ready.
- Dependencies will be injected (e.g. KubernetesClient, Pod etc.).
- The test will run.
- Applied resources will be removed.
Supported items for injection:
- KubernetesClient
- Pod (the application pod)
- KubernetesList (the list with all generated resources)
To inject one of these, you need a field in the code annotated with @Inject.
For example:
@Inject
KubernetesClient client;
When injecting a Pod, it's likely that we need to specify the pod name. Since the pod name is not known in advance, we can use the deployment name instead.
If the deployment is named hello-world, then you can do something like:
@Inject
@Named("hello-world")
Pod pod;
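Putting the pieces together, a complete test class could look like the sketch below; the exact packages of @KubernetesIntegrationTest, @Inject and @Named are not spelled out in this document, so the io.ap4k.testing imports are assumptions:

// Sketch only: the io.ap4k.testing.* import locations are assumptions.
import io.ap4k.testing.annotation.Inject;
import io.ap4k.testing.annotation.Named;
import io.ap4k.testing.kubernetes.annotation.KubernetesIntegrationTest;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

@KubernetesIntegrationTest
class HelloWorldIT {

  @Inject
  KubernetesClient client;

  @Inject
  @Named("hello-world")
  Pod pod;

  @Test
  void podShouldBeRunning() {
    // The extension deploys the generated manifests and injects the application pod.
    Assertions.assertEquals("Running", pod.getStatus().getPhase());
  }
}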
Note: It is highly recommended to also add maven-failsafe-plugin configuration so that integration tests only run in the integration-test phase.
This is important since in the test phase the application is not packaged. Here's an example of how you can configure the project:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>${version.maven-failsafe-plugin}</version>
<executions>
<execution>
<goals>
<goal>integration-test</goal>
<goal>verify</goal>
</goals>
<phase>integration-test</phase>
<configuration>
<includes>
<include>**/*IT.class</include>
</includes>
</configuration>
</execution>
</executions>
</plugin>
Similarly to the kubernetes junit extension, you can use the extension for OpenShift by adding @OpenshiftIntegrationTest. To use that you need to add:
<dependency>
<groupId>io.ap4k</groupId>
<artifactId>openshift-junit</artifactId>
<version>${project.version}</version>
</dependency>
By adding the annotation to your test class the following things will happen:
- The extension will check if a kubernetes cluster is available (if not tests will be skipped).
- A docker build will be triggered.
- All generated manifests will be applied.
- The extension will wait until the applied resources are ready.
- Dependencies will be injected (e.g. KubernetesClient, Pod etc.).
- The test will run.
- Applied resources will be removed.
- spring boot on openshift example
- spring boot with groovy on openshift example
- spring boot with gradle on openshift example
No matter how good a generator/scaffolding tool is, it's often desirable to handcraft part of the manifests. Other times it might be desirable to combine different tools together (e.g. generate the manifests using fmp but customize them via ap4k annotations).
No matter what the reason is, ap4k supports working on existing resources and decorating them based on the provided annotation configuration. This is as simple as letting ap4k know where to read the existing manifests and where to store the generated ones, by adding the @GeneratorOptions annotation.
The fabric8-maven-plugin can be used to package applications for kubernetes and openshift. It also supports generating manifests.
A user might choose to build images using fmp, but customize them using ap4k annotations instead of xml.
An example could be to expose an additional port:
This can be done by configuring ap4k to read the fmp-generated manifests from META-INF/fabric8, which is where fmp stores them, and save them back there once the decoration is done.
@GeneratorOptions(inputPath = "META-INF/fabric8", outputPath = "META-INF/fabric8")
@KubernetesApplication(ports = @Port(name="srv", containerPort=8181))
public class Main {
...
}
By all means please do! We love contributions! Docs, Bug fixes, New features ... everything is important!
Make sure you take a look at the contributor guidelines. Also, it can be useful to have a look at the ap4k design.