docs(security): updates content for securing kafka access #10071

Merged 2 commits on May 11, 2024
Changes from 1 commit
@@ -16,15 +16,14 @@ include::../../modules/deploying/proc-deploy-example-clients.adoc[leveloffset=+1]
include::../../modules/overview/con-configuration-points-listeners.adoc[leveloffset=+1]
//listener naming conventions
include::../../modules/overview/con-configuration-points-listener-names.adoc[leveloffset=+1]
//how to set up external clients that can access and use the deployment
include::../../modules/deploying/proc-deploy-setup-external-clients.adoc[leveloffset=+1]
//access through external listeners
include::../../modules/security/proc-accessing-kafka-using-nodeports.adoc[leveloffset=+1]
include::../../modules/security/proc-accessing-kafka-using-loadbalancers.adoc[leveloffset=+1]
//Kubernetes only
ifdef::Section[]
include::../../modules/security/proc-accessing-kafka-using-ingress.adoc[leveloffset=+1]
endif::Section[]
//openshift only
include::../../modules/security/proc-accessing-kafka-using-routes.adoc[leveloffset=+1]
//discover internal bootstrap service and Bridge
include::../../modules/deploying/con-service-discovery.adoc[leveloffset=+1]
11 changes: 0 additions & 11 deletions documentation/assemblies/oauth/assembly-oauth-authentication.adoc
@@ -7,17 +7,6 @@

[role="_abstract"]
Strimzi supports the use of link:https://oauth.net/2/[OAuth 2.0 authentication^] using the _OAUTHBEARER_ and _PLAIN_ mechanisms.

OAuth 2.0 enables standardized token-based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources.

Kafka brokers and clients both need to be configured to use OAuth 2.0.
You can configure OAuth 2.0 authentication, then xref:assembly-oauth-authorization_{context}[OAuth 2.0 authorization].

[NOTE]
====
OAuth 2.0 authentication can be used in conjunction with xref:con-securing-kafka-authorization-str[Kafka authorization].
====

Using OAuth 2.0 authentication, application clients can access resources on application servers (called _resource servers_) without exposing account credentials.

The application client passes an access token as a means of authenticating, which application servers can also use to determine the level of access to grant.
15 changes: 15 additions & 0 deletions documentation/assemblies/security/assembly-oauth-security.adoc
@@ -0,0 +1,15 @@
[id='assembly-oauth-security-{context}']
= Enabling OAuth 2.0 token-based access

[role="_abstract"]
Strimzi supports OAuth 2.0 for securing Kafka clusters by integrating with an OAuth 2.0 authorization server.
Kafka brokers and clients both need to be configured to use OAuth 2.0.

OAuth 2.0 enables standardized token-based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources.
You can define specific scopes for fine-grained access control.
Scopes correspond to different levels of access to Kafka topics or operations within the cluster.
OAuth 2.0 also supports single sign-on and integration with identity providers.
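As a rough sketch of what token-based listener security looks like in practice, a `Kafka` resource listener can delegate authentication to the authorization server. The issuer and JWKS endpoint URIs below are placeholders, not values from this documentation:

[source,yaml]
----
# Fragment of spec.kafka in a Kafka resource (illustrative only)
listeners:
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: oauth # validate client access tokens issued by the authorization server
      validIssuerUri: https://<auth-server-address>/realms/my-realm # placeholder
      jwksEndpointUri: https://<auth-server-address>/realms/my-realm/protocol/openid-connect/certs # placeholder
----

Clients then authenticate by presenting an access token obtained from the same authorization server.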

//oauth options
include::../oauth/assembly-oauth-authentication.adoc[leveloffset=+1]
include::../oauth/assembly-oauth-authorization.adoc[leveloffset=+1]
70 changes: 39 additions & 31 deletions documentation/assemblies/security/assembly-securing-access.adoc
@@ -3,48 +3,56 @@
// configuring/configuring.adoc

[id='assembly-securing-access-{context}']
= Securing access to Kafka
= Securing access to a Kafka cluster

[role="_abstract"]
Secure your Kafka cluster by managing the access a client has to Kafka brokers.
A secure connection between Kafka brokers and clients can encompass the following:
Secure connections by configuring Kafka and Kafka users.
Through configuration, you can implement encryption, authentication, and authorization mechanisms.

* Encryption for data exchange
* Authentication to prove identity
* Authorization to allow or decline actions executed by users
.Kafka configuration

In Strimzi, securing a connection involves configuring listeners and user accounts:
To establish secure access to Kafka, configure the `Kafka` resource to set up the following, based on your specific requirements:

Listener configuration:: Use the `Kafka` resource to configure listeners for client connections to Kafka brokers.
Listeners define how clients authenticate, such as using mTLS, SCRAM-SHA-512, OAuth 2.0, or custom authentication methods.
To enhance security, configure TLS encryption to secure communication between Kafka brokers and clients.
You can further secure TLS-based communication by specifying the supported TLS versions and cipher suites in the Kafka broker configuration.
+
For an added layer of protection, use the `Kafka` resource to specify authorization methods for the Kafka cluster, such as simple, OAuth 2.0, OPA, or custom authorization.
* Listeners with specified authentication types to define how clients authenticate
** TLS encryption for communication between Kafka and clients
** Supported TLS versions and cipher suites for additional security
* Authorization for the entire Kafka cluster
* Network policies for restricting access
* Super users for unconstrained access to brokers
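The configuration points above can be sketched in a minimal `Kafka` resource. This is an illustration only; names such as `my-cluster` and `CN=my-admin` are placeholders, and a real deployment needs the full cluster specification:

[source,yaml]
----
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster # placeholder cluster name
spec:
  kafka:
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true # TLS encryption for communication between Kafka and clients
        authentication:
          type: tls # mTLS authentication for clients on this listener
    authorization:
      type: simple # ACL-based authorization for the entire cluster
      superUsers:
        - CN=my-admin # super user with unconstrained access to brokers
    config:
      ssl.enabled.protocols: TLSv1.3 # restrict the supported TLS versions
    # ...
----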

User accounts:: Set up user accounts and credentials with `KafkaUser` resources in Strimzi.
Users represent your clients and determine how they should authenticate and authorize with the Kafka cluster.
The authentication and authorization mechanisms specified in the user configuration must match the Kafka configuration.
Additionally, define Access Control Lists (ACLs) to control user access to specific topics and actions for more fine-grained authorization.
To further enhance security, specify user quotas to limit client access to Kafka brokers based on byte rates or CPU utilization.
+
You can also add producer or consumer configuration to your clients if you wish to limit the TLS versions and cipher suites they use.
The configuration on the clients must only use protocols and cipher suites that are enabled on the broker.
Authentication is configured independently for each listener, while authorization is set up for the whole Kafka cluster.

NOTE: If you are using OAuth 2.0 to manage client access, user authentication and authorization credentials are managed through the authorization server.
For more information on access configuration for Kafka, see the link:{BookURLConfiguring}#type-Kafka-reference[`Kafka` schema reference^] and link:{BookURLConfiguring}#type-GenericKafkaListener-reference[`GenericKafkaListener` schema reference^].

Strimzi operators automate the configuration process and create the certificates required for authentication.
The Cluster Operator automatically sets up TLS certificates for data encryption and authentication within your cluster.
.User (client-side) configuration

//Config options for securing Kafka
include::assembly-securing-kafka-brokers.adoc[leveloffset=+1]
To enable secure client access to Kafka, configure `KafkaUser` resources.
These resources represent clients and determine how they authenticate and authorize with the Kafka cluster.

Configure the `KafkaUser` resource to set up the following, based on your specific requirements:

* Authentication that must match the enabled listener authentication
** Supported TLS versions and cipher suites that must match the Kafka configuration
* Authorization that must match the enabled Kafka authorization
* Access Control Lists (ACLs) for fine-grained control over user access to topics and actions
* Quotas to limit client access based on byte rates or CPU utilization
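A minimal `KafkaUser` sketch combining these configuration points might look as follows. The user, cluster, and topic names are placeholders, and the authentication and authorization types shown assume a matching `Kafka` configuration:

[source,yaml]
----
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user # placeholder user name
  labels:
    strimzi.io/cluster: my-cluster # ties the user to the Kafka cluster
spec:
  authentication:
    type: tls # must match the enabled listener authentication
  authorization:
    type: simple # must match the enabled Kafka authorization
    acls:
      - resource:
          type: topic
          name: my-topic # placeholder topic
        operations:
          - Read
          - Describe
  quotas:
    producerByteRate: 1048576 # limit producer throughput in bytes/second
    consumerByteRate: 2097152 # limit consumer throughput in bytes/second
----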

The User Operator creates the user representing the client and the security credentials used for client authentication, based on the chosen authentication type.

For more information on access configuration for users, see the link:{BookURLConfiguring}#type-KafkaUser-reference[`KafkaUser` schema reference^].

//listener authn config
include::../../modules/security/con-securing-kafka-authentication.adoc[leveloffset=+1]
include::../../modules/security/proc-restricting-access-to-listeners-using-network-policies.adoc[leveloffset=+2]
include::../../modules/security/proc-installing-certs-per-listener.adoc[leveloffset=+2]
include::../../modules/security/ref-alternative-subjects-certs-for-listeners.adoc[leveloffset=+2]

//Kafka authz config
include::../../modules/security/con-securing-kafka-authorization.adoc[leveloffset=+1]

//Config options for clients
include::assembly-securing-kafka-clients.adoc[leveloffset=+1]

//Config to secure access
include::assembly-securing-kafka.adoc[leveloffset=+1]
//client set up example with kafka and user config
include::../../modules/deploying/proc-deploy-setup-external-clients.adoc[leveloffset=+1]

//oauth options
include::../oauth/assembly-oauth-authentication.adoc[leveloffset=+1]
include::../oauth/assembly-oauth-authorization.adoc[leveloffset=+1]

This file was deleted.

@@ -3,15 +3,13 @@
// assembly-securing-access.adoc

[id='assembly-securing-kafka-clients-{context}']
= Security options for Kafka clients
= Configuring user (client-side) security mechanisms

[role="_abstract"]
Use the `KafkaUser` resource to configure the authentication mechanism, authorization mechanism, and access rights for Kafka clients.
In terms of configuring security, clients are represented as users.

You can authenticate and authorize user access to Kafka brokers.
Authentication permits access, and authorization constrains the access to permissible actions.
When configuring client-side security mechanisms, clients are represented as users.
Use the `KafkaUser` resource to configure the authentication, authorization, and access rights for Kafka clients.

Authentication permits user access, and authorization constrains user access to permissible actions.
You can also create _super users_ that have unconstrained access to Kafka brokers.

The authentication and authorization mechanisms must match the xref:proc-securing-kafka-{context}[specification for the listener used to access the Kafka brokers].
@@ -20,4 +18,5 @@ For more information on configuring a `KafkaUser` resource to access Kafka brokers

include::../../modules/security/con-securing-client-labels.adoc[leveloffset=+1]
include::../../modules/security/con-securing-client-authentication.adoc[leveloffset=+1]
include::../../modules/security/con-securing-client-authorization.adoc[leveloffset=+1]
include::../../modules/security/con-securing-client-authorization.adoc[leveloffset=+1]
include::../../modules/security/con-configuring-client-quotas.adoc[leveloffset=+1]
59 changes: 0 additions & 59 deletions documentation/assemblies/security/assembly-securing-kafka.adoc

This file was deleted.

2 changes: 2 additions & 0 deletions documentation/deploying/deploying.adoc
@@ -36,6 +36,8 @@ include::assemblies/operators/assembly-using-the-user-operator.adoc[leveloffset=+1]
include::assemblies/deploying/assembly-deploy-client-access.adoc[leveloffset=+1]
//Securing the deployment
include::assemblies/security/assembly-securing-access.adoc[leveloffset=+1]
//using OAuth
include::assemblies/security/assembly-oauth-security.adoc[leveloffset=+1]
//managing tls certificates
include::assemblies/security/assembly-security.adoc[leveloffset=+1]
//security context for all pods
10 changes: 4 additions & 6 deletions documentation/modules/deploying/con-service-discovery.adoc
@@ -3,20 +3,18 @@
// managing/assembly-management-tasks.adoc

[id='proc-add-service-discovery-{context}']
= Returning connection details for services
= Discovering connection details for clients

[role="_abstract"]
Service discovery makes it easier for client applications running in the same Kubernetes cluster as Strimzi to interact with a Kafka cluster.

A _service discovery_ label and annotation are generated for services used to access the Kafka cluster:
A service discovery label and annotation are created for the following services:

* Internal Kafka bootstrap service
* Kafka Bridge service

The label helps to make the service discoverable, while the annotation provides connection details for client applications to establish connections.

The service discovery label, `strimzi.io/discovery`, is set as `true` for the `Service` resources.
The service discovery annotation has the same key, providing connection details in JSON format for each service.
Service discovery label:: The service discovery label, `strimzi.io/discovery`, is set to `true` for `Service` resources to make them discoverable for client connections.
Service discovery annotation:: The service discovery annotation provides connection details in JSON format for each service for client applications to use to establish connections.

.Example internal Kafka bootstrap service
[source,yaml,subs="attributes+"]
24 changes: 17 additions & 7 deletions documentation/modules/deploying/proc-deploy-example-clients.adoc
@@ -7,8 +7,10 @@
= Deploying example clients

[role="_abstract"]
Deploy example producer and consumer clients to send and receive messages.
You can use these clients to verify a deployment of Strimzi.
Send and receive messages from a Kafka cluster installed on Kubernetes.

This procedure describes how to deploy Kafka clients to the Kubernetes cluster, then produce and consume messages to test your installation.
The clients are deployed using the Kafka container image.

.Prerequisites

@@ -18,16 +20,24 @@ You can use these clients to verify a deployment of Strimzi.

. Deploy a Kafka producer.
+
[source,shell,subs="+quotes,attributes+"]
kubectl run kafka-producer -ti --image={DockerKafka} --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server _cluster-name_-kafka-bootstrap:9092 --topic _my-topic_
This example deploys a Kafka producer that connects to the Kafka cluster `my-cluster`.
+
A topic named `my-topic` is created.
+
.Deploying a Kafka producer to Kubernetes
[source,shell,subs="+attributes"]
kubectl run kafka-producer -ti --image={DockerKafka} --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic

. Type a message into the console where the producer is running.

. Press _Enter_ to send the message.
. Press *Enter* to send the message.

. Deploy a Kafka consumer.
+
[source,shell,subs="+quotes,attributes+"]
kubectl run kafka-consumer -ti --image={DockerKafka} --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server _cluster-name_-kafka-bootstrap:9092 --topic _my-topic_ --from-beginning
The consumer should consume messages produced to `my-topic` in the Kafka cluster `my-cluster`.
+
.Deploying a Kafka consumer to Kubernetes
[source,shell,subs="+attributes"]
kubectl run kafka-consumer -ti --image={DockerKafka} --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

. Confirm that you see the incoming messages in the consumer console.