
Commit

Readme update
mirostary committed May 5, 2021
1 parent 7f8c704 commit 249084f
Showing 2 changed files with 31 additions and 15 deletions.
32 changes: 21 additions & 11 deletions README.md
@@ -528,27 +528,37 @@ Container images used in the tests are:

### `infinispan-client`

Verifies how a cache is shared via the Datagrid operator and an Infinispan cluster, and checks data consistency after failures.

#### Prerequisites
- Datagrid operator installed in the `datagrid-operator` namespace. Installing it requires cluster-admin rights.
- The operator supports only a single namespace, so it has to watch another well-known namespace, `datagrid-cluster`.
This namespace must be created by the "qe" user, or that user must have access to it, because the tests connect to it.
- These namespaces should be prepared after the OpenShift installation (a sketch of possible commands follows this list) - see [Installing Data Grid Operator](https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html/running_data_grid_on_openshift/installation)
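
For illustration only, a cluster admin might prepare these namespaces roughly as follows; the exact role binding for the "qe" user is an assumption and depends on the cluster setup:

```
oc new-project datagrid-operator
oc new-project datagrid-cluster
# assumed role binding so the "qe" user can work in the cluster namespace; adjust as needed
oc adm policy add-role-to-user edit qe -n datagrid-cluster
```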

The test suite contains a Maven profile that is activated with the `include.datagrid` property.
To execute the Datagrid tests, add the following switch to the Maven command:

```
-Dinclude.datagrid
```
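
For example, a complete invocation might look like this (the `clean verify` goals are an assumption about how the suite is normally run):

```
mvn clean verify -Dinclude.datagrid
```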

Tests create an Infinispan cluster in the `datagrid-cluster` namespace. The cluster is created before the tests from `infinispan_cluster_config.yaml`.
To allow parallel test runs, this cluster is renamed for every run - along with the configmap `infinispan-config`. The configmap contains
the configuration property `quarkus.infinispan-client.server-list`. The value of this property is the path to the Infinispan cluster from the test namespace,
and its structure is `infinispan-cluster-name.datagrid-cluster-namespace.svc.cluster.local:11222`. Because the test suite uses dynamically generated
namespaces for tests, this path is needed for the tests to find the Infinispan cluster in another namespace.
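
For illustration, the property stored in the `infinispan-config` configmap could then look like this; the cluster name `test-infinispan-cluster` is a made-up placeholder:

```
quarkus.infinispan-client.server-list=test-infinispan-cluster.datagrid-cluster.svc.cluster.local:11222
```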

The Infinispan cluster needs two special secrets - `tls-secret` with the TLS certificate and `connect-secret` with the credentials.
The TLS certificate is a substitute for `secrets/signing-key` in the `openshift-service-ca` namespace, which the "qe" user cannot use (it doesn't have rights on it).
The clientcert secret is generated for "qe" from the `tls-secret` mentioned above.

Infinispan client tests use the cache directly with `@Inject` and `@RemoteCache`. Through one JAX-RS endpoint we send data into the cache, and we retrieve it through another JAX-RS endpoint.
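
A minimal sketch of this pattern; the cache name `mycache`, the endpoint paths, and the use of the `@Remote` qualifier to select the named cache are assumptions, not the suite's actual code:

```
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

import org.infinispan.client.hotrod.RemoteCache;
import io.quarkus.infinispan.client.Remote;

@Path("/cache")
public class CacheResource {

    // Named remote cache served by the Datagrid cluster; "mycache" is a placeholder.
    @Inject
    @Remote("mycache")
    RemoteCache<String, String> cache;

    // One endpoint stores data in the shared cache...
    @PUT
    @Path("/{key}/{value}")
    public void put(@PathParam("key") String key, @PathParam("value") String value) {
        cache.put(key, value);
    }

    // ...and another endpoint reads it back, possibly after a pod restart.
    @GET
    @Path("/{key}")
    public String get(@PathParam("key") String key) {
        return cache.get(key);
    }
}
```
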
The next tests check a simple fail-over - first the client (application) fails, then the Infinispan cluster (cache) fails. The tests kill first the Quarkus pod, then the Infinispan cluster pod, and then check the data.
For the Quarkus application pod, killing uses the same approach as in the configmap tests. For the Infinispan cluster, the pod is killed by updating the cluster's YAML snippet and applying it with zero replicas.
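
As a sketch only, scaling the cluster down through its custom resource could look roughly like this; the cluster name is a placeholder and the suite's actual snippet may differ:

```
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: test-infinispan-cluster
  namespace: datagrid-cluster
spec:
  # zero replicas shuts down the cluster pods, simulating the cache failure
  replicas: 0
```
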
By default, when the Infinispan server is down and the application can't open a connection, it tries to connect again, up to 10 times (`max_retries`), and gives up after 60s (`connect_timeout`).
Because of that we use a `hotrod-client.properties` file in which `max_retries` and `connect_timeout` are reduced. Without it, the application would keep trying to connect to the Infinispan server for the next 10 minutes, and the incremented number could appear later.
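
A sketch of such a `hotrod-client.properties`; the values below are illustrative, not the suite's real settings:

```
# fail fast while the Infinispan server is down
infinispan.client.hotrod.connect_timeout=5000
infinispan.client.hotrod.max_retries=1
```
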
The last three tests cover multiple clients accessing the cache. For these tests we simulate a second client by deploying a second deployment config, Service, and Route, copied from the `openshift.yml` file.

### `security/basic`

14 changes: 10 additions & 4 deletions infinispan-client/src/main/resources/application.properties
@@ -1,25 +1,31 @@
quarkus.application.name=infinispan-client

# Auth info
quarkus.infinispan-client.auth-realm=default
quarkus.infinispan-client.auth-username=qe
quarkus.infinispan-client.auth-password=qe
quarkus.infinispan-client.sasl-mechanism=PLAIN
quarkus.infinispan-client.client-intelligence=BASIC

# TODO: remove workaround for https://github.com/quarkusio/quarkus/issues/14525
quarkus.openshift.env.vars.smallrye-config-locations=/deployments/config

# Where the app can read the trust store from when it runs
quarkus.infinispan-client.trust-store=/mnt/clientcerts

# trust store password
quarkus.infinispan-client.trust-store-password=password

# trust store type
quarkus.infinispan-client.trust-store-type=JKS

# which secret to mount, and where to mount it
quarkus.openshift.mounts.my-volume.path=/mnt
quarkus.openshift.secret-volumes.my-volume.secret-name=clientcerts

# expose the application via an OpenShift route and set the base JVM image for the S2I build
quarkus.openshift.expose=true
quarkus.s2i.base-jvm-image=registry.access.redhat.com/ubi8/openjdk-11:latest

# configmap settings
quarkus.openshift.config-map-volumes.app-config.config-map-name=infinispan-config
quarkus.openshift.mounts.app-config.path=/deployments/config
