Commit c1ffe66
Update of infinispan-client module and adding tests for multiple client access
mirostary committed May 5, 2021
1 parent 7bd3cb1 · commit c1ffe66
Showing 7 changed files with 496 additions and 218 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -22,7 +22,7 @@ jobs:
       with:
         java-version: openjdk${{ matrix.java }}
     - name: Build with Maven
-      run: mvn -fae -V -B clean test -Dinclude.serverless
+      run: mvn -fae -V -B clean test -Dinclude.serverless -Dinclude.datagrid
     - name: Zip Artifacts
       run: |
         zip -R artifacts-jvm${{ matrix.java }}.zip 'surefire-reports/*'
32 changes: 21 additions & 11 deletions README.md
@@ -528,27 +528,37 @@ Container images used in the tests are:

### `infinispan-client`

-Verifies the way of sharing cache with Datagrid operator.
+Verifies cache sharing via the Datagrid operator and an Infinispan cluster, and data consistency after failures.

-#### Prerequisities
+#### Prerequisites
 - Datagrid operator installed in the `datagrid-operator` namespace. This needs cluster-admin rights to install.
 - The operator supports only a single namespace, so it has to watch another well-known namespace, `datagrid-cluster`.
-This namespace must be created by "qe" user or this user must have access to it because infinispan tests are connecting to it.
+This namespace must be created by the "qe" user, or this user must have access to it, because the tests connect to it.
 - These namespaces should be prepared after the OpenShift installation - see [Installing Data Grid Operator](https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html/running_data_grid_on_openshift/installation)

-Tests are creating an infinispan cluster in the `datagrid-cluster` namespace. Cluster is created before tests by `infinispan_cluster_config.yaml`.
-To allow parallel runs of tests this cluster must be renamed for every test run - along with configmap `infinispan-config`. The configmap contains
+The test suite contains a Maven profile activated by the `include.datagrid` property.
+To execute Datagrid tests, use the following switch in the Maven command:
+
+```
+-Dinclude.datagrid
+```
+
+Tests create an Infinispan cluster in the `datagrid-cluster` namespace. The cluster is created before the tests by `infinispan_cluster_config.yaml`.
+To allow parallel test runs, this cluster is renamed for every test run, along with the configmap `infinispan-config`. The configmap contains
 configuration property `quarkus.infinispan-client.server-list`. The value of this property is the path to the Infinispan cluster from the test namespace;
-its structure is `infinispan-cluster-name.datagrid-cluster-namespace.svc.cluster.local:11222`. It is because testsuite using dynamically generated
-namespaces for tests. So this path is needed for tests to find infinispan server.
+its structure is `infinispan-cluster-name.datagrid-cluster-namespace.svc.cluster.local:11222`. This is because the test suite uses dynamically generated
+namespaces for tests, so this path is needed for the tests to find the Infinispan cluster in another namespace.
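
For illustration, if the dynamically generated test namespace were `my-test-ns` (a hypothetical name; the test code below names the cluster `<namespace>-infinispan-cluster`), the configmap entry would look like:

```
quarkus.infinispan-client.server-list=my-test-ns-infinispan-cluster.datagrid-cluster.svc.cluster.local:11222
```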

-The infinispan cluster needs 2 special secrets - tls-secret with TLS certificate and connect-secret with the credentials.
+The Infinispan cluster needs two special secrets: tls-secret with the TLS certificate and connect-secret with the credentials.
 The TLS certificate is a substitute for `secrets/signing-key` in the openshift-service-ca namespace, which the "qe" user cannot use (it doesn't have rights to it).
 The clientcert secret is generated for "qe" from the tls-secret mentioned above.

-Infinispan client test are using the cache directly with @Inject and @RemoteCache. Through the JAX-RS endpoint, we send data into the cache and retrieve it through another JAX-RS endpoint.
-The next tests are checking a simple fail-over - first client (application) fail, then Infinispan cluster (cache) fail. Tests kill either the Quarkus pod or Infinispan cluster pod, then wait for redeployment, and check data.
-For the Quarkus application pod killing is used the same approach as in configmap tests.
+Infinispan client tests use the cache directly with `@Inject` and `@RemoteCache`. We send data into the cache through one JAX-RS endpoint and retrieve it through another.
+The next tests check a simple fail-over - first the client (application) fails, then the Infinispan cluster (cache) fails. The tests kill the Quarkus pod first, then the Infinispan cluster pod, and then check the data.
+Killing the Quarkus application pod uses the same approach as the configmap tests. The Infinispan cluster pod is killed by updating its YAML snippet and applying it with zero replicas.
+By default, when the Infinispan server is down and the application can't open a connection, it retries the connection up to 10 times (max_retries) and gives up after 60 s (connect_timeout).
+Because of that, we use a `hotrod-client.properties` file in which max_retries and connect_timeout are reduced. Without it, the application would keep trying to connect to the Infinispan server for the next 10 minutes, and the incremented number could appear only later.
+The last three tests cover multiple clients accessing the cache. For these tests we simulate a second client by deploying a second DeploymentConfig, Service, and Route, copied from the `openshift.yml` file.
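
The reduced values come from the new `hotrod-client.properties` file added in this commit:

```
infinispan.client.hotrod.connect_timeout=1000
infinispan.client.hotrod.max_retries=0
```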

### `security/basic`

@@ -29,17 +29,18 @@ public Integer getCacheCounter() {
@Path("/get-client")
@GET
@Produces(MediaType.TEXT_PLAIN)
public int getClientCounter() {
public Integer getClientCounter() {
return counter.get();
}

@Path("/increment-counters")
@PUT
@Produces(MediaType.TEXT_PLAIN)
public String incCounters() {
int invocationNumber = counter.incrementAndGet();
cache.put("counter", cache.get("counter") + 1);
return "Cache=" + cache.get("counter") + " Client=" + invocationNumber;
int invocationClientNumber = counter.incrementAndGet();
int invocationCacheNumber = cache.get("counter") + 1;
cache.put("counter", invocationCacheNumber);
return "Cache=" + invocationCacheNumber + " Client=" + invocationClientNumber;
}

@Path("/reset-cache")
Expand Down
@@ -0,0 +1,2 @@
infinispan.client.hotrod.connect_timeout=1000
infinispan.client.hotrod.max_retries=0
@@ -0,0 +1,268 @@
package io.quarkus.ts.openshift.infinispan.client;

import io.fabric8.kubernetes.api.model.HasMetadata;
import io.fabric8.kubernetes.api.model.KubernetesList;
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.client.utils.Serialization;
import io.fabric8.openshift.api.model.DeploymentConfig;
import io.fabric8.openshift.api.model.Route;
import io.fabric8.openshift.client.OpenShiftClient;
import io.quarkus.ts.openshift.app.metadata.AppMetadata;
import io.quarkus.ts.openshift.common.Command;
import io.quarkus.ts.openshift.common.CustomizeApplicationDeployment;
import io.quarkus.ts.openshift.common.injection.TestResource;
import io.quarkus.ts.openshift.common.util.AwaitUtil;
import io.quarkus.ts.openshift.common.util.OpenShiftUtil;
import org.junit.jupiter.api.AfterAll;

import java.io.IOException;
import java.net.URL;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.concurrent.TimeUnit;

import static io.restassured.RestAssured.when;
import static org.awaitility.Awaitility.await;
import static org.hamcrest.CoreMatchers.is;

public abstract class AbstractInfinispanResourceTest {
protected static final String ORIGIN_CLUSTER_NAME = "totally-random-infinispan-cluster-name";
protected static final String CLUSTER_CONFIG_PATH = "target/test-classes/infinispan_cluster_config.yaml";
protected static final String CLUSTER_CONFIGMAP_PATH = "target/test-classes/infinispan_cluster_configmap.yaml";
protected static final String CONNECT_SECRET = "target/test-classes/connect_secret.yaml";
protected static final String TLS_SECRET = "target/test-classes/tls_secret.yaml";

protected static final String CLUSTER_NAMESPACE_NAME = "datagrid-cluster";
protected static final String SECOND_CLIENT_APPLICATION_NAME = "another-infinispan-client";
protected static final String SECOND_CLIENT_DEPLOYMENT_CONFIG = "target/test-classes/deployment_config_second_client.yaml";
protected static String NEW_CLUSTER_NAME = null;

@TestResource
protected AppMetadata metadata;

@TestResource
protected OpenShiftUtil openshift;

@TestResource
protected AwaitUtil await;

@TestResource
protected URL appUrl;

/**
* Application deployment is performed by the Quarkus Kubernetes extension during test execution.
* This method creates an Infinispan cluster and its secrets, sets the application's path to it, and deploys the second application.
*
* @param oc the OpenShift client
* @param metadata the tested application's metadata
* @throws IOException
* @throws InterruptedException
*/
@CustomizeApplicationDeployment
public static void deploy(OpenShiftClient oc, AppMetadata metadata) throws IOException, InterruptedException {
new Command("oc", "apply", "-f", CONNECT_SECRET).runAndWait();
new Command("oc", "apply", "-f", TLS_SECRET).runAndWait();

// every created Infinispan cluster needs a unique name so that test runs can execute in parallel
NEW_CLUSTER_NAME = oc.getNamespace() + "-infinispan-cluster";

// rename infinispan cluster and configmap
adjustYml(CLUSTER_CONFIG_PATH, ORIGIN_CLUSTER_NAME, NEW_CLUSTER_NAME);
adjustYml(CLUSTER_CONFIGMAP_PATH, ORIGIN_CLUSTER_NAME, NEW_CLUSTER_NAME);

new Command("oc", "apply", "-f", CLUSTER_CONFIGMAP_PATH).runAndWait();
new Command("oc", "apply", "-f", CLUSTER_CONFIG_PATH).runAndWait();

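// wait until the renamed Infinispan cluster is up (condition wellFormed)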
new Command("oc", "-n", CLUSTER_NAMESPACE_NAME, "wait", "--for", "condition=wellFormed", "--timeout=300s", "infinispan/" + NEW_CLUSTER_NAME).runAndWait();

deploySecondInfinispanClient(oc, metadata);
}

// Undeploys the second application and the Infinispan cluster
@AfterAll
public static void undeploy() throws IOException, InterruptedException {
new Command("oc", "delete", "-f", SECOND_CLIENT_DEPLOYMENT_CONFIG).runAndWait();
new Command("oc", "delete", "-f", CLUSTER_CONFIGMAP_PATH).runAndWait();
new Command("oc", "delete", "-f", CLUSTER_CONFIG_PATH).runAndWait();
}

/**
* This method copies the 'openshift.yml' file, changes its name, labels, etc., and deploys it as a second application in OCP.
* Only the DeploymentConfig, Service, and Route are needed for that.
*
* @param oc the OpenShift client
* @param metadata the tested application's metadata
* @throws IOException
* @throws InterruptedException
*/
public static void deploySecondInfinispanClient(OpenShiftClient oc, AppMetadata metadata) throws IOException, InterruptedException {
List<HasMetadata> objs = oc.load(Files.newInputStream(Paths.get("target/kubernetes/openshift.yml"))).get();
List<HasMetadata> necessary_objects = new ArrayList<>();

HashMap<String, String> change = new HashMap<>();
change.put("app.kubernetes.io/name", SECOND_CLIENT_APPLICATION_NAME);

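// keep only the DeploymentConfig, Service, and Route of the original application, renamed for the second client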
for (HasMetadata obj : objs) {
if (obj.getMetadata().getName().equals(metadata.appName)) {
if (obj instanceof DeploymentConfig) {
DeploymentConfig dc = (DeploymentConfig) obj;
dc.getMetadata().setName(SECOND_CLIENT_APPLICATION_NAME);
dc.getMetadata().setLabels(change);
dc.getSpec().setSelector(change);
dc.getSpec().getTemplate().getMetadata().setLabels(change);
necessaryObjects.add(dc);
}

if (obj instanceof Service) {
Service service = (Service) obj;
service.getMetadata().setName(SECOND_CLIENT_APPLICATION_NAME);
service.getSpec().setSelector(change);
necessaryObjects.add(service);
}

if (obj instanceof Route) {
Route route = (Route) obj;
route.getMetadata().setName(SECOND_CLIENT_APPLICATION_NAME);
route.getSpec().getTo().setName(SECOND_CLIENT_APPLICATION_NAME);
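// clear the generated host and path so OpenShift assigns fresh ones to the second route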
route.getSpec().setHost("");
route.getSpec().setPath("");
necessaryObjects.add(route);
}
}
}

KubernetesList list = new KubernetesList();
list.setItems(necessaryObjects);
Serialization.yamlMapper().writeValue(Files.newOutputStream(Paths.get(SECOND_CLIENT_DEPLOYMENT_CONFIG)), list);

new Command("oc", "apply", "-f", SECOND_CLIENT_DEPLOYMENT_CONFIG).runAndWait();
}

/**
* Sets the cache counter value to 0 via the provided client URL.
* At the end, the response is checked to verify that the value is actually 0.
*
* @param url the endpoint URL
*/
public void resetCacheCounter(String url) {
await().atMost(5, TimeUnit.MINUTES).untilAsserted(() -> {
when()
.put(url)
.then()
.body(is("Cache=0"));
});
}

/**
* Sets the client's atomic integer counter to 0 via the provided client URL.
* At the end, the response is checked to verify that the counter value is actually 0.
*
* @param url the endpoint URL
*/
public void resetClientCounter(String url) {
await().atMost(5, TimeUnit.MINUTES).untilAsserted(() -> {
when()
.put(url)
.then()
.body(is("Client=0"));
});
}

/**
* Gets the value of either the cache or the client counter from the provided URL.
* Only the returned status code is verified.
*
* @param url the endpoint URL
* @return endpoint value as String
*/
public String getCounterValue(String url) {
return when()
.get(url)
.then().statusCode(200)
.extract().asString();
}

/**
* Increases the cache and client counters by 1 via the provided URL.
*
* @param url the endpoint URL
* @return the increased endpoint value as String
*/
public String fillTheCache(String url) {
return when()
.put(url)
.then().statusCode(200)
.extract().asString();
}

/**
* Increases the cache and client counters by the provided count via the provided URL.
*
* @param url the endpoint URL
* @param count the number of increments
*/
public void incrementCountersOnValue(String url, int count) {
for (int i = 1; i <= count; i++) {
when()
.put(url)
.then()
.statusCode(200);
}
}

/**
* Reduces the number of Infinispan cluster replicas to 0 and waits for the gracefulShutdown condition. This is done by changing
* the YAML file in the target/test-classes directory.
*
* @throws IOException
* @throws InterruptedException
*/
public void killInfinispanCluster() throws IOException, InterruptedException {
adjustYml(CLUSTER_CONFIG_PATH, "replicas: 1", "replicas: 0");
new Command("oc", "apply", "-f", CLUSTER_CONFIG_PATH).runAndWait();
new Command("oc", "-n", CLUSTER_NAMESPACE_NAME, "wait", "--for", "condition=gracefulShutdown", "--timeout=300s", "infinispan/" + NEW_CLUSTER_NAME).runAndWait();
}

/**
* Increases the number of replicas back to 1 the same way as the "killInfinispanCluster()" method. The wait command
* expects the "wellFormed" condition in the Infinispan cluster status.
*
* @throws IOException
* @throws InterruptedException
*/
public void restartInfinispanCluster() throws IOException, InterruptedException {
adjustYml(CLUSTER_CONFIG_PATH, "replicas: 0", "replicas: 1");
new Command("oc", "apply", "-f", CLUSTER_CONFIG_PATH).runAndWait();
new Command("oc", "-n", CLUSTER_NAMESPACE_NAME, "wait", "--for", "condition=wellFormed", "--timeout=360s", "infinispan/" + NEW_CLUSTER_NAME).runAndWait();
}

/**
* Replaces values in the provided YAML file.
*
* @param path the path to the YAML file
* @param originString the string to be replaced
* @param newString the replacement string
* @throws IOException
*/
public static void adjustYml(String path, String originString, String newString) throws IOException {
Path yamlPath = Paths.get(path);
Charset charset = StandardCharsets.UTF_8;

String yamlContent = new String(Files.readAllBytes(yamlPath), charset);
yamlContent = yamlContent.replace(originString, newString);
Files.write(yamlPath, yamlContent.getBytes(charset));
}
}