
(refactor)network-chaos: upgrade pumba specs to use iproute2 containers for network chaos #991

Merged (4 commits into litmuschaos:master, Dec 12, 2019)
Conversation

@ksatchit (Member) commented Dec 10, 2019

Signed-off-by: ksatchit [email protected]

What this PR does / why we need it:

  • It was observed that many base docker images ship without tc (the Linux traffic shaper utility), which pumba requires to inject network delays on the container.

  • Pumba offers an alpine-based image with iproute2 (the source package for tc), which the main pumba container/pod can run on the host with NET_ADMIN capability, using the target container's network stack to induce netem-based chaos (delay/loss/corruption/jitter).

  • With the pumba version currently used by litmus (0.4.8), the iproute2 image is expected to be pre-pulled on the hosts rather than pulled at runtime. There are also issues with unclean removal/cleanup of these containers post chaos; operating in this mode would force us to run docker commands before and after chaos, which is undesirable. These issues are fixed in 0.6+ versions of pumba.

  • However, pumba 0.6+ is based on scratch base images without a shell, docker-entrypoint, etc., so the chaos params need to be burned into the Kubernetes spec/artifact prior to execution. In this case, we can no longer use the existing workflows based on kubectl exec operations for chaos after starting pumba in --dry-run mode.

  • This PR modifies the network chaos lib to identify the node on which the target container resides, after which a pumba (0.6.5) job is constructed and run for a period of TOTAL_CHAOS_DURATION with its nodeSelector set to the derived node, thereby inducing delay/loss for the specified period, after which the job ends.
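For illustration, the constructed job would look roughly like the following. This is a hedged sketch, not the manifest from this PR: the labels, env wiring, and the container-name regex are assumptions, and the netem arguments follow the pumba 0.6.x CLI.

```yaml
# Illustrative pumba netem Job sketch (values are assumptions, not
# copied from this PR). The docker socket mount lets pumba reach the
# host's docker daemon to attach the iproute2 sidecar to the target.
apiVersion: batch/v1
kind: Job
metadata:
  name: pumba-netem-{{ run_id }}
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: "{{ app_node }}"  # node derived from the target pod
      restartPolicy: Never
      containers:
      - name: pumba
        image: gaiaadm/pumba:0.6.5
        args:
        - netem
        - --tc-image
        - gaiadocker/iproute2            # pulled at runtime; provides tc
        - --duration
        - "{{ total_chaos_duration }}s"
        - loss
        - --percent
        - "100"
        - "re2:k8s_.*{{ app_pod }}"      # hypothetical container-name regex
        volumeMounts:
        - name: dockersocket
          mountPath: /var/run/docker.sock
      volumes:
      - name: dockersocket
        hostPath:
          path: /var/run/docker.sock
```

Since the duration is baked into the job args, the job runs for the chaos period and then completes on its own, which is what the "wait until the pumba netem job is completed" step in the log below polls for.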

Which issue this PR fixes (optional): partially fulfills #969 (container-kill experiments continue to use pumba 0.4.8)

Checklist

  • Does this PR have a corresponding GitHub issue?
  • Have you included relevant README for the chaoslib/experiment with details?
  • Have you added debug messages where necessary?
  • Have you added task comments where necessary?
  • Have you tested the changes for possible failure conditions?
  • Have you provided the positive & negative test logs for the litmusbook execution?
  • Does the litmusbook ensure idempotency of cluster state, i.e., is the cluster restored to its original state?
  • Have you used non-shell/command modules for Kubernetes tasks?
  • Have you (jinja) templatized custom scripts that are run by the litmusbook, if any?
  • Have you (jinja) templatized Kubernetes deployment manifests used by the litmusbook, if any?
  • Have you reused/created util functions instead of repeating tasks in the litmusbook?
  • Do the artifacts follow the appropriate directory structure?
  • Have you isolated storage (eg: OpenEBS) specific implementations, checks?
  • Have you isolated platform (eg: baremetal kubeadm/openshift/aws/gcloud) specific implementations, checks?
  • Are the ansible facts well defined? Is the scope explicitly set for playbook & included utils?
  • Have you ensured minimum/careful usage of shell utilities (awk, grep, sed, cut, xargs, etc.)?
  • Can the litmusbook be executed both from within & outside a container (configurable paths, no hardcoding)?
  • Can you suggest the minimal resource requirements for the litmusbook execution?
  • Does the litmusbook job artifact carry comments/default options/range for the ENV tunables?
  • Have the litmusbooks been linted?

Special notes for your reviewer:

The PR introduces a few other changes that can be reused in other experiments.

  • Jinja templates for dependent deploys are constructed with inbuilt conditionals.
  • Dependent deploys in an experiment are created with a random instance id to facilitate parallel runs/cleanup.
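A sketch of what such an inbuilt conditional might look like; the variable names (n_type, n_latency, n_loss_percent) are hypothetical, not taken from the actual templates:

```yaml
# Hypothetical Jinja fragment for a netem job template: a single
# template serves both delay and loss experiments via a conditional.
        args:
        - netem
        - --duration
        - "{{ c_duration }}s"
{% if n_type == 'delay' %}
        - delay
        - --time
        - "{{ n_latency }}"
{% elif n_type == 'loss' %}
        - loss
        - --percent
        - "{{ n_loss_percent }}"
{% endif %}
```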

Note:

  • Changes were tested against a busybox container on which chaos previously failed to take effect.
  • The network experiments still do not have chaos verification steps; these will be added in subsequent PRs.
  • Following the Ansible docs, "is not defined" is used for negative variable checks instead of "not var", which fails with the Ansible version currently in the litmuschaos ansible-runner.
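For reference, the difference between the two negative checks noted above can be sketched as follows (the fact name matches the task seen in the log; the random filter value is illustrative):

```yaml
# "when: not run_id" errors out on this Ansible version when run_id was
# never supplied; the "defined" test handles the undefined case cleanly.
- name: Generate a run id if not passed from the engine/experiment
  set_fact:
    run_id: "{{ 100000 | random }}"
  when: run_id is not defined
```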

@ksatchit (Member, Author) commented Dec 10, 2019

ansible-playbook 2.7.3
  config file = None
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 2.7.15+ (default, Oct  7 2019, 17:39:04) [GCC 7.4.0]
No config file found; using defaults
/etc/ansible/hosts did not meet host_list requirements, check plugin documentation if this is unexpected
/etc/ansible/hosts did not meet script requirements, check plugin documentation if this is unexpected
statically imported: /experiments/generic/pod_network_loss/pod_network_loss_ansible_prerequisites.yml

PLAYBOOK: pod_network_loss_ansible_logic.yml ***********************************
1 plays in ./experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml

PLAY [localhost] ***************************************************************
2019-12-11T10:15:52.655589 (delta: 0.093573)         elapsed: 0.093573 ******** 
=============================================================================== 

TASK [Gathering Facts] *********************************************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:2
2019-12-11T10:15:52.689524 (delta: 0.033889)         elapsed: 0.127508 ******** 
ok: [127.0.0.1]
META: ran handlers

TASK [Identify the chaos util to be invoked] ***********************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_prerequisites.yml:1
2019-12-11T10:15:59.522073 (delta: 6.832491)         elapsed: 6.960057 ******** 
changed: [127.0.0.1] => {"changed": true, "checksum": "204b3153742f82e7fb32f45eb3dc4c243d285268", "dest": "./chaosutil.yml", "gid": 0, "group": "root", "md5sum": "52d94379e07980ee12f9a04773a75a04", "mode": "0644", "owner": "root", "size": 59, "src": "/root/.ansible/tmp/ansible-tmp-1576059359.56-10005775119167/source", "state": "file", "uid": 0}

TASK [include_vars] ************************************************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:23
2019-12-11T10:16:00.172008 (delta: 0.649896)         elapsed: 7.609992 ******** 
ok: [127.0.0.1] => {"ansible_facts": {"c_util": "/chaoslib/pumba/network_chaos/network_chaos.yml"}, "ansible_included_var_files": ["/experiments/generic/pod_network_loss/chaosutil.yml"], "changed": false}

TASK [Construct chaos result name (experiment_name)] ***************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:28
2019-12-11T10:16:00.301673 (delta: 0.129609)         elapsed: 7.739657 ******** 
ok: [127.0.0.1] => {"ansible_facts": {"c_experiment": "engine-pod-network-loss"}, "changed": false}

TASK [include_tasks] ***********************************************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:34
2019-12-11T10:16:00.466204 (delta: 0.164481)         elapsed: 7.904188 ******** 
included: /utils/runtime/update_chaos_result_resource.yml for 127.0.0.1

TASK [Generate the chaos result CR to reflect SOT (Start of Test)] *************
task path: /utils/runtime/update_chaos_result_resource.yml:3
2019-12-11T10:16:00.601527 (delta: 0.135252)         elapsed: 8.039511 ******** 
changed: [127.0.0.1] => {"changed": true, "checksum": "415481d1f5803fbc42e27a5135f4055aa8045039", "dest": "./chaos-result.yaml", "gid": 0, "group": "root", "md5sum": "9c4c60ad5fffbd3f9fab63ad689f61e2", "mode": "0644", "owner": "root", "size": 312, "src": "/root/.ansible/tmp/ansible-tmp-1576059360.69-232906942191732/source", "state": "file", "uid": 0}

TASK [Apply the chaos result CR] ***********************************************
task path: /utils/runtime/update_chaos_result_resource.yml:13
2019-12-11T10:16:01.346375 (delta: 0.744781)         elapsed: 8.784359 ******** 
changed: [127.0.0.1] => {"changed": true, "cmd": "kubectl apply -f chaos-result.yaml -n default", "delta": "0:00:01.224398", "end": "2019-12-11 10:16:03.069639", "failed_when_result": false, "rc": 0, "start": "2019-12-11 10:16:01.845241", "stderr": "", "stderr_lines": [], "stdout": "chaosresult.litmuschaos.io/engine-pod-network-loss configured", "stdout_lines": ["chaosresult.litmuschaos.io/engine-pod-network-loss configured"]}

TASK [Update the chaos result CR to reflect EOT (End of Test)] *****************
task path: /utils/runtime/update_chaos_result_resource.yml:23
2019-12-11T10:16:03.121962 (delta: 1.775515)         elapsed: 10.559946 ******* 
skipping: [127.0.0.1] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [Apply the chaos result CR] ***********************************************
task path: /utils/runtime/update_chaos_result_resource.yml:33
2019-12-11T10:16:03.165736 (delta: 0.043713)         elapsed: 10.60372 ******** 
skipping: [127.0.0.1] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [Verify that the AUT (Application Under Test) is running] *****************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:40
2019-12-11T10:16:03.209702 (delta: 0.043903)         elapsed: 10.647686 ******* 
included: /utils/common/status_app_pod.yml for 127.0.0.1

TASK [Checking whether application pods are in running state] ******************
task path: /utils/common/status_app_pod.yml:2
2019-12-11T10:16:03.269886 (delta: 0.060126)         elapsed: 10.70787 ******** 
changed: [127.0.0.1] => {"attempts": 1, "changed": true, "cmd": "kubectl get pods -n default -l run=busybox -o custom-columns=:.status.phase --no-headers", "delta": "0:00:01.101852", "end": "2019-12-11 10:16:04.548926", "rc": 0, "start": "2019-12-11 10:16:03.447074", "stderr": "", "stderr_lines": [], "stdout": "Running", "stdout_lines": ["Running"]}

TASK [Get the container status of application.] ********************************
task path: /utils/common/status_app_pod.yml:9
2019-12-11T10:16:04.659460 (delta: 1.389509)         elapsed: 12.097444 ******* 
changed: [127.0.0.1] => {"attempts": 1, "changed": true, "cmd": "kubectl get pod -n default -l run=busybox --no-headers -o jsonpath='{.items[*].status.containerStatuses[*].ready}' | tr ' ' '\\n' | uniq", "delta": "0:00:00.925453", "end": "2019-12-11 10:16:05.949373", "rc": 0, "start": "2019-12-11 10:16:05.023920", "stderr": "", "stderr_lines": [], "stdout": "true", "stdout_lines": ["true"]}

TASK [include_tasks] ***********************************************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:47
2019-12-11T10:16:06.000847 (delta: 1.341325)         elapsed: 13.438831 ******* 
included: /chaoslib/pumba/network_chaos/network_chaos.yml for 127.0.0.1

TASK [Select the app pod] ******************************************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:6
2019-12-11T10:16:06.083542 (delta: 0.08263)         elapsed: 13.521526 ******** 
changed: [127.0.0.1] => {"changed": true, "cmd": "kubectl get pod -l run=busybox -n default -o=custom-columns=:metadata.name --no-headers | shuf | head -1", "delta": "0:00:00.728143", "end": "2019-12-11 10:16:06.985890", "rc": 0, "start": "2019-12-11 10:16:06.257747", "stderr": "", "stderr_lines": [], "stdout": "busybox-665f7dcd4-t2gxj", "stdout_lines": ["busybox-665f7dcd4-t2gxj"]}

TASK [Record app pod name] *****************************************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:15
2019-12-11T10:16:07.030123 (delta: 0.946522)         elapsed: 14.468107 ******* 
ok: [127.0.0.1] => {"ansible_facts": {"app_pod": "busybox-665f7dcd4-t2gxj"}, "changed": false}

TASK [Identify the application node] *******************************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:24
2019-12-11T10:16:07.096957 (delta: 0.066772)         elapsed: 14.534941 ******* 
changed: [127.0.0.1] => {"changed": true, "cmd": "kubectl get pod busybox-665f7dcd4-t2gxj -n default --no-headers -o custom-columns=:spec.nodeName", "delta": "0:00:01.174972", "end": "2019-12-11 10:16:08.437881", "rc": 0, "start": "2019-12-11 10:16:07.262909", "stderr": "", "stderr_lines": [], "stdout": "gke-playground-default-pool-37e10f0d-3vqb", "stdout_lines": ["gke-playground-default-pool-37e10f0d-3vqb"]}

TASK [set_fact] ****************************************************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:32
2019-12-11T10:16:08.534745 (delta: 1.437727)         elapsed: 15.972729 ******* 
ok: [127.0.0.1] => {"ansible_facts": {"app_node": "gke-playground-default-pool-37e10f0d-3vqb"}, "changed": false}

TASK [Generate a run id if not passed from the engine/experiment] **************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:36
2019-12-11T10:16:08.680465 (delta: 0.145653)         elapsed: 16.118449 ******* 
changed: [127.0.0.1] => {"changed": true, "cmd": "echo $(mktemp) | cut -d '.' -f 2", "delta": "0:00:00.661385", "end": "2019-12-11 10:16:09.714749", "rc": 0, "start": "2019-12-11 10:16:09.053364", "stderr": "", "stderr_lines": [], "stdout": "FEE6kcDpx5", "stdout_lines": ["FEE6kcDpx5"]}

TASK [set_fact] ****************************************************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:40
2019-12-11T10:16:09.761768 (delta: 1.081239)         elapsed: 17.199752 ******* 
ok: [127.0.0.1] => {"ansible_facts": {"run_id": "fee6kcdpx5"}, "changed": false}
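The run-id derivation above can be reproduced in isolation. Note the lowercasing step (the raw mktemp suffix FEE6kcDpx5 becomes the recorded fact fee6kcdpx5), which is needed because Kubernetes resource names must be lowercase:

```shell
# Derive a random suffix from mktemp's template and lowercase it so it
# is legal in a Kubernetes resource name (e.g. job "pumba-netem-<id>").
run_id=$(echo "$(mktemp)" | cut -d '.' -f 2 | tr '[:upper:]' '[:lower:]')
echo "pumba-netem-${run_id}"
```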

TASK [Patch the chaoslib image] ************************************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:44
2019-12-11T10:16:09.836081 (delta: 0.07425)         elapsed: 17.274065 ******** 
changed: [127.0.0.1] => {"changed": true, "checksum": "5e11288110124953d3adb7c17c0022ce528e6d88", "dest": "/chaoslib/pumba/network_chaos/pumba_kube.yml", "gid": 0, "group": "root", "md5sum": "f604caefc2d00b807445f0862d2c18da", "mode": "0644", "owner": "root", "size": 1038, "src": "/root/.ansible/tmp/ansible-tmp-1576059369.89-114260472057278/source", "state": "file", "uid": 0}

TASK [Setup pumba chaos infrastructure] ****************************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:51
2019-12-11T10:16:10.189359 (delta: 0.353195)         elapsed: 17.627343 ******* 
changed: [127.0.0.1] => {"changed": true, "cmd": "kubectl create -f /chaoslib/pumba/network_chaos/pumba_kube.yml -n default", "delta": "0:00:01.017742", "end": "2019-12-11 10:16:11.369050", "rc": 0, "start": "2019-12-11 10:16:10.351308", "stderr": "", "stderr_lines": [], "stdout": "job.batch/pumba-netem-fee6kcdpx5 created", "stdout_lines": ["job.batch/pumba-netem-fee6kcdpx5 created"]}

TASK [Wait until the pumba netem job is completed] *****************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:58
2019-12-11T10:16:11.483422 (delta: 1.294014)         elapsed: 18.921406 ******* 
FAILED - RETRYING: Wait until the pumba netem job is completed (120 retries left).
FAILED - RETRYING: Wait until the pumba netem job is completed (119 retries left).
[... identical retry messages elided: 118 down to 93 retries left ...]
FAILED - RETRYING: Wait until the pumba netem job is completed (92 retries left).
changed: [127.0.0.1] => {"attempts": 30, "changed": true, "cmd": "kubectl get pods -l job-name=pumba-netem-fee6kcdpx5 --no-headers -n default --no-headers -o custom-columns=:status.phase", "delta": "0:00:01.466204", "end": "2019-12-11 10:17:16.434474", "rc": 0, "start": "2019-12-11 10:17:14.968270", "stderr": "", "stderr_lines": [], "stdout": "Succeeded", "stdout_lines": ["Succeeded"]}

TASK [Tear down pumba infrastructure] ******************************************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:71
2019-12-11T10:17:16.531000 (delta: 65.04751)         elapsed: 83.968984 ******* 
changed: [127.0.0.1] => {"changed": true, "cmd": "kubectl delete -f /chaoslib/pumba/network_chaos/pumba_kube.yml -n default", "delta": "0:00:00.720078", "end": "2019-12-11 10:17:17.522879", "rc": 0, "start": "2019-12-11 10:17:16.802801", "stderr": "", "stderr_lines": [], "stdout": "job.batch \"pumba-netem-fee6kcdpx5\" deleted", "stdout_lines": ["job.batch \"pumba-netem-fee6kcdpx5\" deleted"]}

TASK [Confirm that the pumba job is deleted successfully] **********************
task path: /chaoslib/pumba/network_chaos/network_chaos.yml:77
2019-12-11T10:17:17.572174 (delta: 1.041099)         elapsed: 85.010158 ******* 
changed: [127.0.0.1] => {"attempts": 1, "changed": true, "cmd": "kubectl get pods -l job-name=pumba-netem-fee6kcdpx5 --no-headers -n default", "delta": "0:00:00.977858", "end": "2019-12-11 10:17:18.714633", "rc": 0, "start": "2019-12-11 10:17:17.736775", "stderr": "No resources found.", "stderr_lines": ["No resources found."], "stdout": "", "stdout_lines": []}

TASK [Verify AUT liveness post fault-injection] ********************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:52
2019-12-11T10:17:18.822931 (delta: 1.250695)         elapsed: 86.260915 ******* 
included: /utils/common/status_app_pod.yml for 127.0.0.1

TASK [Checking whether application pods are in running state] ******************
task path: /utils/common/status_app_pod.yml:2
2019-12-11T10:17:18.948364 (delta: 0.125368)         elapsed: 86.386348 ******* 
changed: [127.0.0.1] => {"attempts": 1, "changed": true, "cmd": "kubectl get pods -n default -l run=busybox -o custom-columns=:.status.phase --no-headers", "delta": "0:00:00.966266", "end": "2019-12-11 10:17:20.290923", "rc": 0, "start": "2019-12-11 10:17:19.324657", "stderr": "", "stderr_lines": [], "stdout": "Running", "stdout_lines": ["Running"]}

TASK [Get the container status of application.] ********************************
task path: /utils/common/status_app_pod.yml:9
2019-12-11T10:17:20.343523 (delta: 1.395094)         elapsed: 87.781507 ******* 
changed: [127.0.0.1] => {"attempts": 1, "changed": true, "cmd": "kubectl get pod -n default -l run=busybox --no-headers -o jsonpath='{.items[*].status.containerStatuses[*].ready}' | tr ' ' '\\n' | uniq", "delta": "0:00:00.735812", "end": "2019-12-11 10:17:21.255466", "rc": 0, "start": "2019-12-11 10:17:20.519654", "stderr": "", "stderr_lines": [], "stdout": "true", "stdout_lines": ["true"]}

TASK [set_fact] ****************************************************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:59
2019-12-11T10:17:21.312015 (delta: 0.968426)         elapsed: 88.749999 ******* 
ok: [127.0.0.1] => {"ansible_facts": {"flag": "pass"}, "changed": false}

TASK [include_tasks] ***********************************************************
task path: /experiments/generic/pod_network_loss/pod_network_loss_ansible_logic.yml:70
2019-12-11T10:17:21.376955 (delta: 0.064884)         elapsed: 88.814939 ******* 
included: /utils/runtime/update_chaos_result_resource.yml for 127.0.0.1

TASK [Generate the chaos result CR to reflect SOT (Start of Test)] *************
task path: /utils/runtime/update_chaos_result_resource.yml:3
2019-12-11T10:17:21.451300 (delta: 0.074275)         elapsed: 88.889284 ******* 
skipping: [127.0.0.1] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [Apply the chaos result CR] ***********************************************
task path: /utils/runtime/update_chaos_result_resource.yml:13
2019-12-11T10:17:21.496978 (delta: 0.045617)         elapsed: 88.934962 ******* 
skipping: [127.0.0.1] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [Update the chaos result CR to reflect EOT (End of Test)] *****************
task path: /utils/runtime/update_chaos_result_resource.yml:23
2019-12-11T10:17:21.540970 (delta: 0.043921)         elapsed: 88.978954 ******* 
changed: [127.0.0.1] => {"changed": true, "checksum": "656a55054278835660e45a986d24ee17a6fbee37", "dest": "./chaos-result.yaml", "gid": 0, "group": "root", "md5sum": "9a80d8ef324ebab7825c5b992fbed785", "mode": "0644", "owner": "root", "size": 309, "src": "/root/.ansible/tmp/ansible-tmp-1576059441.6-234037529677911/source", "state": "file", "uid": 0}

TASK [Apply the chaos result CR] ***********************************************
task path: /utils/runtime/update_chaos_result_resource.yml:33
2019-12-11T10:17:23.036413 (delta: 1.495371)         elapsed: 90.474397 ******* 
changed: [127.0.0.1] => {"changed": true, "cmd": "kubectl apply -f chaos-result.yaml -n default", "delta": "0:00:00.930857", "end": "2019-12-11 10:17:24.337743", "failed_when_result": false, "rc": 0, "start": "2019-12-11 10:17:23.406886", "stderr": "", "stderr_lines": [], "stdout": "chaosresult.litmuschaos.io/engine-pod-network-loss configured", "stdout_lines": ["chaosresult.litmuschaos.io/engine-pod-network-loss configured"]}
META: ran handlers
META: ran handlers

PLAY RECAP *********************************************************************
127.0.0.1                  : ok=29   changed=17   unreachable=0    failed=0   

2019-12-11T10:17:24.377685 (delta: 1.341203)         elapsed: 91.815669 ******* 
=============================================================================== 

@nsathyaseelan (Contributor) left a comment:

/lgtm

@ksatchit ksatchit requested a review from rahulchheda December 11, 2019 14:34
@rahulchheda (Member) left a comment:

/lgtm!

@ksatchit ksatchit merged commit 0c7d75f into litmuschaos:master Dec 12, 2019
@ksatchit ksatchit deleted the tc_image_usage branch December 12, 2019 13:55
ksatchit pushed a commit that referenced this pull request Dec 13, 2019
…rs for network chaos (#991)

* (refactor)use newer pumba image with spec changes

Signed-off-by: ksatchit <[email protected]>