| Type | Description | Tested K8s Platform |
| ---- | ----------- | ------------------- |
| OpenEBS | Induce network loss into the cStor pool pod | GKE, EKS |
Note: In this example, we are using nginx as a stateful application that stores static pages on a Kubernetes volume.
- Ensure that the Kubernetes Cluster uses the Docker runtime
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
- Ensure that the `openebs-pool-network-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the specified namespace. If not, install from here
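For example, assuming the operator runs in the `litmus` namespace and the experiment CR is installed in the `default` application namespace, the verification commands would look like:

```shell
# Check that the chaos operator pod is running
kubectl get pods -n litmus

# Check that the experiment CR is available in the application namespace
kubectl get chaosexperiments openebs-pool-network-loss -n default
```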
DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-pool-network-loss
data:
  parameters.yml: |
    dbuser: root
    dbpassword: k8sDem0
    dbname: test
```
- For Busybox data persistence check, create a configmap as shown below in the application namespace (replace with actual values):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-pool-network-loss
data:
  parameters.yml: |
    blocksize: 4k
    blockcount: 1024
    testfile: exampleFile
```
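Apply whichever configmap matches your application in the application namespace (the filename below is illustrative):

```shell
kubectl apply -f openebs-pool-network-loss-cm.yaml -n <application-namespace>
```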
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace while performing application health checks in its respective namespace.
- Application pods are healthy before chaos injection
- Application writes are successful on OpenEBS PVs
- Application pods are healthy post chaos injection
- OpenEBS Storage pool pods are healthy
If the experiment tunable DATA_PERSISTENCE is set to 'mysql' or 'busybox':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
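Conceptually, the Busybox check amounts to writing a known payload to the volume before chaos and verifying its checksum afterwards. A rough sketch of the idea (paths and mechanics are illustrative, not the experiment's actual implementation):

```shell
# Pre-chaos: write blockcount blocks of blocksize to the test file on the PV mount
dd if=/dev/urandom of=/mnt/exampleFile bs=4k count=1024
md5sum /mnt/exampleFile > /tmp/pre-chaos.md5

# Post-chaos: verify the checksum is unchanged
md5sum -c /tmp/pre-chaos.md5
```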
- This scenario validates the behaviour of stateful applications and the OpenEBS data plane upon high latencies/network loss in accessing the storage replica pods
- Injects network loss on the specified container in the pool pod by starting a traffic control (`tc`) process with `netem` rules to add packet loss
- Network loss is injected via the pumba library using the `pumba netem loss` command, by passing the relevant network interface, network loss percentage, chaos duration and a regex filter for the container name (an illustrative invocation is sketched after this list)
- Can test the stateful application's resilience to lossy/slow iSCSI connections
- Network loss is achieved using the `pumba` chaos library in case of the Docker runtime. Support for other runtimes via direct invocation of `tc` will be added soon.
- The desired lib image can be configured in the env variable LIB_IMAGE
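A hedged sketch of the kind of command involved (flag names per pumba 0.6.x; the actual invocation is constructed by the experiment, and the container regex below is illustrative):

```shell
# Illustrative only: add 100% egress packet loss on eth0 of containers
# whose name matches the regex, for the chaos duration
pumba netem --tc-image gaiadocker/iproute2 \
  --interface eth0 \
  --duration 120s \
  loss --percent 100 \
  "re2:cstor-pool"

# The underlying effect is equivalent to a tc rule such as:
# tc qdisc add dev eth0 root netem loss 100%
```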
Steps to Execute the Chaos Experiment
This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started
Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary cluster role permissions to execute the experiment.
Sample RBAC Manifest
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pool-network-loss-sa
  namespace: default
  labels:
    name: pool-network-loss-sa
---
# Source: openebs/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pool-network-loss-sa
  labels:
    name: pool-network-loss-sa
rules:
- apiGroups: ["","apps","litmuschaos.io","batch","extensions","storage.k8s.io","openebs.io"]
  resources: ["pods","pods/exec","pods/log","events","jobs","configmaps","services","persistentvolumeclaims","storageclasses","persistentvolumes","chaosengines","chaosexperiments","chaosresults","cstorpools","cstorvolumereplicas","replicasets"]
  verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pool-network-loss-sa
  labels:
    name: pool-network-loss-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pool-network-loss-sa
subjects:
- kind: ServiceAccount
  name: pool-network-loss-sa
  namespace: default
```
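Apply the manifest to create the service account and its cluster-scope permissions (the filename `rbac.yaml` is illustrative):

```shell
kubectl apply -f rbac.yaml
```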
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- Provide the configMaps and secrets in `experiments.spec.components.configMaps`/`experiments.spec.components.secrets` (see the sketch after this list). For more info, refer Sample ChaosEngine
- To understand the values to provide in a ChaosEngine specification, refer ChaosEngine Concepts
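For instance, a minimal sketch of wiring the MySQL data-persistence configmap into the experiment spec (assumes the configmap created earlier; the `mountPath` value is illustrative):

```yaml
experiments:
  - name: openebs-pool-network-loss
    spec:
      components:
        env:
          - name: DATA_PERSISTENCE
            value: 'mysql'
        configMaps:
          - name: openebs-pool-network-loss
            mountPath: '/mnt'
```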
Supported Experiment Tunables
| Variables | Description | Specify In ChaosEngine | Notes |
| --------- | ----------- | ---------------------- | ----- |
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC may use an OpenEBS cStor storage class |
| TC_IMAGE | Image used for traffic control in Linux | Optional | Default value is `gaiadocker/iproute2` |
| LIB_IMAGE | Chaos library image used to inject the network loss | Optional | Defaults to `gaiaadm/pumba:0.6.5`. Supported: `docker : gaiaadm/pumba:0.6.5` |
| TOTAL_CHAOS_DURATION | Total duration for which network loss is injected | Optional | Defaults to 120 seconds |
| NETWORK_PACKET_LOSS_PERCENTAGE | Percentage of network packet loss to inject | Optional | Defaults to 100 (percent) |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Supports only `mysql` and `busybox`. Ensure the configmap with app details is created |
| INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as a suffix in the chaosresult CR name | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |
Sample ChaosEngine Manifest
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: pool-chaos
  namespace: default
spec:
  auxiliaryAppInfo: ''
  # It can be active/stop
  engineState: 'active'
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  chaosServiceAccount: pool-network-loss-sa
  experiments:
    - name: openebs-pool-network-loss
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: '120' # in seconds
            - name: APP_PVC
              value: 'demo-nginx-claim'
            - name: NETWORK_PACKET_LOSS_PERCENTAGE
              value: '100'
```
Create the ChaosEngine Resource
Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
```shell
kubectl apply -f chaosengine.yml
```
If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
Watch Chaos progress
View network loss in action by setting up a ping to the storage controller in the OpenEBS namespace
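A hedged sketch (pod names and labels vary by setup):

```shell
# Locate the storage pods and their IPs in the OpenEBS namespace
kubectl get pods -n openebs -o wide

# Ping the relevant pod IP from a pod/node with network access to observe the loss
ping <pod-ip>
```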
Watch the behaviour of the application pod and the OpenEBS data replica/pool pods by setting up a watch on the respective namespaces
```shell
watch -n 1 kubectl get pods -n <application-namespace>
```
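And similarly for the storage pods, assuming OpenEBS is installed in the `openebs` namespace:

```shell
watch -n 1 kubectl get pods -n openebs
```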
Check Chaos Experiment Result
Check whether the application is resilient to the pool network loss, once the experiment (job) is completed. The ChaosResult resource name is of the format `<ChaosEngine-Name>-<ChaosExperiment-Name>`:
```shell
kubectl describe chaosresult pool-chaos-openebs-pool-network-loss -n <application-namespace>
```
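The `Verdict` field in the describe output indicates whether the application passed the experiment. A sketch using jsonpath (the field path follows the v1alpha1 ChaosResult schema and may differ across Litmus versions):

```shell
kubectl get chaosresult pool-chaos-openebs-pool-network-loss -n <application-namespace> \
  -o jsonpath='{.status.experimentstatus.verdict}'
```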
OpenEBS Pool Network Loss Demo [TODO]
- A sample recording of this experiment execution is provided here.