Node IO Stress Experiment Details
Experiment Metadata
Type | Description | Tested K8s Platform |
---|---|---|
Generic | Give IO Disk Stress on the Kubernetes Node | GKE, EKS, Minikube, AKS |
Prerequisites
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
- Ensure that the `node-io-stress` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here. Both checks are sketched below.
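A minimal verification sketch, assuming the operator runs in the litmus namespace and the experiment CR was installed in the default namespace (adjust both to your setup):
kubectl get pods -n litmus
kubectl get chaosexperiments -n default | grep node-io-stress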
Entry Criteria
- Application pods are healthy on the respective Nodes before chaos injection
Exit Criteria
- Application pods may or may not be healthy post chaos injection
Details
- This experiment causes IO stress on the Kubernetes node. The experiment aims to verify the resiliency of applications that share this disk resource for ephemeral or persistent storage purposes.
- The amount of IO stress can be specified either as a percentage of the total free space on the file system or as an absolute size in Gigabytes (GB). If both are provided, the experiment runs with the specified utilization percentage; if neither is provided, it runs with a default value of 10%. See the sketch after this list.
- Tests application resiliency upon replica evictions caused due to IO stress on the available disk space.
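A minimal sketch of the two mutually exclusive tunables as ChaosEngine env entries; the values are illustrative assumptions, not recommendations:
## fill 15% of the free space on the node's file system
- name: FILESYSTEM_UTILIZATION_PERCENTAGE
  value: '15'
## absolute size in GB; ignored here because the percentage above takes precedence
- name: FILESYSTEM_UTILIZATION_BYTES
  value: '5'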
Integrations
- Node IO Stress can be injected using the chaos library: `litmus`
- The library can be provided under the `LIB` variable, as sketched below
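A minimal env entry pinning the library explicitly; since litmus is also the default, this entry is optional:
- name: LIB
  value: 'litmus'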
Steps to Execute the Chaos Experiment
This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started
Follow the steps in the sections below to create the `chaosServiceAccount`, prepare the ChaosEngine & execute the experiment.
Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a `chaosServiceAccount` in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
Sample RBAC Manifest
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-io-stress-sa
  namespace: default
  labels:
    name: node-io-stress-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-io-stress-sa
  labels:
    name: node-io-stress-sa
    app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
  resources: ["pods","events"]
  verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
  resources: ["pods/exec","pods/log"]
  verbs: ["create","list","get"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["litmuschaos.io"]
  resources: ["chaosengines","chaosexperiments","chaosresults"]
  verbs: ["create","list","get","patch","update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-io-stress-sa
  labels:
    name: node-io-stress-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-io-stress-sa
subjects:
- kind: ServiceAccount
  name: node-io-stress-sa
  namespace: default
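To apply it, assuming the manifest above is saved locally as rbac.yaml (a hypothetical filename):
kubectl apply -f rbac.yaml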
Note: In case of restricted systems/setups, create a PodSecurityPolicy (PSP) with the required permissions; the chaosServiceAccount can subscribe to it to work around the respective limitations. An example of a standard PSP that can be used for Litmus chaos experiments can be found here.
Prepare ChaosEngine
- Provide the application info in `spec.appinfo`. It is an optional parameter for infra-level experiments such as this one; a sketch of these fields follows this list.
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to be provided in a ChaosEngine specification, refer to ChaosEngine Concepts
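A minimal sketch of the appinfo and auxiliaryAppInfo fields; the namespace and label values here are illustrative assumptions, not requirements:
spec:
  # optional for node-level (infra) experiments
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  # namespace:label pairs of auxiliary applications, if any
  auxiliaryAppInfo: 'ns1:name=percona,ns2:run=nginx'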
Supported Experiment Tunables
Variables | Description | Specify in ChaosEngine | Notes |
---|---|---|---|
TARGET_NODES | Comma-separated list of nodes subjected to node IO stress | Mandatory | |
NODE_LABEL | Node label used to filter the target nodes if the TARGET_NODES ENV is not set | Optional | |
TOTAL_CHAOS_DURATION | The time duration for chaos injection (seconds) | Optional | Defaults to 120 |
FILESYSTEM_UTILIZATION_PERCENTAGE | Specify the size as a percentage of free space on the file system | Optional | Defaults to 10% |
FILESYSTEM_UTILIZATION_BYTES | Specify the size in Gigabytes (GB). FILESYSTEM_UTILIZATION_PERCENTAGE & FILESYSTEM_UTILIZATION_BYTES are mutually exclusive; if both are provided, FILESYSTEM_UTILIZATION_PERCENTAGE takes precedence. | Optional | |
CPU | Number of CPU cores to be used | Optional | Defaults to 1 |
NUMBER_OF_WORKERS | Number of IO workers involved in the IO disk stress | Optional | Defaults to 4 |
VM_WORKERS | Number of VM workers involved in the IO disk stress | Optional | Defaults to 1 |
LIB | The chaos lib used to inject the chaos | Optional | Defaults to litmus |
LIB_IMAGE | Image used to run the stress command | Optional | Defaults to litmuschaos/go-runner:latest |
RAMP_TIME | Period to wait before and after injection of chaos (seconds) | Optional | |
NODES_AFFECTED_PERC | The percentage of total nodes to target | Optional | Defaults to 0 (corresponds to 1 node); provide numeric values only |
SEQUENCE | Defines the sequence of chaos execution for multiple target nodes | Optional | Default value: parallel. Supported: serial, parallel (example below) |
INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos, e.g. 04-05-2020-9-00. This string is appended as a suffix to the chaosresult CR name. | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |
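For instance, to stress half of the cluster's nodes one at a time, the relevant env entries would look like this sketch (values are illustrative):
- name: NODES_AFFECTED_PERC
  value: '50'
- name: SEQUENCE
  value: 'serial'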
Sample ChaosEngine Manifest
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # It can be active/stop
  engineState: 'active'
  # ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ''
  chaosServiceAccount: node-io-stress-sa
  experiments:
    - name: node-io-stress
      spec:
        components:
          env:
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: '120'
            ## specify the size as percentage of free space on the file system
            - name: FILESYSTEM_UTILIZATION_PERCENTAGE
              value: '10'
            ## number of CPU cores to be used
            - name: CPU
              value: '1'
            ## total number of workers; default value is 4
            - name: NUMBER_OF_WORKERS
              value: '4'
            ## percentage of total nodes to target
            - name: NODES_AFFECTED_PERC
              value: ''
            # provide the comma separated target node names
            - name: TARGET_NODES
              value: ''
Create the ChaosEngine Resource
Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
kubectl apply -f chaosengine.yml
If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
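If the experiment does not start, inspecting the chaos-runner and experiment pods usually reveals the cause; the pod name below is a hypothetical placeholder:
kubectl get pods -n <namespace>
kubectl logs -f <node-io-stress-experiment-pod> -n <namespace>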
Watch Chaos progress
View the status of the application pods as the target nodes are subjected to IO disk stress.
watch -n 1 kubectl get pods -n <application-namespace>
Monitor the capacity filled up on the host filesystem
watch du -h
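Alternatively, if you have shell access to the target node, per-filesystem free space can be easier to read; this is a generic sketch, not part of the experiment itself:
watch df -h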
Abort/Restart the Chaos Experiment
To stop the node-io-stress experiment immediately, either delete the ChaosEngine resource or execute the following command:
kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"stop"}}'
To restart the experiment, either re-apply the ChaosEngine YAML or execute the following command:
kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"active"}}'
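To confirm the current state after patching, the engineState field can be read back (a small convenience sketch):
kubectl get chaosengine <chaosengine-name> -n <namespace> -o jsonpath='{.spec.engineState}'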
Check Chaos Experiment Result
Check whether the application is resilient to the IO stress, once the experiment (job) is completed. The ChaosResult resource name is derived as `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
kubectl describe chaosresult nginx-chaos-node-io-stress -n <application-namespace>
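To filter just the verdict out of the describe output (the exact field labels may vary slightly across Litmus versions):
kubectl describe chaosresult nginx-chaos-node-io-stress -n <application-namespace> | grep -i verdict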
Node IO Stress Experiment Demo
- The demo video will be added soon.