| Type | Description | Tested K8s Platform |
| ---- | ----------- | ------------------- |
| Generic | Drain the node where the application pod is scheduled. | GKE, AWS, Packet(Kubeadm), Konvoy(AWS), EKS, AKS |
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
- Ensure that the `node-drain` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here
- Ensure that the node specified in the experiment ENV variable `TARGET_NODE` (the node which will be drained) is cordoned before execution of the chaos experiment (before applying the chaosengine manifest), so that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps:
- Get node names against the application pods: `kubectl get pods -o wide`
- Cordon the node (a verification command follows below): `kubectl cordon <nodename>`
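To confirm the cordon took effect before applying the chaosengine manifest (`<node-name>` is a placeholder):

```
# the cordoned node should now report a SchedulingDisabled status
kubectl get nodes <node-name>
```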
- Application pods are healthy on the respective nodes before chaos injection
- Target nodes are in Ready state post chaos injection
- This experiment drains the node on which the application pod is running and verifies whether the pod is rescheduled on another available node (a manual-equivalent sketch follows this list).
- At the end of the experiment, it uncordons the specified node so that it can be utilised in future.
- Node drain can be effected using the chaos library: `litmus`
- The desired chaos library can be selected by setting `litmus` as the value for the env variable `LIB`
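For orientation, the drain/uncordon cycle the experiment automates is roughly equivalent to these manual commands (a sketch only; the exact eviction flags used internally may differ, and `<node-name>` is a placeholder):

```
# evict pods from the target node; DaemonSet-managed pods are skipped
kubectl drain <node-name> --ignore-daemonsets

# once the chaos duration elapses, make the node schedulable again
kubectl uncordon <node-name>
```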
Steps to Execute the Chaos Experiment
This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started
Follow the steps in the sections below to prepare the ChaosEngine & execute the experiment.
Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
Sample RBAC Manifest
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-drain-sa
  namespace: default
  labels:
    name: node-drain-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-drain-sa
  labels:
    name: node-drain-sa
    app.kubernetes.io/part-of: litmus
rules:
  - apiGroups: [""]
    resources: ["pods","events"]
    verbs: ["create","list","get","patch","update","delete","deletecollection"]
  - apiGroups: [""]
    resources: ["pods/exec","pods/log","pods/eviction"]
    verbs: ["list","get","create"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create","list","get","delete","deletecollection"]
  - apiGroups: ["apps"]
    resources: ["daemonsets"]
    verbs: ["list","get","delete"]
  - apiGroups: ["litmuschaos.io"]
    resources: ["chaosengines","chaosexperiments","chaosresults"]
    verbs: ["create","list","get","patch","update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["patch","get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-drain-sa
  labels:
    name: node-drain-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-drain-sa
subjects:
  - kind: ServiceAccount
    name: node-drain-sa
    namespace: default
```
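Assuming the manifest above is saved as `rbac.yaml` (the filename is arbitrary), it can be applied and verified with:

```
kubectl apply -f rbac.yaml

# confirm the service account, cluster role and binding exist
kubectl get sa node-drain-sa -n default
kubectl get clusterrole,clusterrolebinding node-drain-sa
```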
Note: In case of restricted systems/setups, create a PodSecurityPolicy (psp) with the required permissions, to which the chaosServiceAccount can subscribe in order to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found here.
- Provide the application info in `spec.appinfo`. It is an optional parameter for infra-level experiments (a minimal sketch follows this list).
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`.
- Override the experiment tunables if desired in `experiments.spec.components.env`.
- To understand the values to provide in a ChaosEngine specification, refer to ChaosEngine Concepts.
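Where an application under test should be specified, `spec.appinfo` takes its namespace, label selector and kind; a minimal sketch (the values are illustrative, and the block is optional for this infra-level experiment):

```yaml
spec:
  appinfo:
    appns: 'default'       # namespace of the application
    applabel: 'app=nginx'  # label selector for the application pods
    appkind: 'deployment'  # kind of the application resource
```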
Supported Experiment Tunables
| Variables | Description | Specify In ChaosEngine | Notes |
| --------- | ----------- | ---------------------- | ----- |
| TARGET_NODE | Name of the node to drain | Mandatory | |
| NODE_LABEL | Node label used to filter the target nodes if the TARGET_NODE ENV is not set | Optional | |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (seconds) | Optional | Defaults to 60s |
| LIB | The chaos lib used to inject the chaos | Optional | Defaults to `litmus` |
| RAMP_TIME | Period to wait before and after injection of chaos (in seconds) | Optional | |
| INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as a suffix to the chaosresult CR name. | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |
Sample ChaosEngine Manifest
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # It can be active/stop
  engineState: 'active'
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ''
  chaosServiceAccount: node-drain-sa
  experiments:
    - name: node-drain
      spec:
        components:
          # nodeSelector:
          #   # provide the node labels
          #   kubernetes.io/hostname: 'node02'
          env:
            - name: TOTAL_CHAOS_DURATION
              value: '60'
            # enter the target node name
            - name: TARGET_NODE
              value: ''
```
Create the ChaosEngine Resource
Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
kubectl apply -f chaosengine.yml
If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
Watch Chaos progress
Set up a watch on the applications originally scheduled on the affected node and verify whether they are rescheduled on the other nodes in the Kubernetes Cluster.
watch kubectl get pods,nodes --all-namespaces
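To narrow the view to pods that were scheduled on the drained node (`<node-name>` is a placeholder):

```
kubectl get pods --all-namespaces -o wide | grep <node-name>
```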
Check Chaos Experiment Result
Check whether the application is resilient to the node drain, once the experiment (job) is completed. The ChaosResult resource name is derived as `<chaosengine-name>-<chaosexperiment-name>` (here, `nginx-chaos-node-drain`):
kubectl describe chaosresult nginx-chaos-node-drain -n <application-namespace>
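The Verdict field in the describe output indicates whether the application survived the drain; a quick filter (field casing may vary across Litmus versions):

```
kubectl describe chaosresult nginx-chaos-node-drain -n <application-namespace> | grep -i verdict
```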
Post Chaos Steps
At the beginning of the experiment, we cordon the node so that the chaos pod is not scheduled on the node that is going to be drained. The experiment itself uncordons the node in its remedy/cleanup step. In case the experiment fails, you can manually uncordon the application node using the following command:
kubectl uncordon <node-name>
Node Drain Experiment Demo
- A sample recording of this experiment execution is provided here.