Kubelet Service Kill Experiment Details
Experiment Metadata
Type | Description | Tested K8s Platform |
---|---|---|
Generic | Kills the kubelet service on the application node to check the resiliency. | GKE, EKS, Packet(Kubeadm), AKS |
Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here.
- Ensure that the `kubelet-service-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here.
- Ensure that the node specified in the experiment ENV variable `APP_NODE` (the node whose kubelet service is to be killed) is cordoned before executing the chaos experiment (before applying the ChaosEngine manifest), so that the Litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps (a combined sketch follows this list):
  - Get the node names against the application pods: `kubectl get pods -o wide`
  - Cordon the node: `kubectl cordon <nodename>`
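A minimal sketch of this pre-chaos cordoning workflow is shown below; the node name `node-01` and the label `app=nginx` are illustrative placeholders matching the sample ChaosEngine later in this page:

```bash
# Find the node hosting the application pods
kubectl get pods -l app=nginx -o wide

# Cordon that node so the experiment runner pods are not scheduled on it
kubectl cordon node-01

# Confirm the node now reports SchedulingDisabled
kubectl get nodes
```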
Entry Criteria
- Application pods should be healthy before chaos injection.
Exit Criteria
- Application pods and the node should be healthy post chaos injection.
Details
- This experiment causes the application to become unreachable on account of the node turning unschedulable (NotReady) due to the kubelet service kill.
- The kubelet service is stopped/killed on the node to make it unschedulable for a certain duration, i.e. `TOTAL_CHAOS_DURATION`. The application node should be healthy after the chaos injection and the services should be accessible again (a verification sketch follows this list).
- Here, the application implies services. The experiment can be reframed as: test application resiliency upon a replica becoming unreachable due to the kubelet service being down.
- After the experiment ends, you may manually uncordon the specified node so that it can be utilised in future: `kubectl uncordon <node-name>`.
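A rough post-chaos verification sketch (the node name `node-01` and label `app=nginx` are placeholders from the sample manifest below):

```bash
# After TOTAL_CHAOS_DURATION has elapsed, the node should return to Ready
kubectl get node node-01

# The application pods should be running and the service reachable again
kubectl get pods -l app=nginx -o wide
```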
Integrations
- Kubelet Service Kill can be effected using the chaos library: `litmus`
- The desired chaos library can be selected by setting `litmus` as the value for the env variable `LIB`
Steps to Execute the Chaos Experiment
This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started.
Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
Sample Rbac Manifest
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubelet-service-kill-sa
  namespace: default
  labels:
    name: kubelet-service-kill-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kubelet-service-kill-sa
  labels:
    name: kubelet-service-kill-sa
rules:
- apiGroups: ["","litmuschaos.io","batch","apps"]
  resources: ["pods","jobs","pods/log","events","chaosengines","chaosexperiments","chaosresults"]
  verbs: ["create","list","get","patch","update","delete"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubelet-service-kill-sa
  labels:
    name: kubelet-service-kill-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-service-kill-sa
subjects:
- kind: ServiceAccount
  name: kubelet-service-kill-sa
  namespace: default
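The manifest can be applied and verified as follows (assuming it has been saved as `rbac.yaml`, a hypothetical filename):

```bash
# Create the service account, cluster role and binding
kubectl apply -f rbac.yaml

# Verify the service account exists in the application namespace
kubectl get serviceaccount kubelet-service-kill-sa -n default
```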
Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer to ChaosEngine Concepts
Supported Experiment Tunables
Variables | Description | Specify In ChaosEngine | Notes |
---|---|---|---|
APP_NODE | Name of the node on which the kubelet service is to be killed | Mandatory | |
TOTAL_CHAOS_DURATION | The time duration for chaos insertion (in seconds) | Optional | Defaults to 90 |
LIB | The chaos lib used to inject the chaos | Optional | Defaults to `litmus` |
RAMP_TIME | Period to wait before & after injection of chaos (in seconds) | Optional | |
INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as a suffix to the chaosresult CR name. | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |
Sample ChaosEngine Manifest
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # It can be true/false
  annotationCheck: 'false'
  # It can be active/stop
  engineState: 'active'
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ''
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  chaosServiceAccount: kubelet-service-kill-sa
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: 'delete'
  experiments:
    - name: kubelet-service-kill
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: '90' # in seconds
            # provide the actual name of node under test
            - name: APP_NODE
              value: 'node-01'
Create the ChaosEngine Resource
Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
kubectl apply -f chaosengine.yml
If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
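To confirm that the experiment has actually started, a rough check such as the following can be used (the `default` namespace and engine name match the sample manifest above):

```bash
# The chaos-runner and experiment pods should appear in the application namespace
kubectl get pods -n default

# The ChaosEngine status reflects the experiment's progress
kubectl describe chaosengine nginx-chaos -n default
```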
Watch Chaos progress
- Set up a watch over the nodes in the Kubernetes cluster to observe the target node turning unschedulable (NotReady):
watch kubectl get nodes
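Alternatively, a sketch for watching just the target node (assuming the placeholder name `node-01`):

```bash
# The node should report NotReady,SchedulingDisabled while the kubelet service is down
kubectl get node node-01 -w
```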
Check Chaos Experiment Result
Check whether the application is resilient after the kubelet service kill, once the experiment (job) is completed. The ChaosResult resource name is derived like this:
`<ChaosEngine-Name>-<ChaosExperiment-Name>`
kubectl describe chaosresult nginx-chaos-kubelet-service-kill -n <application-namespace>
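For example, a quick way to extract the verdict is to grep the ChaosResult output (the status field layout can vary across Litmus versions, hence the grep rather than a fixed jsonpath):

```bash
# Look for the experiment verdict (Awaited/Pass/Fail) in the ChaosResult
kubectl get chaosresult nginx-chaos-kubelet-service-kill -n default -o yaml | grep -i verdict
```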
Post Chaos Steps
At the beginning of the experiment, we cordon the node whose kubelet service is going to be killed, so that the chaos pod is not scheduled on it / subjected to eviction. After the experiment ends, you can manually uncordon the application node so that it can be utilised in future.
kubectl uncordon <node-name>
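A minimal sketch for restoring and verifying the node (the name `node-01` is a placeholder):

```bash
# Make the node schedulable again
kubectl uncordon node-01

# Confirm that SchedulingDisabled is no longer reported for it
kubectl get nodes
```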
Kubelet Service Kill Demo [TODO]
- A sample recording of this experiment execution is provided here.