Kubelet Service Kill Experiment Details
Experiment Metadata
Type | Description | Tested K8s Platform |
---|---|---|
Generic | Kills the kubelet service on the application node to check resiliency. | GKE, EKS, Packet(Kubeadm), AKS |
Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing kubectl get pods in the operator namespace (typically, litmus). If not, install from here.
- Ensure that the kubelet-service-kill experiment resource is available in the cluster by executing kubectl get chaosexperiments in the desired namespace. If not, install from here.
- Ensure that the node specified in the experiment ENV variable TARGET_NODE (the node whose kubelet service is to be killed) is cordoned before execution of the chaos experiment (before applying the chaosengine manifest), so that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps (see the sketch after this list):
  - Get the node names hosting the application pods: kubectl get pods -o wide
  - Cordon the node: kubectl cordon <nodename>
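A minimal sketch of the cordon workflow, assuming the application runs in the default namespace with the label app=nginx (both are placeholders; substitute your application's values):

```bash
# Resolve the node hosting an application replica; the label selector and
# namespace are examples - substitute your application's values.
TARGET_NODE=$(kubectl get pods -l app=nginx -n default \
  -o jsonpath='{.items[0].spec.nodeName}')

# Cordon it so the Litmus experiment runner pods cannot be scheduled there.
kubectl cordon "$TARGET_NODE"

# Confirm the node now shows SchedulingDisabled in its STATUS column.
kubectl get node "$TARGET_NODE"
```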
Entry Criteria
- Application pods should be healthy before chaos injection.
Exit Criteria
- Application pods and the node should be healthy post chaos injection (both criteria can be spot-checked as shown below).
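A minimal sketch for spot-checking these criteria before and after chaos; the app=nginx label and default namespace are placeholders:

```bash
# Application pods should all show Running / Ready.
kubectl get pods -l app=nginx -n default

# The target node should report a Ready STATUS post-chaos.
kubectl get nodes
```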
Details
- This experiment causes the application to become unreachable because the node turns unschedulable (NotReady) when its kubelet service is killed.
- The kubelet service is stopped/killed on the node to make it unschedulable for a set duration, i.e. TOTAL_CHAOS_DURATION. The application node should be healthy after the chaos injection and the services should be accessible again (an optional verification sketch follows this list).
- Here "application" implies the services it exposes; the experiment can be reframed as: test application resiliency when a replica becomes unreachable due to the kubelet service being down.
- After the experiment ends, you may manually uncordon the specified node so that it can be utilised in future: kubectl uncordon <node-name>.
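Since the experiment works by stopping the kubelet systemd service on the target node (hence the systemd requirement on LIB_IMAGE noted in the tunables table below), you can optionally confirm the effect from the node itself if you have SSH or console access; this is a sketch and is not required by the experiment:

```bash
# On the target node: the kubelet unit should report "inactive (dead)"
# during the chaos window and return to "active (running)" once
# TOTAL_CHAOS_DURATION elapses.
systemctl status kubelet
```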
Integrations
- Kubelet Service Kill can be effected using the chaos library: litmus
- The desired chaos library can be selected by setting litmus as the value for the env variable LIB
Steps to Execute the Chaos Experiment
This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started.
Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
Sample RBAC Manifest
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubelet-service-kill-sa
  namespace: default
  labels:
    name: kubelet-service-kill-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-service-kill-sa
  labels:
    name: kubelet-service-kill-sa
    app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
  resources: ["pods","events"]
  verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
  resources: ["pods/exec","pods/log"]
  verbs: ["create","list","get"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["litmuschaos.io"]
  resources: ["chaosengines","chaosexperiments","chaosresults"]
  verbs: ["create","list","get","patch","update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-service-kill-sa
  labels:
    name: kubelet-service-kill-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-service-kill-sa
subjects:
- kind: ServiceAccount
  name: kubelet-service-kill-sa
  namespace: default
Note: In case of restricted systems/setups, create a PodSecurityPolicy (PSP) with the required permissions, and have the chaosServiceAccount subscribe to it to work around the respective limitations. An example of a standard PSP that can be used for Litmus chaos experiments can be found here.
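After saving the manifest (e.g., as rbac.yaml, a placeholder filename), apply it and optionally spot-check a permission granted by the ClusterRole; a minimal sketch:

```bash
# Create the service account, cluster role and cluster role binding.
kubectl apply -f rbac.yaml

# Verify the SA can list nodes, as the experiment requires, by impersonating it.
kubectl auth can-i list nodes \
  --as=system:serviceaccount:default:kubelet-service-kill-sa
```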
Prepare ChaosEngine
- Provide the application info in spec.appinfo. It is an optional parameter for infra-level experiments (a sketch of the fragment follows this list).
- Provide the auxiliary applications info (ns & labels) in spec.auxiliaryAppInfo
- Override the experiment tunables if desired in experiments.spec.components.env
- To understand the values to provide in a ChaosEngine specification, refer to ChaosEngine Concepts
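For reference, spec.appinfo takes the application namespace, label, and kind; a sketch of the fragment with placeholder values:

```yaml
spec:
  appinfo:
    appns: 'default'        # application namespace (placeholder)
    applabel: 'app=nginx'   # application label selector (placeholder)
    appkind: 'deployment'   # application kind
```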
Supported Experiment Tunables
Variables | Description | Specify In ChaosEngine | Notes |
---|---|---|---|
TARGET_NODE | Name of the target node whose kubelet service is to be killed | Mandatory | |
NODE_LABEL | Node label used to filter the target node(s) if the TARGET_NODE ENV is not set | Optional | |
TOTAL_CHAOS_DURATION | The duration of chaos injection (seconds) | Optional | Defaults to 90 |
LIB | The chaos lib used to inject the chaos | Optional | Defaults to litmus |
LIB_IMAGE | The lib image used to inject the kubelet kill chaos; the image should have systemd installed in it | Optional | Defaults to ubuntu:16.04 |
RAMP_TIME | Period to wait before & after injection of chaos (seconds) | Optional | |
INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as a suffix to the chaosresult CR name. | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |
Sample ChaosEngine Manifest
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # It can be active/stop
  engineState: 'active'
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ''
  chaosServiceAccount: kubelet-service-kill-sa
  experiments:
    - name: kubelet-service-kill
      spec:
        components:
          # nodeSelector:
          #   # provide the node labels
          #   kubernetes.io/hostname: 'node02'
          env:
            - name: TOTAL_CHAOS_DURATION
              value: '60' # in seconds
            # provide the target node name
            - name: TARGET_NODE
              value: ''
Create the ChaosEngine Resource
Apply the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
kubectl apply -f chaosengine.yml
If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
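To confirm that the experiment was picked up, you can inspect the engine and the pods it spawns; a sketch using the names from the sample manifest above:

```bash
# The engine status should typically move from 'initialized' to 'completed'.
kubectl describe chaosengine nginx-chaos -n default

# The chaos-runner and experiment job pods should appear in the namespace.
kubectl get pods -n default -w
```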
Watch Chaos progress
- Set up a watch over the nodes in the Kubernetes cluster to see the target node turn unschedulable (NotReady); a narrower per-node query is sketched below:
watch kubectl get nodes
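To focus on just the target node's Ready condition, the following query can be run periodically; a sketch, with <target-node> as a placeholder:

```bash
# Prints the node's Ready condition status: expect "False" or "Unknown"
# while the kubelet service is down, and "True" again after
# TOTAL_CHAOS_DURATION elapses.
kubectl get node <target-node> \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```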
Check Chaos Experiment Result
Check whether the application is resilient after the kubelet service kill, once the experiment (job) is completed. The ChaosResult resource name is derived as <ChaosEngine-Name>-<ChaosExperiment-Name>:
kubectl describe chaosresult nginx-chaos-kubelet-service-kill -n <application-namespace>
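To read just the verdict rather than the full description, a quick sketch (the case-insensitive grep keeps it robust across Litmus versions, whose status field casing differs):

```bash
# Expect a verdict of "Pass" if the application remained resilient,
# "Fail" otherwise.
kubectl get chaosresult nginx-chaos-kubelet-service-kill \
  -n <application-namespace> -o yaml | grep -i verdict
```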
Post Chaos Steps
At the beginning of the experiment, the target node (whose kubelet service is going to be killed) is cordoned so that the chaos pods are not scheduled on it / subjected to eviction. After the experiment ends, you can manually uncordon the application node so that it can be utilised in future:
kubectl uncordon <node-name>
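A quick check that the node is schedulable again after uncordoning; a sketch with <node-name> as a placeholder:

```bash
# STATUS should read "Ready" with no ",SchedulingDisabled" suffix.
kubectl get node <node-name>
```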
Kubelet Service Kill Demo [TODO]
- A sample recording of this experiment execution is provided here.