Good news! Deleting a pod from a Kubernetes node using the kubectl commands is a straightforward process. Whether you need to troubleshoot issues with a node, complete an upgrade, or reduce the size of your cluster, you will find that deleting pods is not a difficult task.
However, before deleting a pod, you should complete a series of steps to smooth the process for the application. If you rush this process, it may lead to mistakes and application downtime. Let’s dive in!
How to remove all pods from a node at once
If your node is running only stateless or non-essential pods, you can simply remove all of the pods from the node using the
kubectl drain command. But first, it is suggested that you double-check the name of the node you are draining and confirm that the pods on that node can be safely terminated. According to the Kubernetes documentation, the following command will do the trick:
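As a sketch, with `<nodename>` standing in for your node's actual name, you can list the nodes to confirm the name, then list every pod scheduled on that node:

```shell
# List all nodes to confirm the node's exact name
kubectl get nodes

# List every pod running on that node (replace <nodename>)
kubectl get pods -o wide --field-selector spec.nodeName=<nodename>
```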
Next, run the following command to drain all of the pods from the node:
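For example, again using `<nodename>` as a placeholder:

```shell
# Evict the pods from the node; --ignore-daemonsets is commonly needed,
# since drain refuses to proceed if DaemonSet-managed pods are present
kubectl drain <nodename> --ignore-daemonsets
```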
You should run the get pods command again to confirm that no pods are still running on the node. Pods that tolerate the NoExecute taint, or that are managed by a DaemonSet, will have remained on the node.
To force the remaining pods off the node, you can run the drain command again, this time with the --force flag included. Finally, you can use the
kubectl delete node <nodename> command to remove the node from the cluster.
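These last two steps might look like the following, with `<nodename>` again a placeholder:

```shell
# Force eviction of any remaining pods, including pods that are not
# managed by a controller; use with care, as those pods are not recreated
kubectl drain <nodename> --force --ignore-daemonsets

# Remove the node from the cluster entirely
kubectl delete node <nodename>
```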
How to individually remove pods from nodes
If you want a finer-grained approach, you can remove pods individually.
But first, with this method it is again suggested that you double-check the name of the node and the pods running on it:
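One way to do that check, with `<nodename>` as a placeholder:

```shell
# Show which pods are running on the node before touching anything
kubectl get pods -o wide --field-selector spec.nodeName=<nodename>
```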
Next, you should use the
kubectl cordon <nodename> command to mark the node as unschedulable. The kubectl cordon command stops new pods from being scheduled onto the node during the deletion process and during normal maintenance.
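For example:

```shell
# Mark the node unschedulable; existing pods keep running,
# but no new pods will be placed on this node
kubectl cordon <nodename>
```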
At this point, you can manually delete pods one at a time using the
kubectl delete pod <podname> command.
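For example, with `<podname>` as a placeholder:

```shell
# Delete a single pod by name; add -n <namespace> if the pod
# is not in the default namespace
kubectl delete pod <podname>
```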
However, if the pods in question are controlled by a Deployment or ReplicaSet, and you are concerned about temporarily having one less replica, you may want to increase the replica count by the number of pods set to be deleted. Once the new pod is running, you can delete the old pod in question, and then scale the number of replicas back down.
As just one example, if your deployment needs 8 replicas, and one of the replicas is set to be deleted, you could temporarily scale to 9 replicas, and then back down to 8 replicas after the pods have been deleted.
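Using a hypothetical deployment name `<deploymentname>`, that temporary scale-up could look like:

```shell
# Temporarily add a replica before deleting the old pod
kubectl scale deployment <deploymentname> --replicas=9

# ...delete the old pod and wait for its replacement to become Ready...

# Then return to the original replica count
kubectl scale deployment <deploymentname> --replicas=8
```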
For pods that are controlled by a StatefulSet, you may need to scale up the StatefulSet before deletion so the remaining replicas can handle the increased demand while the pod is recreated.
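StatefulSets can be scaled the same way, here with `<statefulsetname>` and the replica count as placeholders:

```shell
# Scale the StatefulSet up before deleting one of its pods
kubectl scale statefulset <statefulsetname> --replicas=4
```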
How to allow pods back onto nodes
Good news! We have now finished the maintenance on our nodes. Next, you will want to use the
kubectl uncordon <nodename> command to allow scheduling to take place on the node again.
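For example:

```shell
# Allow the scheduler to place new pods on the node again
kubectl uncordon <nodename>
```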
From here, new pods can be scheduled onto the node and will start to appear on it again. If you would prefer a serverless solution that handles scaling for you, consider Airplane.