If you’ve ever tried to delete a Kubernetes namespace, only to find it stuck in the Terminating state for days (or even months), you’re not alone. This happens when finalizers (special hooks that ensure proper cleanup) fail to complete, leaving the namespace in a limbo state.
In this guide, I’ll show you how to force delete these stubborn namespaces safely using kubectl and jq.
A namespace enters the Terminating state when:
- Finalizers on the namespace (or on resources inside it) never complete their cleanup.
- The controller or operator responsible for those finalizers has been uninstalled or is no longer running.
- A resource type in the namespace can no longer be listed, for example because its aggregated API service or CRD is gone.
In my case, several namespaces (like cattle-system, harbor, and fleet-default) were stuck for over 60 days!
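Before forcing anything, it helps to see what is actually holding a namespace open. Here is a quick check (a sketch: stuck-ns is a placeholder for your namespace name) that prints the finalizers plus the status conditions the API server reports while deletion is blocked:

kubectl get namespace stuck-ns -o json | \
  jq '{specFinalizers: .spec.finalizers, metadataFinalizers: .metadata.finalizers, conditions: .status.conditions}'

Conditions such as NamespaceContentRemaining or NamespaceFinalizersRemaining usually name the exact resources that refuse to go away.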
The fix is to clear the namespace’s spec and strip its finalizers, then submit the result to the namespace’s finalize endpoint. You can do this by hand: dump the namespace definition to a file such as tmp.json, empty the "spec": {} section (which holds the built-in kubernetes finalizer), remove the finalizers entry under metadata, and send the edited JSON back. Or do it all in one pipeline (using jq):

kubectl get namespace <namespace-name> -o json | \
  jq '.spec = {}' | \
  jq 'del(.metadata.finalizers)' | \
  kubectl replace --raw "/api/v1/namespaces/<namespace-name>/finalize" -f -
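If you prefer the file-based route mentioned above, the same edit works through a temporary file. This is just a sketch of that flow, with tmp.json as a scratch file name:

kubectl get namespace <namespace-name> -o json > tmp.json
# edit tmp.json: set "spec": {} and delete the finalizers list under metadata
kubectl replace --raw "/api/v1/namespaces/<namespace-name>/finalize" -f ./tmp.json

Both variants hit the same finalize subresource; the jq pipeline just saves you the manual editing.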
If (like me) you have many stuck namespaces, run this loop:
for ns in cattle-fleet-system cattle-global-data cattle-global-nt cattle-impersonation-system cattle-system cattle-ui-plugin-system fleet-default fleet-local local p-bmq9l p-l8p84 harbor; do
kubectl get namespace $ns -o json | \
jq '.spec = {}' | \
jq 'del(.metadata.finalizers)' | \
kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
done
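If you would rather not hard-code the list, a variant of the same loop can pick up every namespace currently stuck in Terminating automatically. This sketch reuses the exact pipeline from above, so double-check the selection first in case something is still terminating legitimately:

for ns in $(kubectl get namespaces -o json | \
  jq -r '.items[] | select(.status.phase=="Terminating") | .metadata.name'); do
  kubectl get namespace $ns -o json | \
    jq '.spec = {}' | \
    jq 'del(.metadata.finalizers)' | \
    kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
done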
⚠️ Warning: Force-deleting namespaces skips proper cleanup, so anything the finalizers were supposed to tear down may be left behind. Treat this as a last resort for when the responsible controller is gone for good.
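Before pulling the trigger, it is worth seeing what is still left inside the namespace, since those objects are exactly what will be orphaned. One way to enumerate them (a sketch: stuck-ns is again a placeholder) is to walk every namespaced resource type the API server knows about:

kubectl api-resources --verbs=list --namespaced -o name | \
  xargs -n 1 kubectl get --show-kind --ignore-not-found -n stuck-ns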
Stuck namespaces can be frustrating, but with kubectl and jq, you can safely force-delete them. If this happens frequently, consider debugging the root cause (e.g., faulty operators or finalizers).
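One root cause worth ruling out is an aggregated API service that no longer has a backend: the API server then cannot enumerate that resource type, and every namespace deletion hangs on it. A quick check is to look for API services that are not reporting as available:

kubectl get apiservices.apiregistration.k8s.io | grep -v True

Any entry whose AVAILABLE column shows False (for example ServiceNotFound after an uninstalled metrics server or webhook) is a likely culprit; repairing or removing that APIService often lets stuck namespaces finish terminating on their own.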
Have you encountered this issue? Share your experience in the comments!
🚀 Happy Kubernetes troubleshooting!