How to Force Delete Stuck “Terminating” Kubernetes Namespaces

Introduction

If you’ve ever tried to delete a Kubernetes namespace, only to find it stuck in the Terminating state for days (or even months), you’re not alone. This happens when finalizers—special hooks that ensure proper cleanup—fail to complete, leaving the namespace in a limbo state.

In this guide, I’ll show you how to force delete these stubborn namespaces safely using kubectl and jq.

Why Do Namespaces Get Stuck in “Terminating”?

A namespace enters the Terminating state when:

  • Finalizers block deletion (e.g., Kubernetes waits for a controller to clean up resources).
  • Orphaned resources exist but can’t be removed.
  • API server or etcd issues prevent proper deletion.

In my case, several namespaces (like cattle-system, harbor, and fleet-default) were stuck for over 60 days!
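
Before forcing anything, it helps to see what the namespace itself reports. The finalizers and the status conditions usually name whatever is blocking deletion. Both commands below use standard Namespace fields; the exact output depends on your cluster:

kubectl get namespace <namespace-name> -o jsonpath='{.spec.finalizers}{"\n"}{.metadata.finalizers}{"\n"}'
kubectl get namespace <namespace-name> -o jsonpath='{range .status.conditions[*]}{.type}{": "}{.message}{"\n"}{end}'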

Solution: Force Delete Terminating Namespaces

Method 1: Manual JSON Patch

  1. Export the namespace definition:

     kubectl get namespace <namespace-name> -o json > tmp.json
  2. Edit tmp.json:
    • Empty out the "spec" section so it reads "spec": {} (this drops the "kubernetes" finalizer under spec.finalizers).
    • Delete all finalizers under metadata.
  3. Apply the changes against the finalize subresource:

     kubectl replace --raw "/api/v1/namespaces/<namespace-name>/finalize" -f ./tmp.json
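
For reference, a trimmed tmp.json after editing might look roughly like this (a sketch; your metadata block will contain more fields, and before editing the spec section typically holds "finalizers": ["kubernetes"]):

{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "<namespace-name>"
  },
  "spec": {},
  "status": {
    "phase": "Terminating"
  }
}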

Method 2: Quick One-Liner (Requires jq)

kubectl get namespace <namespace-name> -o json | \
  jq '.spec = {}' | \
  jq 'del(.metadata.finalizers)' | \
  kubectl replace --raw "/api/v1/namespaces/<namespace-name>/finalize" -f -
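
Once the command returns, the namespace usually disappears within seconds. You can confirm it is gone; kubectl should report a NotFound error:

kubectl get namespace <namespace-name>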

Bulk Delete Multiple Stuck Namespaces

If (like me) you have many stuck namespaces, run this loop:

for ns in cattle-fleet-system cattle-global-data cattle-global-nt cattle-impersonation-system cattle-system cattle-ui-plugin-system fleet-default fleet-local local p-bmq9l p-l8p84 harbor; do
  kubectl get namespace $ns -o json | \
    jq '.spec = {}' | \
    jq 'del(.metadata.finalizers)' | \
    kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
done
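
If you would rather not maintain the list by hand, you can build it dynamically. Here is a sketch that uses jq to select every namespace still in the Terminating phase; review the generated list before running anything destructive:

for ns in $(kubectl get namespaces -o json | \
    jq -r '.items[] | select(.status.phase == "Terminating") | .metadata.name'); do
  kubectl get namespace "$ns" -o json | \
    jq '.spec = {} | del(.metadata.finalizers)' | \
    kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
done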

Important Notes

⚠️ Warning: Force-deleting namespaces skips proper cleanup.

  • Orphaned resources may remain in the cluster.
  • Always verify if the namespace truly needs deletion.
  • Investigate why it got stuck (e.g., CRDs, webhooks, or controller issues); the checks below can help.
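
Two checks that often reveal the culprit: listing everything still left in the namespace, and looking for broken aggregated APIs, since an unavailable APIService can prevent the namespace controller from finishing cleanup. Both use standard kubectl; substitute your namespace name:

# Show every namespaced resource that still exists in the namespace
kubectl api-resources --verbs=list --namespaced -o name | \
  xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace-name>

# APIServices that are not Available often block namespace deletion
kubectl get apiservice | grep False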

Conclusion

Stuck namespaces can be frustrating, but with kubectl and jq, you can safely force-delete them. If this happens frequently, consider debugging the root cause (e.g., faulty operators or finalizers).

Have you encountered this issue? Share your experience in the comments!

🚀 Happy Kubernetes troubleshooting!
