# Kubernetes Namespace Cleanup: Delete Stale Environments Safely
Kubernetes namespace cleanup should start at the workload boundary, not the node bill. Node pools, namespaces, PVCs, HPAs, and ingress rules can each look wasteful until you connect them to deployment patterns and ownership.
The useful output is a Kubernetes cleanup pull request or runbook entry that shows owners, metrics, PVC handling, and rollback commands. Keep the review concrete: right-size requests and limits before removing capacity when workloads still matter, then make the next action visible to the team that owns the risk. That matters because namespace deletion is a broad action: it can remove workloads, secrets, config, jobs, service accounts, PVC references, and operational history in one move.
## Key takeaways
- Treat each cleanup candidate as an owned system with dependencies, not anonymous clutter.
- Use a window long enough to include batch schedules, traffic peaks, and deployment cycles before deciding that “quiet” means “unused.”
- Prefer reversible changes first when it is still plausible that the namespace owns data.
- Leave behind a Kubernetes cleanup pull request or runbook entry that shows owners, metrics, PVC handling, and rollback commands so the next review starts with context.
- Measure the result as lower spend, lower risk, less operational drag, or clearer ownership.
## Start With Namespace Intent
Do not begin by sorting namespaces by age. Begin by sorting them by intent. A six-month-old namespace named preview-1827 may be safer to remove than a three-week-old namespace that runs a quarterly billing export.
The first pass should answer three questions:
- Was this namespace created for a durable product surface, a temporary environment, a migration, a one-off test, or an incident?
- Which team can approve a staged retirement?
- Which objects would make deletion dangerous even if no pods are running now?
Use a compact inventory so the review stays close to the decision.
| Field | Why it matters |
|---|---|
| Namespace pattern | Temporary preview, team sandbox, durable service, migration, or unknown |
| Owner signal | `owner` label, `app.kubernetes.io/part-of`, service catalog entry, Git path, or recent deploy author |
| Live objects | Deployments, StatefulSets, Jobs, CronJobs, Ingresses, Services, NetworkPolicies, Secrets, and ServiceAccounts |
| State attachment | PVCs, database credentials, object storage prefixes, or backup references |
| External entry point | DNS record, ingress host, gateway route, webhook target, or allowlist |
| Proposed action | Keep, label, scale down, quarantine, archive manifests, or delete after review |
Namespaces with no owner are not automatically safe to delete. They are a governance problem first. The cleanup path is to make the unknowns visible, assign a reviewer, and choose a reversible test.
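The intent sort does not need tooling to start: a small shell helper can bucket namespace names into first-guess categories before owners confirm them. The patterns below are illustrative assumptions, not a standard; substitute your own naming conventions.

```shell
# First-pass intent guess from the namespace name alone. The patterns are
# assumptions for illustration; every guess still needs an owner to confirm
# it before any action is taken.
guess_intent() {
  case "$1" in
    preview-*|pr-*)  echo "temporary-preview" ;;
    sandbox-*|dev-*) echo "team-sandbox" ;;
    migrate-*)       echo "migration" ;;
    *)               echo "unknown" ;;
  esac
}

guess_intent preview-1827   # → temporary-preview
```

A namespace that lands in `unknown` is exactly the governance problem described above: it goes to the reviewer queue, not the delete queue.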
## Evidence That A Namespace Is Actually Stale
The useful question is not “has anything run recently?” It is “would deleting this namespace remove a current capability, a recovery path, or data someone expects to exist?”
Collect evidence from the cluster and from the systems around it. Kubernetes can show objects, status, events, labels, and volume claims. It cannot prove that a forgotten DNS record, webhook, runbook, or customer escalation path is irrelevant.
| Check | What to look for | Cleanup signal |
|---|---|---|
| Workloads | Deployments scaled to zero, finished Jobs, suspended CronJobs, or no ReplicaSets with recent pods | Runtime demand is gone or intentionally paused |
| Routes | Ingress hosts, Gateway API routes, Services of type LoadBalancer, DNS records, and synthetic checks | No external path still points at the namespace |
| State | PVCs, StatefulSets, backup annotations, restore runbooks, and data export jobs | Data is disposable, migrated, or retained elsewhere |
| Identity | ServiceAccounts, RoleBindings, image pull secrets, and external IAM mappings | No automation still depends on namespace-local credentials |
| Change history | Git manifests, Helm releases, Argo CD or Flux status, and recent deploy commits | The namespace has no current deployment owner |
| Operations | Alerts, dashboards, runbooks, SLO pages, and incident references | On-call tooling will not break or lose context |
Use several signals together. A namespace with no running pods can still own a PVC. A namespace with no ingress can still run internal jobs. A namespace with no recent deploy can still be a recovery target during an incident.
If the evidence conflicts, label the namespace `cleanup.unweed.dev/status=investigate` (or use your own label convention) along with an owner and a review date. That is still progress because the next review starts with a narrower question.
## Read-Only Namespace Scan
Use a read-only scan to collect the first evidence set. `kubectl get` supports wide output, `--all-namespaces`, label selectors, field selectors, and JSON output, and `kubectl describe` gives detailed resource inspection.
```shell
kubectl get namespaces --show-labels
kubectl get all,ingress,networkpolicy,serviceaccount,rolebinding -n $NAMESPACE
kubectl get pvc,configmap,secret,cronjob,job -n $NAMESPACE
kubectl describe namespace $NAMESPACE
kubectl get events -n $NAMESPACE --sort-by=.lastTimestamp
```
This output proves what the API server currently knows about the namespace. It does not prove that DNS, deployment automation, backup policy, or incident runbooks no longer reference it. Keep those checks in the review before deletion.
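Scan output is easier to act on when it is reduced to flags the review table can hold. As a minimal sketch, the helper below turns the PVC listing from the scan into a coarse state-attachment flag; the function name is an assumption for illustration.

```shell
# Reads "kubectl get pvc -n $NAMESPACE -o name" output on stdin. Any line at
# all means the namespace still owns state and needs the PVC checklist
# before deletion is even discussed.
owns_state() {
  if grep -q .; then echo "state-attached"; else echo "stateless"; fi
}

printf 'persistentvolumeclaim/data-0\n' | owns_state   # → state-attached
```

The same pattern works for the other evidence rows: reduce each scan command to a short signal, then record the signal, not the raw dump, in the review.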
## Choose A Reversible Test
Namespace deletion should rarely be the first move. Use the smallest action that tests whether the namespace is still needed.
- Add explicit owner and expiry labels so the review has accountability.
- Scale Deployments to zero when workloads are stateless and owners agree.
- Suspend CronJobs before deleting them when schedules are rare or business-periodic.
- Remove external routing before removing internal objects when traffic is the main risk.
- Snapshot or export state before removing PVC-backed workloads.
- Archive rendered manifests from Helm, Kustomize, Argo CD, or Flux so recreation is possible.
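One way to keep these actions reversible is to generate them as text first, so the exact commands land in the cleanup pull request for review before anyone runs them. The helper below is a hedged sketch with an assumed command list, not a full runbook.

```shell
# Emit (do not run) the reversible steps for one namespace, so they can be
# pasted into the cleanup PR and reviewed before execution.
plan_reversible_steps() {
  local ns=$1
  echo "kubectl -n $ns scale deployment --all --replicas=0"
  echo "kubectl -n $ns get cronjob -o name   # then patch each with spec.suspend=true"
  echo "kubectl -n $ns get all,pvc,ingress -o yaml > archive/$ns.yaml"
}

plan_reversible_steps preview-1827
```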
Track the cleanup candidate with a simple priority score:
| Factor | Good sign | Bad sign |
|---|---|---|
| Impact | Meaningful spend, risk, toil, noise, or confusion disappears | The item is cheap and low-risk but politically distracting |
| Confidence | Owner, purpose, and dependency path are understood | The team is guessing from age or name |
| Reversibility | Restore, recreate, re-enable, or rollback path exists | Deletion would be the first real test |
| Prevention | A rule can stop recurrence | The same pattern will return next month |
Start with high-impact, high-confidence, reversible candidates. Defer confusing items only if they get an owner and a date; otherwise “defer” becomes another word for keeping waste permanently.
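If the candidate list is long, the four factors can be folded into a single sort key. The sketch below scores each axis 0–3; the weights are assumptions to illustrate the idea, and should be tuned to your own risk appetite.

```shell
# Fold the four review axes into one sort key. Impact and reversibility are
# weighted highest: they decide whether the cleanup is worth doing and
# whether a mistake can be undone.
score_candidate() {
  local impact=$1 confidence=$2 reversibility=$3 prevention=$4
  echo $(( impact * 3 + reversibility * 3 + confidence * 2 + prevention ))
}

score_candidate 3 3 3 3   # strong on every axis → 27
```

Sort descending and work from the top; a low score is a signal to assign an owner and a date, not to delete.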
## Cases That Need Patience
Some cleanup candidates are supposed to look quiet. Do not rush these cases:
- Preview namespaces tied to open pull requests where deploy automation has stalled but the review is still active.
- Incident, migration, or recovery namespaces that are intentionally idle until a narrow event occurs.
- CronJobs that run weekly, monthly, quarter-end, or after upstream file delivery.
- Namespaces with PVCs, even when the pods that used them are gone.
- Namespaces containing secrets or service accounts used by external automation.
For these cases, use a longer observation window, explicit owner approval, and a staged reduction. The point is not to avoid cleanup; it is to avoid making the first proof of dependency an outage.
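"Long enough" can be made concrete: the quiet period must exceed the rarest legitimate schedule the namespace serves. Both inputs below are estimates you supply; the helper is a sketch of the comparison, not a replacement for owner approval.

```shell
# A namespace is only provably quiet once the silent period exceeds its
# rarest legitimate cycle. A quarter-end CronJob needs roughly a 92-day
# observation window, so 30 or even 45 quiet days prove nothing.
quiet_long_enough() {
  local days_quiet=$1 longest_cycle_days=$2
  [ "$days_quiet" -gt "$longest_cycle_days" ]
}

quiet_long_enough 45 92 || echo "keep observing"
```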
## Run The Namespace Review
Run Kubernetes namespace cleanup as a decision review, not an open-ended hygiene project.
- Pick the narrow scope and export the candidate list.
- Add owner, intended purpose, object inventory, route checks, state checks, and risk if wrong.
- Remove obvious false positives such as active environments, current release namespaces, and namespaces managed by platform automation.
- Ask owners to choose keep, label, scale down, quarantine, archive, delete, or investigate.
- Apply the least permanent useful change first and record the watch signal.
- Complete deletion only after the review window covers the namespace’s real schedule.
- Save the evidence and rollback path in a pull request, service catalog note, or runbook.
For broader cleanup planning, use the cleanup library to pair this guide with related notes, and use the main cloud cost optimization checklist to decide whether the cleanup work has enough upside for a focused sprint.
## Prevent Stale Namespaces At Creation Time
Prevention should change the namespace creation path, not just add another cleanup meeting. Temporary namespaces need metadata and limits before they exist.
- Require `owner`, `purpose`, `environment`, and `expires-at` labels or annotations for preview, sandbox, and migration namespaces.
- Apply resource quotas and limit ranges when a namespace is created, not after a cost spike.
- Make GitOps templates include a retirement note for non-production namespaces.
- Block long-lived namespaces without a service catalog entry or owning team.
- Put namespace age, owner, and expiry into the platform dashboard that teams already use.
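As a sketch of what "metadata and limits before they exist" can mean in practice, the fragment below creates a preview namespace with ownership labels and a quota in a single apply. Names, dates, and quota values are placeholders; the `expires-at` date format and the team name are assumptions to enforce with your own admission policy.

```yaml
# Illustrative creation-time guardrails: a preview namespace born with
# ownership metadata and a resource quota. All values are placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: preview-1827
  labels:
    owner: team-checkout        # assumed team name
    purpose: preview
    environment: preview
    expires-at: "2025-03-01"    # assumed date format
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: preview-quota
  namespace: preview-1827
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    persistentvolumeclaims: "2"
```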
The recurring review should be short: sort by impact, pick the unclear items, assign owners, and close the loop on anything nobody claims. If the review keeps producing the same class of candidate, fix the creation path instead of celebrating repeated cleanup.
## Example Decision Record
Use a compact record so the cleanup can be reviewed later without reconstructing the whole investigation.
| Field | Example entry for this cleanup |
|---|---|
| Candidate | Stale namespaces in Kubernetes clusters |
| Why it looked stale | No recent deploy, no running pods, expired preview label, or missing service catalog entry |
| Evidence checked | Objects, ingress routes, PVCs, CronJobs, service accounts, Git manifests, and owner confirmation |
| First reversible move | Label for cleanup review, remove external route, suspend jobs, or scale stateless workloads to zero |
| Watch signal | 404s, failed synthetic checks, missed batch output, deploy automation errors, or owner complaint |
| Final action | Archive manifests and delete only after the review window covers the namespace’s schedule |
| Prevention rule | Require owner, purpose, expiry, resource quota, and retirement path during namespace creation |
This record is intentionally small. If the decision needs a long narrative, the candidate is probably not ready for removal yet. Keep investigating until the owner, evidence, reversible move, and prevention rule are clear.
## FAQ
### How often should teams review Kubernetes namespaces?
Use a window long enough to include batch schedules, traffic peaks, and deployment cycles for the first decision, then set a recurring cadence based on change rate. Fast-moving non-production systems may need monthly review; slower systems can be quarterly if every unclear item has an owner and a review date.
### What is the safest first action for a stale namespace?
The safest first action is usually ownership repair plus evidence collection. After that, remove or pause the highest-confidence external dependency first, such as an unused preview route, before deleting workloads or state.
### What should not be removed quickly?
Do not rush namespaces with PVCs, rare CronJobs, incident tooling, migration state, service accounts used by automation, or external routes that might still receive customer or partner traffic.
### How do you make the decision useful later?
Write the decision as a small operational record: candidate, owner, evidence, chosen action, watch signals, rollback path, final date, and prevention rule. That format helps future engineers, search engines, and AI assistants understand the cleanup without guessing.