AWS EKS Cleanup: Find Over-Provisioned Node Groups

AWS EKS cleanup should begin with the node group that no longer matches its workloads. The expensive part may be instance size, pod requests, daemonset overhead, or an autoscaling rule that was tuned for traffic that no longer exists.

The useful output is a Kubernetes cleanup pull request or runbook entry that shows owners, metrics, PVC handling, and rollback commands. Keep the review concrete: right-size requests and limits before removing capacity when workloads still matter, then make the next action visible to the team that owns the risk. Concreteness matters because cleanup can still go wrong, most often by under-sizing a cluster that has bursty workloads.

Key takeaways

  • Treat each cleanup candidate as an owned system with dependencies, not anonymous clutter.
  • Use a window long enough to include batch schedules, traffic peaks, and deployment cycles before deciding that “quiet” means “unused.”
  • Prefer reversible changes first while under-sizing a cluster with bursty workloads is still a plausible failure mode.
  • Leave behind a Kubernetes cleanup pull request or runbook entry that shows owners, metrics, PVC handling, and rollback commands so the next review starts with context.
  • Measure the result as lower spend, lower risk, less operational drag, or clearer ownership.

Map the Workload Boundary

Start with one EKS cluster and node group family where workload requests, daemonsets, autoscaling settings, and instance choices are visible together. The best cleanup scope is small enough that owners can answer quickly but wide enough to include the attachments that make removal risky.
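
To enumerate that scope quickly, the AWS CLI can list the node groups with their instance types and scaling settings. A minimal sketch, assuming the CLI is configured for the right account and that `my-cluster` and `general-purpose` stand in for real names:

# List every managed node group in the cluster
aws eks list-nodegroups --cluster-name my-cluster

# Show instance types, scaling config, and labels for one node group
aws eks describe-nodegroup --cluster-name my-cluster \
  --nodegroup-name general-purpose \
  --query "nodegroup.{types:instanceTypes,scaling:scalingConfig,labels:labels}"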

For each candidate, record these fields and why they matter:

  • Owner: cleanup needs a person or team that can accept the decision.
  • Current purpose: a short reason to keep the item, written in present tense.
  • Last meaningful use: namespace age, pod activity, volume mounts, ingress traffic, and owner labels.
  • Dependency evidence: cluster metrics, events, manifests, Git history, and workload owners.
  • Risk if wrong: the outage, data loss, access failure, or rollback gap the review must avoid.
  • Next action: keep, reduce, archive, disable, remove, or investigate.

Do not make the inventory larger than the decision. A short list with owners and evidence beats a perfect spreadsheet that nobody is willing to act on.

Cluster Evidence to Trust

The useful question is not “how old is it?” It is “what would break, become harder to recover, or lose accountability if this disappeared?” For AWS EKS cleanup, collect enough evidence to answer that without relying on naming conventions.

Run these checks, noting what to look for and which signal favors cleanup:

  • Workload demand: pod requests, actual usage, pending pods, disruption budgets, and scheduled jobs. Cleanup signal: allocated capacity is consistently above real demand.
  • Namespace ownership: labels, manifests, team docs, service catalog, and recent deploys. Cleanup signal: no owner can justify the namespace or workload.
  • Persistent state: PVCs, backups, database links, and restore expectations. Cleanup signal: data is disposable or already retained elsewhere.
  • Traffic and autoscaling: ingress requests, HPA behavior, node pressure, and burst patterns. Cleanup signal: reducing capacity will not break expected peaks.

Use several signals together. Activity can miss monthly jobs and incident-only paths. Ownership can be stale. Cost can distract from security or recovery risk. The strongest case combines runtime data, dependency checks, owner review, and a rollback plan.

If the evidence conflicts, label the item “investigate” with a named owner and review date. That is still progress because the next review starts with a narrower question.
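
That label is easier to honor when it lives where the work happens. A sketch using hypothetical annotation keys (`cleanup/status`, `cleanup/owner`, `cleanup/review-by`) and a placeholder namespace name; adapt the keys to your own convention:

# Record the open question on the namespace itself
kubectl annotate namespace legacy-batch \
  cleanup/status=investigate \
  cleanup/owner=data-eng \
  cleanup/review-by=2025-09-01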

Example Evidence Check

Use this as a quick cluster scan, then compare requests, limits, PVCs, HPAs, and scheduled jobs before changing capacity.

# Node inventory: status, roles, versions, and addresses
kubectl get nodes -o wide
# Live CPU and memory usage per node (requires metrics-server)
kubectl top nodes
# Namespaces with labels, including any owner labels
kubectl get namespaces --show-labels
# Persistent volume claims whose data outlives the pods
kubectl get pvc --all-namespaces

Treat the output as a candidate list. Do not pipe these checks into delete commands; add owner review, dependency checks, and a rollback path first.
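
To compare requested capacity against live usage for one suspect namespace, a sketch assuming metrics-server is installed and `payments` stands in for a real namespace:

# Live usage per pod
kubectl top pods -n payments

# Declared requests per pod, for comparison with the usage above
kubectl get pods -n payments \
  -o custom-columns='NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory'

A large, persistent gap between the two outputs is the workload demand signal from the evidence checklist above.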

Right-Size Before You Delete

Use the least permanent move that proves the decision. In AWS EKS cleanup, removal is only one possible outcome; reducing size, narrowing permissions, shortening retention, archiving, or disabling a trigger may produce the same benefit with less risk.

  • Right-size requests and limits before removing capacity when workloads still matter.
  • Quarantine or scale down stale namespaces before deleting PVC-backed resources.
  • Drain and resize node pools with workload scheduling and disruption budgets visible; these moves are sketched below.
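
A sketch of these moves in order of increasing permanence, using hypothetical names (an `api` deployment in `payments`, node group `general-purpose` in `my-cluster`); the real values and disruption budgets must come from the owner review:

# Reversible: shrink one workload's requests and limits in place
kubectl set resources deployment/api -n payments \
  --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi

# Reversible: lower the node group's scaling bounds instead of deleting it
aws eks update-nodegroup-config --cluster-name my-cluster \
  --nodegroup-name general-purpose \
  --scaling-config minSize=1,maxSize=4,desiredSize=2

# Staged: move pods off a node before resizing, respecting disruption budgets
kubectl drain ip-10-0-12-34.ec2.internal --ignore-daemonsets --delete-emptydir-data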

Track the cleanup candidate with a simple priority score:

  • Impact. Good sign: meaningful spend, risk, toil, noise, or confusion disappears. Bad sign: the item is cheap and low-risk but politically distracting.
  • Confidence. Good sign: owner, purpose, and dependency path are understood. Bad sign: the team is guessing from age or name.
  • Reversibility. Good sign: a restore, recreate, re-enable, or rollback path exists. Bad sign: deletion would be the first real test.
  • Prevention. Good sign: a rule can stop recurrence. Bad sign: the same pattern will return next month.

Start with high-impact, high-confidence, reversible candidates. Defer confusing items only if they get an owner and a date; otherwise “defer” becomes another word for keeping waste permanently.

Kubernetes Cases That Need Patience

Some cleanup candidates are supposed to look quiet. Do not rush these cases:

  • CronJobs, batch workloads, and month-end processing that make average utilization misleading.
  • PVCs whose data is more important than the pod that mounted them.
  • Autoscaling settings tuned for bursty workloads or incident response.

For these cases, use a longer observation window, explicit owner approval, and a staged reduction. The point is not to avoid cleanup; it is to avoid making the first proof of dependency an outage.
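
A quick check for the scheduled work that averages hide; the LAST SCHEDULE column shows whether a quiet CronJob actually fired recently:

# Scheduled workloads across the cluster, with last run times
kubectl get cronjobs --all-namespaces
# Recent job runs, oldest first
kubectl get jobs --all-namespaces --sort-by=.status.startTime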

Run the Cluster Review

Run AWS EKS cleanup as a decision review, not an open-ended hygiene project.

  1. Pick the narrow scope and export the candidate list (a starting export is sketched after these steps).
  2. Add owner, current purpose, last-use evidence, dependency checks, and risk if wrong.
  3. Remove obvious false positives, then ask owners to choose keep, reduce, archive, disable, remove, or investigate.
  4. Apply the least permanent useful change first.
  5. Watch the signals that would reveal a bad decision.
  6. Complete the final removal only after the review window closes.
  7. Save a Kubernetes cleanup pull request or runbook entry that shows owners, metrics, PVC handling, and rollback commands.
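
For step 1, a minimal candidate export that groups nodes by node group and instance type; `eks.amazonaws.com/nodegroup` is the label EKS sets on managed node group nodes, and `node.kubernetes.io/instance-type` is the standard instance-type label:

# Every node with its node group and instance type as extra columns
kubectl get nodes -L eks.amazonaws.com/nodegroup,node.kubernetes.io/instance-type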

For broader cleanup planning, use the cleanup library to pair this guide with related notes, and use the main cloud cost optimization checklist to decide whether the cleanup work has enough upside for a focused sprint.

Stop Cluster Waste Returning

Prevention should change the creation path, not just the cleanup path. For AWS EKS cleanup, the useful prevention fields are owner labels, expiry annotations, resource quotas, and regular namespace review. Make those fields part of normal creation and review.

  • Require owner labels, expiry annotations, and resource quotas for temporary namespaces (see the sketch after this list).
  • Review node pool waste together with workload requests and autoscaling behavior.
  • Keep namespace retirement steps in Git so cleanup is reviewable.
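
A sketch of the first rule applied at creation time, with hypothetical label and annotation keys (`owner`, `cleanup/expiry`) and placeholder names; enforcement works best in the creation path itself rather than in ad-hoc commands:

# Ownership and expiry recorded on the namespace itself
kubectl label namespace sandbox-team-a owner=team-a
kubectl annotate namespace sandbox-team-a cleanup/expiry=2025-12-31

# A quota so a temporary namespace cannot quietly grow into node pressure
kubectl create quota sandbox-quota -n sandbox-team-a \
  --hard=requests.cpu=4,requests.memory=8Gi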

The recurring review should be short: sort by impact, pick the unclear items, assign owners, and close the loop on anything nobody claims. If the review keeps producing the same class of candidate, fix the creation path instead of celebrating repeated cleanup.

Example Decision Record

Use a compact record so the cleanup can be reviewed later without reconstructing the whole investigation.

An example entry for this cleanup:

  • Candidate: oversized EKS node groups in Kubernetes clusters.
  • Why it looked stale: low recent activity, unclear owner, or no current consumer after the first review.
  • Evidence checked: workload demand, namespace ownership, and owner confirmation.
  • First reversible move: right-size requests and limits before removing capacity when workloads still matter.
  • Watch signal: the metric, alert, job, route, query, or owner complaint that would show the cleanup was wrong.
  • Final action: keep, reduce, archive, disable, or remove after a window long enough to include batch schedules, traffic peaks, and deployment cycles.
  • Prevention rule: require owner labels, expiry annotations, and resource quotas for temporary namespaces.

This record is intentionally small. If the decision needs a long narrative, the candidate is probably not ready for removal yet. Keep investigating until the owner, evidence, reversible move, and prevention rule are clear.

FAQ

How often should teams do AWS EKS cleanup?

Use a window long enough to include batch schedules, traffic peaks, and deployment cycles for the first decision, then set a recurring cadence based on change rate. Fast-moving non-production systems may need monthly review; slower systems can be quarterly if every unclear item has an owner and a review date.

What is the safest first action?

The safest first action is usually ownership repair plus evidence collection. After that, right-size requests and limits before removing capacity when workloads still matter. That creates a visible test before permanent deletion.

What should not be removed quickly?

Do not rush anything connected to CronJobs, batch workloads, and month-end processing that make average utilization misleading. Also slow down when the cleanup affects recovery, compliance, customer-specific behavior, rare schedules, or security response.

How do you make the decision useful later?

Write the decision as a small operational record: candidate, owner, evidence, chosen action, watch signals, rollback path, final date, and prevention rule. That format helps future engineers, search engines, and AI assistants understand the cleanup without guessing.