Kubernetes ConfigMap Cleanup: Prune Stale Configuration
Kubernetes ConfigMap cleanup is tricky because configuration can be stale in several different ways. A ConfigMap may be completely unused. It may be mounted by a Deployment that no longer receives traffic. It may contain old keys that no container reads anymore. It may also be kept intentionally because a rollback, CronJob, migration, or emergency procedure still expects the old values.
The cleanup decision should focus on references and behavior, not age. A ConfigMap created two years ago may still be the source of a stable production setting. A ConfigMap created last week may be leftover from a failed preview environment. The practical outcome is a reviewed change that removes unused ConfigMaps or keys while preserving the configuration that active workloads, scheduled jobs, and rollbacks still need.
This note is for platform teams and service owners who want fewer stale manifests, fewer confusing settings, and safer Kubernetes changes without treating every old ConfigMap as disposable.
Key Takeaways
- Separate unused ConfigMaps from unused keys inside still-active ConfigMaps.
- Check references from env vars, envFrom, volumes, projected volumes, Jobs, CronJobs, and Git manifests.
- Do not rush ConfigMaps used by rollbacks, migrations, feature flags, or scheduled workloads.
- Prefer key-level cleanup when the ConfigMap is still mounted by active workloads.
- Prevent recurrence by making configuration ownership, schema, and expiry part of the manifest review.
Classify the ConfigMap Before Changing It
ConfigMap cleanup gets safer when you name the type of staleness. The right action differs depending on whether the whole object is unused, only one key is obsolete, or the live object drifted from Git.
| ConfigMap state | Evidence to collect | Likely action |
|---|---|---|
| Unreferenced object | No pod template, Job, CronJob, Helm chart, Kustomize overlay, or operator points to it | Remove the manifest after owner review |
| Partially stale keys | Workload uses the ConfigMap, but code or startup logs never read specific keys | Remove keys in a small application change |
| Replaced config | New ConfigMap exists and deployments reference the replacement | Delete the old object after rollback window |
| Environment leftover | Namespace, release, or preview app is retired | Remove with the environment cleanup |
| Drifted live object | Cluster value differs from Git or release artifact | Reconcile through Git before cleanup |
| Shared platform config | Several services import the same values | Split ownership or document consumers before pruning |
This classification stops the review from becoming a generic “old things” list. It also helps reviewers choose the right blast radius. Removing a key from a shared ConfigMap can be riskier than removing an entire ConfigMap from an abandoned preview namespace.
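For the drifted-object row, kubectl diff gives a read-only comparison between the live object and the manifest in Git: exit code 0 means no drift, 1 means a difference was found. The path below is a placeholder for your repo layout.
kubectl diff -f manifests/checkout/app-config.yaml  # placeholder path in your Git repo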
Find Real References
The first evidence check is whether any workload template references the ConfigMap. Search the live cluster and the source manifests. The cluster tells you what is running now. Git tells you what the next deployment may recreate.
| Reference path | What to look for | Cleanup risk |
|---|---|---|
| envFrom.configMapRef | Containers importing every key | Removing one key can break startup indirectly |
| env.valueFrom.configMapKeyRef | Containers reading a specific key | Key-level cleanup needs code or manifest review |
| Volume mounts | Config files mounted as files | Apps may read the file only after a reload or rare request |
| Projected volumes | ConfigMap mixed with Secret or Downward API data | Ownership can be split across teams |
| Jobs and CronJobs | Scheduled commands that run rarely | Average pod activity can miss the dependency |
| Helm or Kustomize templates | ConfigMap recreated from release tooling | Manual live deletion will not stick |
| Operators | Custom controllers generating or consuming config | The owner may be the operator, not the application team |
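As a sketch for the first row, a jsonpath query can list which Deployments import ConfigMaps through envFrom; NAMESPACE is a placeholder, and volume or projected-volume references still need the checks below.
kubectl get deploy -n $NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].envFrom[*].configMapRef.name}{"\n"}{end}'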
Do not stop at kubectl get configmap. Kubernetes does not keep a simple “last read” timestamp for each ConfigMap key. You need to connect object references, workload behavior, source manifests, and owner knowledge.
Read-Only Kubectl Checks
Use read-only checks to map a ConfigMap to workloads before editing manifests. These commands are deliberately inspect-only.
kubectl get configmap -n $NAMESPACE --show-labels  # list objects with owner or release labels
kubectl describe configmap -n $NAMESPACE $CONFIGMAP_NAME  # show keys and current values
kubectl get deploy,statefulset,daemonset,job,cronjob -n $NAMESPACE -o yaml  # dump workload templates for reference search
kubectl get pods -n $NAMESPACE -o wide  # see which pods are actually running
kubectl describe pod -n $NAMESPACE $POD_NAME  # inspect mounted volumes and env sources
The workload YAML and pod description help you find envFrom, configMapKeyRef, volume, and projected-volume references. The output does not prove whether an application still reads a key at runtime. For key-level cleanup, compare the manifest references with application code, startup logs, release notes, and the rollback plan.
For a large namespace, export the YAML once and search locally for the ConfigMap name. Keep the review artifact with the pull request so the owner can see exactly which references were checked.
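A minimal sketch of that export-and-search pass, assuming NAMESPACE and CONFIGMAP_NAME are set in your shell:
kubectl get deploy,statefulset,daemonset,job,cronjob -n $NAMESPACE -o yaml > workloads.yaml
grep -n -B 2 "$CONFIGMAP_NAME" workloads.yaml  # two lines of context show the referencing field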
Key-Level Cleanup Needs Application Evidence
Many stale ConfigMap problems live inside an active object. A service may still mount app-config, but only five of the twenty keys are read. Removing the whole ConfigMap would be wrong; leaving all twenty keys forever makes future changes harder because nobody knows which values matter.
Use application-specific evidence for key cleanup:
- Search code for environment variable names, config file keys, and framework binding names.
- Check startup logs for warnings about ignored, unknown, or deprecated settings.
- Compare the ConfigMap with the current sample config, chart values, or typed configuration schema.
- Look at recent deployment diffs to see when the replacement key was introduced.
- Confirm whether rollback releases still expect the old key.
If the app has a typed configuration schema, update the schema and the ConfigMap in the same review. If it does not, add a short note in the manifest explaining why a legacy key remains. That note is not a substitute for cleanup, but it prevents the next reviewer from repeating the same investigation.
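For the code search step, a minimal sketch using the key from the decision record later in this note; the repository paths are illustrative, and zero matches is evidence rather than proof, so still confirm startup logs and rollback releases.
grep -rn "LEGACY_TAX_ENDPOINT" src/ charts/  # key from the decision record; paths are illustrative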
Cases That Should Not Be Rushed
Some ConfigMaps deserve a slower path even when they look unused.
- Rollback configuration for the previous production release, especially when deployments can roll back without rebuilding manifests.
- Migration settings used by one-time Jobs that are still part of a recovery or data repair runbook.
- CronJob configuration for monthly, quarterly, or event-driven tasks.
- Feature flag defaults that are read only when a remote flag service is unavailable.
- Config generated by operators, service meshes, admission controllers, or platform automation.
- Shared CA bundles, proxy settings, or endpoint maps that many workloads import indirectly.
For these cases, the first move is usually to document the owner and review date. Then remove the dependency from the workload or runbook before deleting the ConfigMap. Deleting first only proves the dependency by causing a failure.
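A sketch of that first move, reusing the owner label and expiry annotation keys from the prevention table below. If the object is managed from Git, make the same change in the manifest so the next sync does not revert it.
kubectl label configmap $CONFIGMAP_NAME -n $NAMESPACE platform.example.com/owner=checkout
kubectl annotate configmap $CONFIGMAP_NAME -n $NAMESPACE platform.example.com/expires=2026-06-30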
Pick the Cleanup Move
Choose the move that matches the evidence. Avoid turning a key-level cleanup into an object deletion because the object name looks old.
| Evidence | Safer first move | Final move |
|---|---|---|
| No live or Git references | Open a pull request removing the manifest | Delete the object after sync and review |
| Active workload references only some keys | Remove unused keys in app and manifest together | Add schema or config validation |
| Replacement ConfigMap is already deployed | Keep old object through rollback window | Remove old object after release settles |
| ConfigMap belongs to retired namespace | Include it in namespace retirement record | Remove with namespace cleanup |
| Ownership is unclear | Assign owner and current purpose | Revisit on a dated review |
| Operator manages the object | Update operator source or custom resource | Let reconciliation remove generated config |
The cleanup pull request should include the candidate, reference evidence, owner approval, rollout plan, rollback plan, and watch signals. Watch startup failures, crash loops, missing file errors, and application-specific config validation logs after the change.
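Read-only commands that cover those watch signals after the change ships; DEPLOYMENT is a placeholder for the affected workload.
kubectl rollout status deploy/$DEPLOYMENT -n $NAMESPACE
kubectl get events -n $NAMESPACE --sort-by=.lastTimestamp
kubectl logs deploy/$DEPLOYMENT -n $NAMESPACE --since=30m | grep -iE "config|missing|unknown"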
Prevent Stale ConfigMaps From Returning
Prevention should move into the creation and review path. ConfigMaps become stale when teams can add keys without naming an owner, purpose, lifecycle, or schema.
Require these fields or conventions for new ConfigMaps:
| Prevention rule | Example | Why it helps |
|---|---|---|
| Owner label | platform.example.com/owner: search | Gives reviewers a team to ask |
| Purpose annotation | platform.example.com/purpose: ranking-service-runtime-config | Distinguishes active runtime config from leftover data |
| Lifecycle annotation | platform.example.com/expires: 2026-06-30 | Forces temporary config into review |
| Source of truth | platform.example.com/source: helm-values | Tells reviewers where cleanup must happen |
| Key naming convention | RANKING_CACHE_TTL_SECONDS | Makes code search and schema review easier |
| Schema or sample config | config.schema.json or typed settings module | Lets CI catch unknown and removed keys |
For temporary environments, generate ConfigMaps from the same lifecycle controller that creates the namespace, route, and workload. For production services, require configuration changes to happen with the application change that consumes them. That prevents “preparing config” from becoming permanent clutter.
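A minimal sketch of a new ConfigMap that follows those rules, reusing the example values from the table. The name and data key are illustrative, and in a GitOps setup this manifest belongs in the repository rather than in a direct apply.
kubectl apply -n $NAMESPACE -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ranking-service-runtime-config  # illustrative name
  labels:
    platform.example.com/owner: search
  annotations:
    platform.example.com/purpose: ranking-service-runtime-config
    platform.example.com/source: helm-values
    platform.example.com/expires: "2026-06-30"
data:
  RANKING_CACHE_TTL_SECONDS: "300"  # illustrative key
EOF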
Decision Record
Use a short record so future reviewers can understand why a ConfigMap or key disappeared.
| Field | Example entry |
|---|---|
| Candidate | checkout-runtime-config key LEGACY_TAX_ENDPOINT |
| Why it looked stale | Replacement key shipped three releases ago |
| Evidence checked | Deployment YAML, pod description, code search, startup logs, rollback release notes |
| Owner decision | Checkout approved key removal after one release window |
| First move | Remove code fallback and unused key in the same pull request |
| Watch signal | Config validation errors, crash loops, checkout tax API failures |
| Final action | Remove key from Helm values and live ConfigMap through GitOps sync |
| Prevention rule | New config keys require schema entry and owner in review |
This record is small enough for a pull request description. It is also more useful than a cleanup spreadsheet because it ties the decision to the exact workload and key.
FAQ
How do I know whether a ConfigMap is unused?
Check live workload references, source manifests, Helm or Kustomize templates, Jobs, CronJobs, pod descriptions, application code, and owner knowledge. Kubernetes does not give you a reliable per-key last-used timestamp, so use several references together.
Is it safe to delete a ConfigMap with no mounted pods?
Not automatically. It may be referenced by a CronJob, a suspended Job, a rollback release, an operator, or the next GitOps sync. Confirm live and source references before deletion.
Should I remove old keys or create a new ConfigMap?
Remove old keys when the current ConfigMap has a clear owner and schema. Create a new ConfigMap when ownership changed, the old object is shared by unrelated services, or a clean replacement makes rollback and review easier.
What should be in the cleanup pull request?
Include the ConfigMap name, namespace, keys removed, reference checks, owner approval, rollout and rollback plan, and watch signals. For broader Kubernetes cleanup planning, use the cleanup library to connect this work with namespace, PVC, and ingress reviews.