vismo helps you find out where your live Kubernetes resources come from, so you spend less time figuring out what is going on and more time fixing it.

- **Fetches any resource the way kubectl does.** Same resource names, same slash syntax, same `-n` flag.
- **Detects who manages each resource.** Helm, ArgoCD, Kustomize, Flux, plain kubectl, Crossplane, Argo Rollouts, Karpenter, Cluster API, OLM, or a mix of these.
- **Traces field ownership.** Want to know who last changed `spec.replicas`? vismo answers by reading the Kubernetes `managedFields`.
- **Blames an entire resource at once.** Instead of asking about one field at a time, get the full picture: every field, every owner, in one go. Think of it as `git blame` for your live Kubernetes objects.
- **Degrades gracefully.** When `managedFields` is empty, vismo falls back to reading the `ownerReferences` chain, so you still get some picture of where the resource came from.
Download the latest binary for your platform and put it somewhere in your PATH.

macOS (Apple Silicon):

```sh
curl -L https://github.com/Veinar/vismo/releases/download/v1.1.0/vismo-darwin-arm64 -o vismo
chmod +x vismo
sudo mv vismo /usr/local/bin/vismo
vismo version
```

Windows:

```sh
curl -L https://github.com/Veinar/vismo/releases/download/v1.1.0/vismo-windows-amd64.exe -o vismo.exe
# Run directly
.\vismo.exe version
# or add it to PATH
vismo version
```

From source:

```sh
git clone https://github.com/Veinar/vismo
cd vismo
make build  # this creates the bin/vismo file
```

Building from source requires Go 1.25 or later. Either way, vismo needs to be able to reach a Kubernetes cluster, either through a kubeconfig file or by running inside the cluster.
```sh
# Fetch a deployment and see who manages it
vismo get deploy/api -n production

# The full resource name works too, just like kubectl
vismo get deployment/api -n production

# Find out who last set the spec.replicas field, and when
vismo get deploy/api -n production --field spec.replicas

# Get the same information as a JSON object (pipeable to jq)
vismo get deploy/api -n production --field spec.replicas -o json

# List all deployments in a namespace
vismo get deployments -n production

# Print the version of vismo
vismo version

# See who owns every single field in a deployment, all at once
vismo blame deploy/api -n production

# Same thing, rendered as an annotated tree with inline comments
vismo blame deploy/api -n production -o annotated

# Blame output as JSON
vismo blame deploy/api -n production -o json

# Only show fields under spec.template (repeat --field as many times as you like)
vismo blame deploy/api -n production --field spec.template --field spec.replicas

# Only show fields owned by helm
vismo blame deploy/api -n production --manager helm

# Skip the header lines (handy when piping output to another tool)
vismo blame deploy/api -n production -q

# See what changed compared to the last kubectl apply
vismo blame deploy/api -n production --diff-applied
```

| Flag | Short | Default | Description |
|---|---|---|---|
| `--namespace` | `-n` | - | The Kubernetes namespace to use |
| `--all-namespaces` | `-A` | `false` | Whether to query across all namespaces |
| `--timeout` | - | `30s` | Timeout for network requests (e.g. `10s`, `1m`) |
| `--field` | - | - | On `get`: the field path to trace ownership, like `spec.replicas`. On `blame`: filter output to fields matching this prefix; repeat the flag to add more prefixes |
| `--output` | `-o` | `text` | Output format for `get`: `text` for human-readable output, `json` for a machine-readable JSON object |
| `--output` | `-o` | `table` | Output format for `blame`: `table` for a clean aligned table, `annotated` for a YAML-like tree with inline comments, `json` for a machine-readable JSON object |
| `--quiet` | `-q` | `false` | Suppress the section headers and blank separator lines |
| `--manager` | - | - | Filter `blame` output to fields owned by this specific manager (e.g. `--manager helm`) |
| `--diff-applied` | - | `false` | Compare the live resource against the last `kubectl apply` and show what has drifted |
| `--kubeconfig` | - | `~/.kube/config` | The path to your kubeconfig file |
| `--log-level` | - | `warn` | How much log output you want: `debug`, `info`, `warn`, or `error` |
```sh
vismo get deploy/api -n production --field spec.replicas
```

```
--- Resource YAML ---
apiVersion: apps/v1
kind: Deployment
...

--- Detected Sources ---
1. Type: Helm
   Name: api
   Version: api-2.1.0
   chart: api-2.1.0
   release-name: api

--- Field Ownership Trace ---
Field: spec.replicas
Final Value: 5

Ownership Trace (based on managedFields):
1. Manager:    'kubectl-scale'
   Operation:  Update
   Timestamp:  2023-10-27T10:00:05Z
   APIVersion: apps/v1
   (This is the final owner of the value)

2. Manager:    'helm'
   Operation:  Update
   Timestamp:  2023-10-27T09:30:12Z
   APIVersion: apps/v1
```
Here is what happened: Helm originally set the replicas field, but then kubectl scale ran later and overwrote it to 5. That is the final value, and kubectl-scale is the final owner. vismo read all of this from managedFields so you did not have to.
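To see the raw data vismo is reading, run `kubectl get deploy api -n production -o yaml --show-managed-fields` (kubectl hides `managedFields` by default). For the trace above, the relevant entries would look roughly like this; the shape follows the Kubernetes API, but the values here are illustrative:

```yaml
metadata:
  managedFields:
    - manager: kubectl-scale       # the later writer: now owns spec.replicas
      operation: Update
      apiVersion: apps/v1
      time: "2023-10-27T10:00:05Z"
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:replicas: {}
    - manager: helm                # set replicas originally, still owns other fields
      operation: Update
      apiVersion: apps/v1
      time: "2023-10-27T09:30:12Z"
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:template: {}
```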
If you want to pipe the result into jq or process it in a script, add -o json:
```sh
vismo get deploy/api -n production --field spec.replicas -o json
```

```json
{
  "resource": {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
      "name": "api",
      "namespace": "production"
    }
  },
  "sources": [
    {
      "type": "Helm",
      "name": "api",
      "details": {
        "chart": "api-2.1.0"
      }
    }
  ],
  "fieldTrace": {
    "fieldPath": "spec.replicas",
    "finalValue": 5,
    "owners": [
      {
        "manager": "kubectl-scale",
        "operation": "Update",
        "time": "2023-10-27T10:00:05Z",
        "apiVersion": "apps/v1",
        "hasTimestamp": true
      },
      {
        "manager": "helm",
        "operation": "Update",
        "time": "2023-10-27T09:30:12Z",
        "apiVersion": "apps/v1",
        "hasTimestamp": true
      }
    ]
  }
}
```

Sometimes you do not know which field to ask about. You just know something is wrong and you want to understand who touched what. That is exactly what blame is for.
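One quick scripting note before diving into blame: the `-o json` output above is easy to slice with jq. A sketch against a trimmed sample of that payload (jq assumed to be installed; the same filter works on the real output):

```sh
# Trimmed sample of the `vismo get ... -o json` payload shown above
json='{"fieldTrace":{"fieldPath":"spec.replicas","finalValue":5,"owners":[{"manager":"kubectl-scale"},{"manager":"helm"}]}}'

# Owners are listed most-recent first, so [0] is the manager that won
echo "$json" | jq -r '.fieldTrace.owners[0].manager'
```

Against a live cluster you would pipe `vismo get deploy/api -n production --field spec.replicas -o json` straight into the same filter.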
```sh
vismo blame deploy/api -n production
```

```
--- Field Blame ---
Age: 62d

FIELD                                           MANAGER         OPERATION  TIMESTAMP
metadata.annotations.kubectl.kubernetes.io/...  kubectl-client  Update     2024-01-10T08:00:00Z
spec.replicas                                   kubectl-scale   Update     2024-01-15T14:22:05Z
spec.selector.matchLabels.app                   helm            Apply      2024-01-08T10:00:00Z
spec.template.metadata.labels.app               helm            Apply      2024-01-08T10:00:00Z
spec.template.spec.containers[name=api].image   helm            Apply      2024-01-08T10:00:00Z
```
Every field in one table. The manager shown is always the one who wrote it most recently, so if two tools have touched the same field you see who won. The resource age is shown once at the top so you have some context without it cluttering every row.
Sometimes, two different controllers or tools will fight over the same field, repeatedly overwriting each other's changes. vismo can spot this happening. When it detects that multiple managers have claimed the same field, it highlights it directly in the blame output:
```
--- Field Blame ---
Age: 62d

FIELD                                           MANAGER         OPERATION  TIMESTAMP
metadata.annotations.kubectl.kubernetes.io/...  kubectl-client  Update     2024-01-10T08:00:00Z
spec.replicas [CONFLICT]                        kubectl-scale   Update     2024-01-15T14:22:05Z
  ...also claimed by helm (Apply)
spec.selector.matchLabels.app                   helm            Apply      2024-01-08T10:00:00Z
spec.template.metadata.labels.app               helm            Apply      2024-01-08T10:00:00Z
spec.template.spec.containers[name=api].image   helm            Apply      2024-01-08T10:00:00Z
```
Here, kubectl-scale is the most recent owner of spec.replicas, but helm has also claimed it. vismo flags this as a conflict, marking the winning manager with [CONFLICT] (usually in red) and listing the other contenders below (usually in yellow). This makes it easy to spot and resolve these common sources of configuration drift.
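Under the hood, a conflict like this corresponds to the same field path appearing under more than one `managedFields` entry, which can happen when managers share ownership of a field through server-side apply. A rough, illustrative sketch of what vismo would be reacting to:

```yaml
metadata:
  managedFields:
    - manager: kubectl-scale
      operation: Update
      time: "2024-01-15T14:22:05Z"
      fieldsV1:
        f:spec:
          f:replicas: {}   # most recent claim: shown as the winner
    - manager: helm
      operation: Apply
      time: "2024-01-08T10:00:00Z"
      fieldsV1:
        f:spec:
          f:replicas: {}   # older claim on the same path: flagged [CONFLICT]
```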
If you only care about a specific part of the resource, pass --field with a path prefix. You can repeat it as many times as you need:
```sh
vismo blame deploy/api -n production --field spec.replicas --field spec.template
```

This is useful when a resource has dozens of fields and you are only interested in a particular section.
If you prefer to see the actual values alongside the owners, use -o annotated:
```sh
vismo blame deploy/api -n production -o annotated
```

```
--- Field Blame (annotated) ---
spec:
  replicas: 5              # kubectl-scale • Update • 2024-01-15T14:22:05Z
  selector:
    matchLabels:
      app: api             # helm • Apply • 2024-01-08T10:00:00Z
  template:
    metadata:
      labels:
        app: api           # helm • Apply • 2024-01-08T10:00:00Z
    spec:
      containers:
        - image: api:v2    # helm • Apply • 2024-01-08T10:00:00Z
```
This view is not valid YAML - it is a human-readable tree meant for eyeballing. Fields without a known owner are shown without any comment. It is a fast way to scan a resource and immediately spot anything that does not look right.
Some older clusters or resources created with certain flags do not have managedFields at all. When that happens vismo tells you clearly and also looks at the ownerReferences chain, so you still get some useful context about where the resource came from:
```
--- Owner Chain (fallback) ---
Pod/api-75f9b-xk2pq
└── ReplicaSet/api-75f9b (apps/v1)
    UID: abc-123-def
```
If your resource was originally created with kubectl apply, you can ask vismo to diff the live state against the snapshot stored in the last-applied-configuration annotation:
```sh
vismo blame deploy/api -n production --diff-applied
```

```
--- Diff vs Last Applied ---
Changed:
  ~ spec.replicas: 1 → 5

Added (in live, not in last-applied):
  + metadata.annotations.some-tool/injected
```
This makes it easy to spot fields that have drifted since the last apply - whether because someone ran kubectl scale, a controller mutated something, or a webhook injected a field.
If the resource is managed with server-side apply or a GitOps tool like Flux or ArgoCD, the annotation will not be there, and vismo will tell you that rather than showing an empty diff.
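The snapshot vismo compares against is the `kubectl.kubernetes.io/last-applied-configuration` annotation, which stores the applied manifest as a JSON string. A sketch of decoding it by hand, using a trimmed inline sample (against a live cluster you would feed the pipeline `kubectl get deploy api -n production -o json` instead):

```sh
# Trimmed sample: a resource carrying the last-applied-configuration annotation
manifest='{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"spec\":{\"replicas\":1}}"}}}'

# The annotation value is itself JSON, so decode it with a second jq pass
echo "$manifest" \
  | jq -r '.metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"]' \
  | jq '.spec.replicas'
```

Comparing that stored value with the live `spec.replicas` is exactly the kind of drift `--diff-applied` reports.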
vismo currently recognises these tools automatically based on labels and annotations:
| Tool | What it looks for |
|---|---|
| Helm | app.kubernetes.io/managed-by: Helm label |
| ArgoCD | argocd.argoproj.io/app-name label or tracking annotation |
| Kustomize | app.kubernetes.io/managed-by: kustomize or Flux Kustomize annotations |
| Flux | kustomize.toolkit.fluxcd.io/name or helm.toolkit.fluxcd.io/name labels |
| kubectl | kubectl.kubernetes.io/last-applied-configuration annotation |
| Crossplane | Any crossplane.io/ label or crossplane.io/composition-resource-name annotation |
| Argo Rollouts | app.kubernetes.io/managed-by: argo-rollouts label or a Rollout ownerReference |
| Karpenter | Any karpenter.sh/ label |
| Cluster API | Any cluster.x-k8s.io/ label |
| OLM | Any operators.coreos.com/ label or olm.operatorgroup annotation |
A single resource can match more than one tool, and vismo will list all of them.
If you get stuck you can run vismo with --log-level=debug to see every step it takes. This includes which API calls it makes, how the GVR is resolved, and which managedFields entries are scanned.
```sh
vismo get deploy/api -n production --log-level=debug
```

vismo works with all the usual short names, including `deploy`, `sts`, `svc`, `po`, `cm`, `secret`, `ing`, `ds`, `rs`, `cj`, `job`, `pvc`, `pv`, `ns`, `sa`, and `no`. Plural forms are accepted too.
