Confirm the 3-node lab cluster is ready
Everything starts from a cluster-admin workstation that can already talk to the VM-based Kubernetes control plane.
```shell
kubectl get nodes -o wide
kubectl cluster-info
helm version
```
Headlamp provides a currently maintained web UI for Kubernetes clusters. This document covers the official in-cluster installation path: the background on the archived Dashboard, the Helm install, service exposure, token-based access, UI validation, and the recommended production access model.
The Kubernetes Dashboard project was archived because it no longer had enough active maintainers and contributors. For a cluster-access UI, that directly undermines confidence in maintenance, updates, and long-term direction, and it makes the Dashboard a poor default choice for new Kubernetes UI standards.
Headlamp provides a more current path with active documentation, plugin extensibility, in-cluster and desktop options, and RBAC-aware access patterns.
Visual diagram showing the complete Helm installation and access flow.
The install path is simple, but it still helps to show the layers clearly so the relationship between Headlamp, Kubernetes authentication, and RBAC is easy to follow.
The operator uses `kubectl` and `helm` from the control plane VM or a separate admin machine.
The Headlamp chart is pulled from the official chart repository.
Headlamp runs as a Kubernetes Deployment with a ClusterIP Service.
A ServiceAccount token is used so the browser can log in with a Kubernetes identity.
Users reach Headlamp through port-forward and validate what the UI can see.
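Because the flow above relies on the chart's default ClusterIP Service, no values override is strictly needed. If the Service does need to differ from the defaults, the usual Helm pattern is a small values file. A sketch, assuming the chart exposes conventional `service.type` and `service.port` keys (confirm the actual keys with `helm show values headlamp/headlamp`):

```yaml
# values-lab.yaml -- hypothetical override file; key names are assumptions,
# verify them against: helm show values headlamp/headlamp
service:
  type: ClusterIP   # keep ClusterIP for the port-forward access model
  port: 80          # matches the port-forward target used in this document
```

It would be passed at install time with `helm install my-headlamp headlamp/headlamp --namespace kube-system -f values-lab.yaml`.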
This follows the official in-cluster installation path from the Headlamp docs.
```shell
helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
helm repo update
helm install my-headlamp headlamp/headlamp --namespace kube-system
helm status my-headlamp -n kube-system
```
This is the simplest way to reach the UI first, without adding ingress and OIDC.
```shell
kubectl apply -f headlamp-admin.yaml
kubectl create token headlamp-admin -n kube-system
kubectl port-forward -n kube-system service/headlamp 8080:80
```
The goal is not just to open the page. It is to prove that Headlamp can read the cluster resources the lab expects.
Check in the UI:
- nodes
- namespaces
- Pods in kube-system
- Deployments
- Services
- YAML or details view
- events or logs
Port-forward plus admin token is excellent for a lab. For shared use, the cleaner model is ingress and enterprise authentication.
Recommended next stage:
- expose Headlamp with ingress
- enable TLS
- integrate OIDC
- avoid broad long-lived admin tokens
- use namespace-scoped RBAC for teams
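As a sketch of that direction, an Ingress for Headlamp could look like the following. The hostname, ingress class, and TLS secret are placeholders; OIDC would be configured separately through the chart's OIDC settings.

```yaml
# Hypothetical Ingress for Headlamp; hostname, class, and secret are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: headlamp
  namespace: kube-system
spec:
  ingressClassName: nginx            # assumes an nginx ingress controller is installed
  tls:
  - hosts:
    - headlamp.lab.example.com       # placeholder hostname
    secretName: headlamp-tls         # pre-created TLS secret
  rules:
  - host: headlamp.lab.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: headlamp           # Service created by the Helm chart
            port:
              number: 80
```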
This order keeps the setup practical for a 3-node VM cluster and mirrors the official installation flow while covering access and testing clearly.
Verify the control plane and two workers are healthy, and confirm `kubectl` and `helm` are available before starting the install.
Register the official chart source and refresh local metadata so the install can pull the current chart cleanly.
Deploy the chart using the official in-cluster example and confirm the release is visible in Helm.
Check that Headlamp is actually running and the `headlamp` Service exists in `kube-system`.
Apply a `headlamp-admin` ServiceAccount and bind it so the browser has a Kubernetes identity to log in with.
Open the page locally, log in, and confirm that nodes, namespaces, workloads, and resource details are visible.
These are the core building blocks for the install, access, and verification path.
This stays close to the Headlamp docs and is the recommended starting point.
```shell
helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
helm repo update
helm install my-headlamp headlamp/headlamp --namespace kube-system
```
This creates a browser-login identity for Headlamp access.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: headlamp-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: headlamp-admin
  namespace: kube-system
```
These are the minimum commands needed to get the browser session working on a VM-based setup.
```shell
kubectl apply -f headlamp-admin.yaml
kubectl create token headlamp-admin -n kube-system
kubectl port-forward -n kube-system service/headlamp 8080:80
```
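The token printed by `kubectl create token` is a short-lived JWT (the default lifetime is one hour; `--duration` can extend it). When a login that worked earlier suddenly fails, decoding the token payload to check the `exp` claim is a quick diagnostic. A sketch that builds a sample token in place so the decoding step is self-contained; in real use, `TOKEN` would hold the `kubectl create token` output:

```shell
#!/bin/sh
# Demonstration only: construct a sample JWT-shaped token so the decoding
# step below is self-contained. In real use, TOKEN would come from:
#   TOKEN=$(kubectl create token headlamp-admin -n kube-system)
HEADER=$(printf '{"alg":"none"}' | base64 | tr -d '=\n' | tr '+/' '-_')
CLAIMS='{"sub":"system:serviceaccount:kube-system:headlamp-admin","exp":1700000000}'
PAYLOAD=$(printf '%s' "$CLAIMS" | base64 | tr -d '=\n' | tr '+/' '-_')
TOKEN="$HEADER.$PAYLOAD.signature"

# Take the middle dot-separated segment (the claims), restore standard
# base64 (URL-safe characters swapped back, padding re-added), then decode.
SEG=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
DECODED=$(printf '%s' "$SEG" | base64 -d)
echo "$DECODED"   # shows the sub and exp claims, including token expiry
```

If the `exp` timestamp is in the past, generating a fresh token with `kubectl create token` is the fix.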
Use these when the UI does not load or when resources look incomplete.
```shell
helm status my-headlamp -n kube-system
kubectl get deploy,po,svc -n kube-system | grep headlamp
kubectl describe deployment headlamp -n kube-system
kubectl logs deployment/headlamp -n kube-system
```
Using `cluster-admin` through a ServiceAccount token is acceptable for a controlled setup, but it is not the right long-term access pattern for shared environments. The cleaner production direction is ingress plus OIDC plus narrower RBAC.
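As a concrete step toward narrower access, a team identity can be bound to the built-in `view` ClusterRole in just its own namespace instead of holding `cluster-admin` everywhere. A sketch, with the namespace and ServiceAccount names as placeholders:

```yaml
# Hypothetical namespace-scoped, read-only access for one team.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-a-viewer            # placeholder name
  namespace: team-a              # placeholder team namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding                # a RoleBinding limits the grant to one namespace
metadata:
  name: team-a-viewer
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                     # built-in read-only ClusterRole
subjects:
- kind: ServiceAccount
  name: team-a-viewer
  namespace: team-a
```

A token for this ServiceAccount (`kubectl create token team-a-viewer -n team-a`) then logs in to Headlamp with read-only visibility into `team-a` only.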
If these checks pass, the Headlamp in-cluster deployment is working correctly for the lab.
The Helm release exists and the Headlamp Deployment is available in `kube-system`.
`http://localhost:8080` opens while `kubectl port-forward -n kube-system service/headlamp 8080:80` is running.
The UI can show nodes, namespaces, Pods, Deployments, Services, and resource details without empty or error states.
The temporary access model is understood, and ingress plus OIDC is identified as the better production architecture.