
Headlamp UI on a 3-Node Kubernetes VM Cluster

Headlamp provides an actively maintained web UI for Kubernetes clusters. This document covers the official in-cluster installation path: the archived Dashboard background, Helm install, service exposure, token-based access, UI validation, and the production-ready access model that follows.

Why Headlamp now

  • The archived Kubernetes Dashboard repository says the project is no longer maintained and asks users to consider Headlamp instead.
  • Headlamp has active official documentation for in-cluster deployment, plugins, ingress, OIDC, and desktop use.
  • That makes it the better UI direction for new Kubernetes documentation and cluster setup guidance.

Scope

  • Helm-based in-cluster installation into `kube-system`
  • Port-forward access for VM-based labs
  • ServiceAccount token login for controlled lab access
  • UI testing from cluster overview through workloads and logs

Cluster Shape: 3 nodes
One control plane VM and two worker VMs provide the lab target for the in-cluster install.

Install Method: Helm
The official Headlamp docs recommend Helm as the easiest in-cluster installation path.

Namespace: kube-system
The example install is done in `kube-system`, keeping the UI close to core cluster services.

Access Path: port 8080
For the lab, `kubectl port-forward -n kube-system service/headlamp 8080:80` is the fastest route to the browser.
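
If `kubectl` runs on one of the VMs rather than on the local workstation, the forward can be bound to all interfaces so a browser on another machine can reach it. This is an optional, lab-only variant of the command above:

```shell
# Lab-only variant: listen on all interfaces so a browser on another machine
# in the VM network can open http://<vm-ip>:8080. There is no TLS on this
# path, so keep it restricted to the lab network.
kubectl port-forward -n kube-system service/headlamp 8080:80 --address 0.0.0.0
```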

Why Dashboard Was Retired and Why Headlamp Fits Better

The Kubernetes Dashboard project was archived because it no longer had enough active maintainers and contributors. For a cluster-access UI, that directly affects confidence in maintenance, updates, and long-term direction.

Dashboard

What changed upstream

The project lost active maintenance momentum and was archived, which makes it a poor default choice for new Kubernetes UI standards.

Headlamp

What improves with Headlamp

Headlamp provides a more current path with active documentation, plugin extensibility, in-cluster and desktop options, and RBAC-aware access patterns.

Installation Flow Architecture

The complete Helm installation and access flow, shown as a text outline of the original diagram:

Admin Workstation (kubectl + helm)
  → helm repo add → Helm Repository (Headlamp chart)
  → helm install → Kubernetes Cluster, namespace kube-system (API server on port 6443)

Inside the cluster:
  • Headlamp Deployment: Pod 1 Running, Pod 2 Running
  • Service: headlamp
  • Authentication: ServiceAccount headlamp-admin, ClusterRoleBinding → cluster-admin
  • Headlamp reads the Kubernetes API

Browser access:
  kubectl port-forward service/headlamp 8080:80 → http://localhost:8080

Reference Architecture Stack

The install path is simple, but it still helps to show the layers clearly so the relationship between Headlamp, Kubernetes authentication, and RBAC is easy to follow.

1. Admin Workstation

The operator uses `kubectl` and `helm` from the control plane VM or a separate admin machine.

  • kubectl configured
  • helm available
  • cluster-admin for setup

2. Helm Repository

The Headlamp chart is pulled from the official chart repository.

  • `helm repo add headlamp`
  • `helm repo update`
  • versioned chart install
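
The "versioned chart install" point can be made concrete: `helm search repo` lists the available chart versions, and `--version` pins the install. The version argument below is a placeholder, not a real release number:

```shell
# List available chart versions after the repo has been added.
helm search repo headlamp --versions

# Pin the install to a specific chart version (placeholder shown;
# substitute a version reported by the search above).
helm install my-headlamp headlamp/headlamp \
  --namespace kube-system \
  --version <chart-version>
```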

3. In-Cluster Workload

Headlamp runs as a Kubernetes Deployment with a ClusterIP Service.

  • Deployment
  • Pod
  • Service `headlamp`

4. Access Authentication

A ServiceAccount token is used so the browser can log in with a Kubernetes identity.

  • ServiceAccount
  • ClusterRoleBinding
  • short-lived token
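
The "short-lived token" point maps to `kubectl create token`, which issues a bound, expiring token for the ServiceAccount. The `--duration` flag (available in current kubectl releases) controls the lifetime:

```shell
# Issue a short-lived token for the headlamp-admin ServiceAccount.
# The default lifetime is one hour; --duration adjusts it within API limits.
kubectl create token headlamp-admin -n kube-system --duration 1h
```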

5. Browser Session

Users reach Headlamp through port-forward and validate what the UI can see.

  • localhost access
  • cluster overview
  • workload inspection

Installation Journey

Choose a view to focus on preparation, installation, access, validation, or the production follow-up.

Confirm the 3-node lab cluster is ready

Prepare
kubectl get nodes → 3 nodes Ready → kubectl cluster-info → helm version

Everything starts from a cluster-admin workstation that can already talk to the VM-based Kubernetes control plane.

kubectl get nodes -o wide
kubectl cluster-info
helm version

Add the Headlamp repo and install the chart

Install
helm repo add headlamp → helm repo update → helm install my-headlamp → Deployment + Service in kube-system

This follows the official in-cluster installation path from the Headlamp docs.

helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
helm repo update
helm install my-headlamp headlamp/headlamp --namespace kube-system
helm status my-headlamp -n kube-system

Create a lab login identity and expose the service

Access
ServiceAccount headlamp-admin → ClusterRoleBinding → kubectl create token → kubectl port-forward service/headlamp

This is the simplest way to reach the UI first, without adding ingress and OIDC.

kubectl apply -f headlamp-admin.yaml
kubectl create token headlamp-admin -n kube-system
kubectl port-forward -n kube-system service/headlamp 8080:80

Validate what the UI can actually see

Validation
Open localhost:8080 → Log in with token → Open nodes and namespaces → Inspect workloads and logs

The goal is not just to open the page. It is to prove that Headlamp can read the cluster resources the lab expects.

Check in the UI:
- nodes
- namespaces
- Pods in kube-system
- Deployments
- Services
- YAML or details view
- events or logs
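
The same visibility checks can be run from the CLI to cross-verify what the UI shows. If a resource appears here but not in Headlamp, the problem is in the UI session or token, not in the cluster:

```shell
# CLI cross-check for the UI validation list above.
kubectl get nodes
kubectl get namespaces
kubectl get pods -n kube-system
kubectl get deployments,services -n kube-system
kubectl get events -n kube-system --sort-by=.lastTimestamp | tail -n 10
```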

Move from lab mode to shared team mode

Production Next
Ingress → TLS → OIDC → namespace-scoped RBAC

Port-forward plus admin token is excellent for a lab. For shared use, the cleaner model is ingress and enterprise authentication.

Recommended next stage:
- expose Headlamp with ingress
- enable TLS
- integrate OIDC
- avoid broad long-lived admin tokens
- use namespace-scoped RBAC for teams
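
As a starting point for the ingress step, the chart's configurable values can be inspected before writing an override. The exact keys (such as an `ingress` section) depend on the chart version, so treat the upgrade below as a hypothetical sketch and confirm the key names against the `helm show values` output; `headlamp.example.com` is a placeholder hostname:

```shell
# Inspect the chart's supported values before enabling ingress.
helm show values headlamp/headlamp

# Hypothetical upgrade sketch: confirm the key names against the output
# above before running it; the hostname is a placeholder.
helm upgrade my-headlamp headlamp/headlamp \
  --namespace kube-system \
  --set ingress.enabled=true \
  --set "ingress.hosts[0].host=headlamp.example.com"
```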

Lab Build Sequence

This order keeps the setup practical for a 3-node VM cluster and mirrors the official installation flow while covering access and testing clearly.

1. Check nodes and admin tools

Verify the control plane and two workers are healthy, and confirm `kubectl` and `helm` are available before starting the install.

2. Add the Headlamp Helm repo

Register the official chart source and refresh local metadata so the install can pull the current chart cleanly.

3. Install Headlamp into kube-system

Deploy the chart using the official in-cluster example and confirm the release is visible in Helm.

4. Verify Deployment, Pod, and Service

Check that Headlamp is actually running and the `headlamp` Service exists in `kube-system`.

5. Create the lab access identity

Apply a `headlamp-admin` ServiceAccount and bind it so the browser has a Kubernetes identity to log in with.

6. Port-forward and test the UI

Open the page locally, log in, and confirm that nodes, namespaces, workloads, and resource details are visible.

Commands and Manifests

These are the core building blocks for the install, access, and verification path.

Official Helm install

This stays close to the Headlamp docs and is the recommended starting point.

helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
helm repo update
helm install my-headlamp headlamp/headlamp --namespace kube-system

ServiceAccount manifest

This creates a browser-login identity for Headlamp access.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: headlamp-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: headlamp-admin
  namespace: kube-system

Access and token commands

These are the minimum commands needed to get the browser session working on a VM-based setup.

kubectl apply -f headlamp-admin.yaml
kubectl create token headlamp-admin -n kube-system
kubectl port-forward -n kube-system service/headlamp 8080:80

Verification commands

Use these when the UI does not load or when resources look incomplete.

helm status my-headlamp -n kube-system
kubectl get deploy,po,svc -n kube-system | grep headlamp
kubectl describe deployment headlamp -n kube-system
kubectl logs deployment/headlamp -n kube-system

Important lab note

Using `cluster-admin` through a ServiceAccount token is acceptable for a controlled setup, but it is not the right long-term access pattern for shared environments. The cleaner production direction is ingress plus OIDC plus narrower RBAC.
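
As a sketch of that narrower pattern, a team identity can be bound to the built-in read-only `view` ClusterRole inside a single namespace instead of `cluster-admin` cluster-wide. The names here (`team-a`, `team-a-viewer`) are illustrative, not part of the lab setup:

```yaml
# Illustrative namespace-scoped alternative to the cluster-admin binding.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-a-viewer
  namespace: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-viewer
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view        # built-in read-only role, scoped to team-a by this RoleBinding
subjects:
- kind: ServiceAccount
  name: team-a-viewer
  namespace: team-a
```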

Testing and Expected Outcomes

If these checks pass, the Headlamp in-cluster deployment is working correctly for the lab.

Install

Release is healthy

The Helm release exists and the Headlamp Deployment is available in `kube-system`.

Access

Port-forward works

`http://localhost:8080` opens while `kubectl port-forward -n kube-system service/headlamp 8080:80` is running.

Visibility

Cluster resources load

The UI can show nodes, namespaces, Pods, Deployments, Services, and resource details without empty or error states.

Direction

Production next step is clear

The temporary access model is understood, and ingress plus OIDC is identified as the better production architecture.