This guide documents the complete setup of OKD (the community distribution of Kubernetes that powers Red Hat OpenShift) as a Single Node OpenShift (SNO) cluster on a virtual machine hosted by Proxmox VE. This setup is ideal for home labs, development, and testing.
✅ Prerequisites
- Proxmox VE Node: A working Proxmox VE host with hardware virtualization (VT-x/AMD-V) enabled in the BIOS/UEFI.
- Sufficient Resources:
- CPU: Minimum 8 vCPUs recommended for SNO.
- RAM: Minimum 32GB RAM recommended for SNO.
- Storage: Minimum 120GB-150GB fast storage (SSD/NVMe) for the OKD VM.
- Administrative Machine (Client): A Linux machine (e.g., Ubuntu, Fedora) or macOS to run openshift-install, oc, podman, and other client tools. This will be referred to as your “Admin Client Machine”.
- Network: A local network with a DHCP server and DNS resolution capabilities (e.g., a router like OpenWRT). The OKD VM will require a static IP address.
- Red Hat Pull Secret: Obtain a pull secret from Red Hat OpenShift Cluster Manager (a free Red Hat developer account is sufficient). This is needed for some certified operators and images.
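To verify the virtualization prerequisite, you can run a quick check on the Proxmox host itself. A minimal sketch, assuming an x86 host; a count greater than zero means VT-x/AMD-V is exposed:
# On the Proxmox VE host shell
egrep -c '(vmx|svm)' /proc/cpuinfo   # > 0 means hardware virtualization is available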
🔧 Phase 1: Preparation on Admin Client Machine
All commands in this phase are executed on your Admin Client Machine.
1. Set Environment Variables
Define the OKD version and architecture for consistency.
export OKD_VERSION=4.18.0-0.okd-scos.10 # Check for the latest stable SCOS release
export ARCH=x86_64
2. Download the OpenShift Client (oc)
curl -L "https://github.com/okd-project/okd/releases/download/${OKD_VERSION}/openshift-client-linux-${OKD_VERSION}.tar.gz" -o oc.tar.gz
tar zxf oc.tar.gz
chmod +x oc kubectl # kubectl is also included
sudo mv oc kubectl /usr/local/bin/ # Optional: Move to PATH for global access
oc version
3. Download the OpenShift Installer (openshift-install)
curl -L "https://github.com/okd-project/okd/releases/download/${OKD_VERSION}/openshift-install-linux-${OKD_VERSION}.tar.gz" -o openshift-install-linux.tar.gz
tar zxvf openshift-install-linux.tar.gz
chmod +x openshift-install
sudo mv openshift-install /usr/local/bin/ # Optional: Move to PATH
openshift-install version
4. Get Fedora CoreOS (FCOS) Live ISO
The installer will determine the correct CoreOS ISO URL matching the OKD version (for -scos releases such as the one above this is a CentOS Stream CoreOS image, but this guide keeps the fcos-live.iso filename throughout).
export ISO_URL=$(openshift-install coreos print-stream-json | grep location | grep "${ARCH}" | grep iso | cut -d\" -f4)
echo "Downloading FCOS ISO from: ${ISO_URL}"
curl -L "${ISO_URL}" -o fcos-live.iso
Note: This fcos-live.iso will be uploaded to Proxmox later.
5. Prepare install-config.yaml
Create a directory for your installation files and the install-config.yaml file.
mkdir okd-sno-install
cd okd-sno-install
Create install-config.yaml with the following content:
apiVersion: v1
baseDomain: okd.lan        # Your local base domain
metadata:
  name: okd4sno            # Your cluster name
compute:
- name: worker
  replicas: 0              # Essential for SNO
controlPlane:
  name: master
  replicas: 1              # Essential for SNO
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.7.0/24   # Your Proxmox VM network subnet
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                 # For bare-metal/VM installations not managed by a cloud provider
bootstrapInPlace:
  # IMPORTANT: Identify your Proxmox VM's target disk for installation.
  # This can be /dev/vda, /dev/sda, or a more stable WWN path.
  # Example for a VirtIO disk, often /dev/vda:
  installationDisk: /dev/vda
  # Example using a WWN (more robust; get it from the Proxmox VM's disk details or from the FCOS live environment):
  # installationDisk: /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2
pullSecret: '<PASTE_YOUR_PULL_SECRET_JSON_HERE>'   # Replace with your actual pull secret
sshKey: |                                          # Replace with your public SSH key
  ssh-rsa AAAA...your_public_ssh_key_here
Important Notes for install-config.yaml:
- baseDomain and metadata.name: These form your cluster’s FQDNs (e.g., api.okd4sno.okd.lan).
- machineNetwork.cidr: Ensure this matches the subnet your OKD VM will reside in.
- installationDisk:
  - For Proxmox VirtIO disks this is typically /dev/vda; for SCSI it might be /dev/sda.
  - Using /dev/disk/by-id/wwn-0x... is more robust if disk names might change. You can identify the correct WWN or device path by booting the FCOS Live ISO on the target VM and using commands like lsblk or ls /dev/disk/by-id/.
- pullSecret: Paste the entire JSON string from Red Hat.
- sshKey: Your public SSH key, used to access the core user on the FCOS node.
6. Generate Single Node Ignition Configuration
This command uses the install-config.yaml in the current directory.
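Tip: the installer consumes install-config.yaml when generating the Ignition config, so keep a backup copy if you may want to regenerate it later:
# Run this before the create command below
cp install-config.yaml install-config.yaml.bak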
# Still in okd-sno-install directory
openshift-install create single-node-ignition-config
This will create bootstrap-in-place-for-live-iso.ign in the current directory.
7. Embed Ignition into the FCOS Live ISO
We need coreos-installer for this, which can be run via Podman.
# Run this from a directory containing both fcos-live.iso and
# bootstrap-in-place-for-live-iso.ign (move fcos-live.iso into
# okd-sno-install, or adjust the paths accordingly).
podman run --privileged --pull always --rm \
  -v /dev:/dev -v /run/udev:/run/udev -v "$PWD":"$PWD" -w "$PWD" \
  quay.io/coreos/coreos-installer:release \
  iso ignition embed -fi bootstrap-in-place-for-live-iso.ign fcos-live.iso
This modifies fcos-live.iso in place to include the Ignition configuration.
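To sanity-check the embed step, coreos-installer can print the embedded Ignition back out. An optional check, assuming the same container image as above:
podman run --privileged --rm -v "$PWD":"$PWD" -w "$PWD" \
  quay.io/coreos/coreos-installer:release \
  iso ignition show fcos-live.iso | head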
⚙️ Phase 2: Proxmox VM Creation and FCOS Installation
1. Upload Modified ISO to Proxmox
Upload the modified fcos-live.iso (the one with the embedded Ignition config) to your Proxmox VE “ISO Images” storage (e.g., the local storage).
2. Create the OKD Virtual Machine in Proxmox
- General:
  - Name: okd4sno-vm (or similar)
  - Guest OS Type: Linux, Version: 6.x - 2.6 Kernel (or latest)
- System:
  - Machine: q35
  - BIOS: OVMF (UEFI)
  - EFI Storage: Select your Proxmox storage for the EFI disk.
  - Enable the QEMU Guest Agent.
- Disks:
  - Create a virtual hard disk (VirtIO Block or SCSI) with at least 120GB-150GB on fast storage. This is the disk you specified in installationDisk (e.g., /dev/vda).
- CPU:
  - Cores: 8 (or more)
  - Type: host (for best performance)
- Memory:
  - 32768 MiB (32GB) or more. Disable “Ballooning Device”.
- Network:
  - Model: VirtIO (paravirtualized)
  - Bridge: Your Proxmox bridge connected to your LAN (e.g., vmbr0).
- CD/DVD Drive:
  - Select the modified fcos-live.iso you uploaded.
- Boot Order:
  - Set the CD/DVD drive (with the FCOS ISO) as the first boot device.
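Alternatively, the same VM can be created from the Proxmox host shell with qm. A minimal sketch; the VM ID (200) and the storage names (local-lvm for disks, local for the ISO) are assumptions to adapt to your environment:
# On the Proxmox VE host shell
qm create 200 \
  --name okd4sno-vm --ostype l26 \
  --machine q35 --bios ovmf --efidisk0 local-lvm:1 \
  --cores 8 --cpu host \
  --memory 32768 --balloon 0 \
  --net0 virtio,bridge=vmbr0 \
  --virtio0 local-lvm:150 \
  --ide2 local:iso/fcos-live.iso,media=cdrom \
  --boot 'order=ide2;virtio0' \
  --agent enabled=1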
3. Install Fedora CoreOS
- Start the VM. It will boot from the modified FCOS Live ISO.
- Because the Ignition config is embedded and bootstrapInPlace.installationDisk is set, FCOS should automatically install itself to the specified disk (/dev/vda) and then reboot.
- Monitor the installation via the Proxmox VM console. You should see coreos-installer running.
- Once the VM reboots for the first time after coreos-installer finishes, it boots from the hard disk and starts the OKD bootstrapping process.
- Important: After the first successful boot from the hard disk, you can shut down the VM and remove the ISO from the CD/DVD drive, or change the boot order to prioritize the hard disk.
🌐 Phase 3: DNS and Cluster Access
1. DNS Setup (Example: OpenWRT)
The OKD VM needs a static IP address. The install-config.yaml does not set this; FCOS will initially use DHCP unless your Ignition config (not covered in this simplified version) sets a static IP.
It’s recommended to configure a DHCP reservation on your router for the MAC address of the OKD VM, or to configure a static IP on the FCOS node itself (more advanced, requires modifying the Ignition to write NetworkManager connection files).
For this guide, let’s assume the OKD VM gets the IP 192.168.7.126 (via DHCP reservation, or a manually configured static IP if you adapted the Ignition).
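If you prefer a static IP configured on the node itself after installation, a minimal nmcli sketch run on the installed node; the connection name “Wired connection 1” and the gateway/DNS address 192.168.7.1 are assumptions:
# On the OKD VM node, as the core user
nmcli connection show                            # Find the active connection name
sudo nmcli connection modify "Wired connection 1" \
  ipv4.method manual ipv4.addresses 192.168.7.126/24 \
  ipv4.gateway 192.168.7.1 ipv4.dns 192.168.7.1
sudo nmcli connection up "Wired connection 1"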
On your DNS server (e.g., OpenWRT router, Pi-hole):
Add static host entries pointing to the OKD VM’s IP (e.g., 192.168.7.126):
- api.okd4sno.okd.lan → 192.168.7.126
- console-openshift-console.apps.okd4sno.okd.lan → 192.168.7.126
- oauth-openshift.apps.okd4sno.okd.lan → 192.168.7.126
- *.apps.okd4sno.okd.lan → 192.168.7.126 (if your DNS supports wildcard A records; if not, as in the OpenWRT UI, add entries for specific app routes as you create them, or use a dnsmasq custom config for wildcards, as sketched below).
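A minimal dnsmasq sketch for the wildcard record, assuming an OpenWRT router where custom options can be appended to /etc/dnsmasq.conf:
# On the OpenWRT router: resolve every *.apps hostname to the OKD VM
echo 'address=/apps.okd4sno.okd.lan/192.168.7.126' >> /etc/dnsmasq.conf
/etc/init.d/dnsmasq restart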
Alternative for Admin Client Machine (if no central DNS):
Modify /etc/hosts on your Admin Client Machine (note that /etc/hosts does not support wildcards, so add further *.apps hostnames as you expose new routes):
192.168.7.126 api.okd4sno.okd.lan console-openshift-console.apps.okd4sno.okd.lan oauth-openshift.apps.okd4sno.okd.lan
2. Monitor Installation from Admin Client Machine
Once the OKD VM has booted from its hard disk and the FCOS installation + Ignition processing is complete, the OKD bootstrap process will start.
# On your Admin Client Machine, in the okd-sno-install directory
openshift-install wait-for bootstrap-complete --log-level=info
# This can take 20-40 minutes.
# Once bootstrap is complete:
openshift-install wait-for install-complete --log-level=info
# This can take another 30-60+ minutes.
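While waiting, you can already point oc at the cluster using the kubeconfig the installer generated and watch the node and cluster operators come up (intermittent errors are normal until the installation settles):
# On the Admin Client Machine, in the okd-sno-install directory
export KUBECONFIG="${PWD}/auth/kubeconfig"
oc get nodes
oc get clusteroperators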
3. Web Console Access
After install-complete finishes:
- Navigate to: https://console-openshift-console.apps.okd4sno.okd.lan
- Login using:
  - Username: kubeadmin
  - Password: Found in okd-sno-install/auth/kubeadmin-password on your Admin Client Machine.
👤 Phase 4: Post-Installation - User and Storage (Admin Client Machine)
All oc commands are run from your Admin Client Machine, targeting your new OKD cluster.
Ensure your KUBECONFIG is set:
export KUBECONFIG="${PWD}/okd-sno-install/auth/kubeconfig"
oc whoami # Should show kube:admin
1. Create a Persistent Admin User
It’s not recommended to use kubeadmin for daily operations.
Create an htpasswd file for a new user (e.g., andrea). The htpasswd utility is provided by httpd-tools (Fedora/RHEL) or apache2-utils (Debian/Ubuntu):
htpasswd -c -B -b users.htpasswd andrea YOUR_CHOSEN_STRONG_PASSWORD
oc create secret generic htpasswd-secret --from-file=htpasswd=users.htpasswd -n openshift-config
rm users.htpasswd # Clean up
Create oauth.yaml with the following content:
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: local_htpasswd      # Give it a unique name
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret
Apply the OAuth configuration:
oc apply -f oauth.yaml
Grant cluster-admin rights to your new user:
oc adm policy add-cluster-role-to-user cluster-admin andrea
You can now log out from kubeadmin and log in as andrea (selecting the “local_htpasswd” provider). The authentication operator takes a few minutes to roll out the change before the new provider appears on the login page.
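Once the provider is active, you can also verify the new user from the CLI (the API URL follows the baseDomain and cluster name used above; oc may prompt you to accept the cluster’s self-signed API certificate):
oc login https://api.okd4sno.okd.lan:6443 -u andrea
oc whoami   # Should print: andrea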
2. Configure HostPath StorageClass for Persistent Storage (Simple Lab Setup)
This creates a StorageClass backed by a manually provisioned hostPath PersistentVolume (no dynamic provisioning), which is suitable for SNO labs.
On the OKD VM Node (via SSH as the core user):
Create the directory that will be used by the PersistentVolume.
# Connect to the OKD VM: ssh core@192.168.7.126
sudo mkdir -p /mnt/data/pv01
sudo chmod 777 /mnt/data/pv01 # Or more restrictive permissions depending on use case
exit
On your Admin Client Machine:
Create pv-hostpath.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath-10gi                 # Unique name for the PV
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce                        # Adjust as needed (RWO, ROX, RWX - RWO is typical for hostPath)
  hostPath:
    path: "/mnt/data/pv01"               # Path created on the OKD VM node
  persistentVolumeReclaimPolicy: Retain  # Or Delete/Recycle
  storageClassName: hostpath-sc          # Name of the StorageClass this PV belongs to
Create sc-hostpath.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-sc                         # Must match storageClassName in the PV
  # For SNO, consider making this the default StorageClass:
  # annotations:
  #   storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner   # Indicates manual PV provisioning
volumeBindingMode: WaitForFirstConsumer
Apply the configurations:
oc apply -f pv-hostpath.yaml
oc apply -f sc-hostpath.yaml
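To confirm the setup, you can create a throwaway PVC against the new StorageClass. A sketch using the default namespace; with WaitForFirstConsumer the claim stays Pending until a pod actually mounts it:
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: hostpath-sc
EOF
oc get pvc test-pvc -n default      # Pending until a pod consumes it, then Bound to pv-hostpath-10gi
oc delete pvc test-pvc -n default   # Clean up when done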
🐳 Phase 5: Setup Local Image Registry (Optional)
This sets up a Podman-based Docker/OCI registry running directly on the OKD VM node.
1. Run Registry Container on OKD VM Node
On the OKD VM Node (via SSH as the core user):
sudo mkdir -p /opt/registry/data
# Run the registry with sudo (the data directory is root-owned) on the host
# network, making it accessible on port 5000
sudo podman run -d --name registry --restart=always \
  --network host \
  -v /opt/registry/data:/var/lib/registry:z \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
  -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry \
  docker.io/library/registry:2 # Official registry image from Docker Hub
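To make the registry survive node reboots, you can hand the container over to systemd with a generated unit. A sketch; the generated file name container-registry.service follows Podman’s default naming:
# On the OKD VM node
sudo podman generate systemd --name registry --new --files
sudo mv container-registry.service /etc/systemd/system/
sudo podman stop registry && sudo podman rm registry   # Let systemd manage the container from now on
sudo systemctl daemon-reload
sudo systemctl enable --now container-registry.service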
2. Configure OKD to Trust the Insecure Local Registry
On your Admin Client Machine:
oc patch image.config.openshift.io/cluster --type=merge \
-p '{"spec":{"registrySources":{"insecureRegistries":["192.168.7.126:5000"]}}}'
Note: This will trigger an update of the machine config on the node, which may take a few minutes. Monitor progress with oc get mcp (Machine Config Pool).
3. Configure Podman/Docker on Client Machines (Optional, for pushing)
If you want to push images to this registry from your Admin Client Machine or other development machines, you need to configure their local container tools to trust this insecure registry.
Create or edit /etc/containers/registries.conf (for Podman) or /etc/docker/daemon.json (for Docker) on those client machines.
For Podman on your Admin Client Machine, add the following to /etc/containers/registries.conf (needs sudo):
[[registry]]
location = "192.168.7.126:5000"
insecure = true
Restart Podman/Docker service if necessary.
Test access to the registry from your Admin Client Machine:
curl http://192.168.7.126:5000/v2/_catalog
# Expected output: {"repositories":[]} (if empty)
🚀 Phase 6: Deploy a Custom Go Application (Example)
This demonstrates building a Go application, pushing it to the local registry, and deploying it on OKD.
1. Application Code and Dockerfile
On your Admin Client Machine (or development machine):
Create a simple Go main.go:
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	hostname, _ := os.Hostname()
	fmt.Fprintf(w, "Hello from Go! Running on host: %s\n", hostname)
}

func main() {
	http.HandleFunc("/", handler)
	log.Println("Go server starting on port 8080...")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Create go.mod (in the same directory):
module go-example-rest

go 1.21 // Or your Go version
Create a Dockerfile (multi-stage):
# Builder Stage
# Use a recent Go version
FROM golang:1.22-alpine AS builder
WORKDIR /app
# Copy go.mod (and go.sum, if your module has external dependencies)
COPY go.mod ./
RUN go mod download
COPY . .
# RUN apk add --no-cache git   # Only if your module download needs git
ENV CGO_ENABLED=0
ENV GOOS=linux
ENV GOARCH=amd64
RUN go build -v -o server main.go

# Final Stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates && adduser -D -u 1001 appuser
WORKDIR /home/appuser/
COPY --from=builder /app/server ./server
# go build already produces an executable Linux binary, so no chmod is needed
USER appuser
EXPOSE 8080
ENTRYPOINT ["./server"]
2. Build and Push Image to Local Registry
On your Admin Client Machine (or wherever you have Podman/Docker and the source code):
podman build -t 192.168.7.126:5000/go-example-rest:latest .
podman push --tls-verify=false 192.168.7.126:5000/go-example-rest:latest
3. Deploy Application on OKD
On your Admin Client Machine:
oc new-project go-example
# Import the image into OKD's internal registry from your local registry
oc import-image go-example-rest --from="192.168.7.126:5000/go-example-rest:latest" --confirm -n go-example
# Deploy the application using the imported image stream
oc new-app --image-stream=go-example-rest:latest -n go-example
# Or more explicitly: oc new-app -i go-example/go-example-rest:latest
# Expose the service to create a route
oc expose service/go-example-rest -n go-example
4. Access Your Application
Get the route URL:
oc get route go-example-rest -n go-example -o jsonpath='{.spec.host}'
# Example output: go-example-rest-go-example.apps.okd4sno.okd.lan
Open http://<route_url> in your browser. You should see “Hello from Go!…”
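The route can also be checked from the command line (assuming the wildcard *.apps record, or a matching /etc/hosts entry, resolves from your client):
curl "http://$(oc get route go-example-rest -n go-example -o jsonpath='{.spec.host}')/"
# Expected output: Hello from Go! Running on host: <pod name>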