In “Deploy Your Own Kubernetes Lab”
I covered multiple deployment options for a Kubernetes lab,
ranging from more lightweight options (like running Kubernetes locally)
to more realistic ones suitable for security research (like deploying a multi-node cluster).
In this blog post, I’m going to detail the steps I took to deploy my own
Kubernetes Lab on baremetal, and on an Intel NUC in particular.
I was looking for a self-contained option,
which - most importantly - didn’t take up much space,
so I ended up settling
on an Intel NUC,
starting with 250GB of storage and 32GB of RAM.
It might be worth noting that, for the initial setup phase, it is also useful to have
a small keyboard and a monitor (a 7-inch one is just fine).
At a high level, my home network diagram looks like the one below:
As the title of this post implies, the aim was to have a Kubernetes cluster
running directly on baremetal, hence deciding which operating system to rely on
was almost straightforward:
Fedora CoreOS (FCOS) is a minimal operating system specifically designed for running containerized workloads securely and at scale.
Let’s see how to get it running on the Intel NUC.
Prepare a Bootable USB
The first step in the installation process involves burning a Fedora CoreOS ISO onto a bootable USB stick.
The latest stable version of the ISO for baremetal installations can be found
directly on the Fedora website
(33.20210301.3.1 at the time of writing).
From there, it is simply a matter of burning the ISO, which, on macOS, can be
done using tools like Etcher. Once launched, select the
CoreOS ISO and the USB device to use, and Etcher will take care of creating
a bootable USB from it.
Prepare an Ignition Config
For those new to FCOS (me included before creating this lab), it might be worth
explaining what an Ignition file actually is.
An Ignition file specifies the configuration for provisioning FCOS instances:
the process begins with a YAML configuration file, which gets
translated by the FCOS Configuration Transpiler (fcct) into a machine-friendly JSON,
which is the final configuration file for Ignition.
FCOS ingests the Ignition file only on first boot,
applying the whole configuration or failing to boot in case of errors.
The Fedora documentation
proved to be excellent in detailing how to create a
basic Ignition file that modifies the default FCOS user (named core)
to allow logins with an SSH key.
First, on your workstation create a file (named config.fcc) with the following content,
and make sure to replace the line starting with ssh-rsa with the contents of your SSH public key file:
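A minimal sketch of what config.fcc can look like (the spec version may differ depending on your fcct release, and the ssh-rsa line is a placeholder):

    variant: fcos
    version: 1.3.0
    passwd:
      users:
        - name: core
          groups:
            - docker
            - wheel
            - sudo
          ssh_authorized_keys:
            - ssh-rsa AAAA...   # replace with the contents of your SSH public key file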
In the config above, we are basically telling FCOS to add the default user
named core to three additional groups (docker, wheel, and sudo),
as well as to allow key-based authentication with the public SSH key specified
in the ssh_authorized_keys section.
The public key will be provisioned to the FCOS machine via Ignition,
whereas the private counterpart needs to be available to your user on your local workstation,
in order to remotely authenticate over SSH.
Next, we need to use fcct, the Fedora CoreOS Config Transpiler,
to produce a JSON Ignition file from a YAML FCC file.
An easy way to use fcct is to run it in a container:
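For example (assuming Docker is available on the workstation; podman works the same way):

    docker run -i --rm quay.io/coreos/fcct:release --pretty --strict < config.fcc > config.ign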
Since this config.ign will be needed to boot FCOS,
we need to make it temporarily available for devices on the local network.
There are multiple ways to accomplish this: I opted to quickly spin up
updog (a replacement for Python's SimpleHTTPServer):
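For example (a sketch, assuming Python and pip are available; updog serves the current directory on port 9090 by default):

    pip3 install updog
    updog    # run from the directory containing config.ign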
Install from Live USB
With the Ignition config ready,
plug the USB stick in the Intel NUC, turn it on,
and make sure to select that media as preferred boot option.
If the ISO has been burnt correctly, you should end up in a shell as
the core user.
The actual installation can be accomplished in a quite straightforward way:
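A sketch of the command (the target disk, /dev/nvme0n1 here, and the port depend on your hardware and on how config.ign is being served):

    sudo coreos-installer install /dev/nvme0n1 \
        --ignition-url http://192.168.1.150:9090/config.ign \
        --insecure-ignition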
The command above instructs coreos-installer to use the Ignition config
we are making available on the local network from our workstation (192.168.1.150 in my case).
The --insecure-ignition flag is needed if the Ignition file
is served over plaintext HTTP rather than TLS.
After a reboot of the Intel NUC, you should be able to SSH into it from your workstation:
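For example (192.168.1.151 being the address the NUC got on my network):

    ssh core@192.168.1.151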
And that’s it! FCOS is now up and running.
Next step is installing Kubernetes on it.
The installation process for Kubernetes is a bit more lengthy,
and can be broken up into a few sections:
installation of dependencies, installation of the cluster, and network setup.
While looking around (i.e., Googling) for the most effective way to deploy
a vanilla Kubernetes on FCOS, I came across a really detailed article from
Matthias Preu (Fedora CoreOS - Basic Kubernetes Setup) describing exactly this process.
Note that the remainder of this sub-section has been based heavily on Matthias’ setup,
and you should refer to his blog post for a detailed explanation of each installation step.
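To give a flavour of what's involved, here is a heavily condensed, illustrative sketch (package names and flags are simplified, and the Kubernetes RPM repository needs to be configured beforehand; Matthias' post covers the full, authoritative sequence):

    # Layer the container runtime and the Kubernetes tooling on top of FCOS
    sudo rpm-ostree install cri-o kubelet kubeadm kubectl
    sudo systemctl reboot

    # After the reboot, start the runtime and the kubelet
    sudo systemctl enable --now crio kubelet

    # Bootstrap a single-node control plane (the pod CIDR must match the CNI plugin used)
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Allow regular workloads to be scheduled on the control-plane node
    kubectl taint nodes --all node-role.kubernetes.io/master-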
Unlike clusters running in the cloud, where network load balancers are available on-demand
and can be configured simply via Kubernetes manifests,
baremetal clusters require a slightly different setup to offer the same kind of access
to external clients.
Install NGINX Controller
First of all, let’s deploy the NGINX Ingress Controller:
After a few seconds, we should be able to see that the Ingress Controller
pods have started in the ingress-nginx namespace:
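A quick way to check:

    kubectl get pods -n ingress-nginx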
Install MetalLB
As per MetalLB’s documentation,
MetalLB provides a network load-balancer implementation
for Kubernetes clusters that do not run on a supported cloud provider,
effectively allowing the usage of LoadBalancer Services within any cluster.
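Installation, at the time of writing, boils down to applying the upstream manifests and creating the memberlist secret (the version pinned below is illustrative; refer to MetalLB's installation docs for the current one):

    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
    # On first install only, create the secret used to encrypt speaker communication
    kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"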
This will deploy MetalLB to the cluster, under the metallb-system namespace.
The main components are:
metallb-system/controller (deployment): the cluster-wide controller that handles IP address assignments.
metallb-system/speaker (daemonset): the component that speaks the protocol(s) to make the services reachable.
memberlist (secret): contains the secretkey used to encrypt the communication between speakers for fast dead node detection.
Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.
After a few seconds, we can verify the status of the installation:
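For example:

    kubectl get pods -n metallb-system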
Although running, MetalLB’s components will remain idle until they are provided
with a ConfigMap.
In this regard, MetalLB requires a dedicated pool of IP addresses in order
to be able to take ownership of the ingress-nginx Service.
Bear in mind that this pool of IPs must be dedicated to MetalLB’s use:
Kubernetes node IPs or IPs handed out by a DHCP server cannot be reused for this purpose.
After creating such a ConfigMap
(for my setup I chose 192.168.1.160-192.168.1.190 as reserved addresses),
MetalLB will take ownership
of the IP addresses in the pool and will update the External IP field of
each Service of type LoadBalancer.
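A minimal sketch of such a ConfigMap, using layer 2 mode and the reserved range mentioned above:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.160-192.168.1.190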
Finally, the last component we need is the
HAProxy Ingress Controller,
which can be used to route traffic from outside the cluster to services within the cluster.
As per the documentation,
we first need to add the HAProxy Ingress’ Helm repository:
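For example (the repository URL is the one documented by the project):

    helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
    helm repo update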
Next, we need to create a haproxy-ingress-values.yaml file with custom parameters
and use it during the installation with Helm:
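As a sketch (the values and the namespace below are assumptions; the important bit for this setup is that the controller Service is of type LoadBalancer, so that MetalLB can assign it an address):

    controller:
      service:
        type: LoadBalancer

The chart can then be installed referencing that file:

    helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
        --create-namespace --namespace ingress-controller \
        -f haproxy-ingress-values.yaml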
To verify the successful installation of HAProxy:
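For instance (assuming the chart was installed in the ingress-controller namespace, as in the sketch above):

    kubectl get services -n ingress-controller haproxy-ingress -o wide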
As can be seen in the output above,
MetalLB updated the External IP of the haproxy-ingress Service
(which is of type LoadBalancer), and assigned it to one of the reserved IP
addresses (192.168.1.160 in this case).
If you followed along, you should have the following pods
currently running in your cluster:
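A quick way to list them (the exact set of pods will depend on the versions installed):

    kubectl get pods --all-namespaces

To test the end-to-end setup, we can then deploy a sample application (the product page from Istio's Bookinfo examples) and expose it through an Ingress: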
    ➜ cat sample-deployment.yaml
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: test
    ---
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: bookinfo-ingress
      namespace: test
      annotations:
        kubernetes.io/ingress.class: haproxy
    spec:
      rules:
      - host: product.192.168.1.151.nip.io   # IP of the NUC
        http:
          paths:
          - path: /
            backend:
              serviceName: productpage
              servicePort: 9080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: productpage
      namespace: test
      labels:
        app: productpage
        service: productpage
    spec:
      type: LoadBalancer
      ports:
      - name: http
        port: 80
        targetPort: 9080
      selector:
        app: productpage
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: bookinfo-productpage
      namespace: test
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: productpage-v1
      namespace: test
      labels:
        app: productpage
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: productpage
          version: v1
      template:
        metadata:
          labels:
            app: productpage
            version: v1
        spec:
          serviceAccountName: bookinfo-productpage
          containers:
          - name: productpage
            image: docker.io/istio/examples-bookinfo-productpage-v1:1.15.0
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9080
Note how we had to specify the IP address of the Intel NUC
as part of the Ingress’ host (product.192.168.1.151.nip.io).
Let’s apply this manifest:
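Assuming it was saved as sample-deployment.yaml, as shown above:

    kubectl apply -f sample-deployment.yaml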
We can see how http://product.192.168.1.151.nip.io is getting exposed via the
bookinfo-ingress and will be reachable from clients within the local network:
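As a quick smoke test (the productpage application serves its UI under /productpage):

    curl -s -o /dev/null -w "%{http_code}\n" http://product.192.168.1.151.nip.io/productpage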
Volumes and Stateful Deployments
The last thing I wanted to try was the cluster’s compatibility with volumes
and stateful deployments.
Luckily, it turned out that the standard setup worked out of the box:
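A sketch of such a manifest, closely modeled on the hostPath example from the Kubernetes documentation (the volume, claim, and pod names follow that example):

    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"        # directory on the Intel NUC
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pv-claim
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage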
The first block creates a hostPath PersistentVolume which uses a directory on the Node (the Intel NUC) to emulate network-attached storage.
The second creates a PersistentVolumeClaim, used by pods to request physical storage.
The third creates a sample Pod which attaches the task-pv-claim PVC.
Apply the manifest and, after a few moments, the Volume will show as Bound:
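For example (assuming the manifest above was saved as pv-test.yaml, a hypothetical filename):

    kubectl apply -f pv-test.yaml
    kubectl get pv task-pv-volume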
From here we can quickly test the setup by creating a text file under the
/mnt/data directory of the Intel NUC, and then trying to access it from the test pod:
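Along these lines (the mount path inside the pod follows the sketch above):

    # On the Intel NUC
    echo 'Hello from the NUC' | sudo tee /mnt/data/index.html

    # From the workstation, through the test pod
    kubectl exec -it task-pv-pod -- cat /usr/share/nginx/html/index.html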
Automate the Setup
The setup described in this post has been automated as part of
a modular Kubernetes Lab which provides an easy and streamlined way
(managed via please.build) to deploy a test cluster with support for different components.
In particular, you can refer to the Baremetal Setup page of the documentation for specific instructions.
I hope you found this post useful and interesting, and I’m keen to get feedback on it! If you found the information shared useful, if something is missing, or if you have ideas on how to improve it, please let me know on Twitter.