Getting Started without Turing Pi#
Tip
For an interactive experience, try the AI-Guided Setup instead — Claude Code will walk you through these steps, configure files, generate secrets, and run the playbooks for you.
This tutorial walks you through deploying a K3s cluster on any set of Linux servers that are already running Ubuntu 24.04 LTS — no Turing Pi hardware required.
All Ansible roles except the BMC flashing step work identically on standalone servers, Intel NUCs, Raspberry Pis, VMs, or cloud instances.
Prerequisites#
Target Servers#
One or more Linux servers running Ubuntu 24.04 LTS
SSH access from your workstation to each server
All servers on the same subnet (or with routable network connectivity)
One server designated as the control plane; any extras are workers
Tip
A single server works fine — K3s runs as both control plane and worker. The
control-plane NoSchedule taint is automatically skipped when there is only
one node, so all workloads schedule on it.
Software (on your workstation)#
Linux workstation (or WSL2 on Windows)
podman 4.3 or later (rootless container runtime)
VS Code with the Dev Containers extension
git
Note
Set the VS Code setting dev.containers.dockerPath to podman before proceeding.
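The setting lives in your VS Code user settings.json (reachable via Ctrl+Shift+P → Preferences: Open User Settings (JSON)); a minimal fragment, assuming no other settings are present:

```json
{
  // Point the Dev Containers extension at podman instead of docker
  "dev.containers.dockerPath": "podman"
}
```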
Step 1: Fork, clone, and generate SSH key#
Fork and clone the repository#
Fork the repository on GitHub: visit gilesknap/tpi-k3s-ansible and click Fork.
Clone your fork:
git clone https://github.com/<your-username>/tpi-k3s-ansible.git
cd tpi-k3s-ansible
Note
You need your own fork because ArgoCD tracks your repository for GitOps. Changes you push to your fork are automatically deployed to your cluster.
The repo contains SealedSecret files encrypted for the original cluster — these won’t work on yours and can safely be ignored until you create your own in the Set Up DNS, TLS & Cloudflare Tunnel guide.
Generate an SSH keypair#
Create a dedicated keypair for Ansible to use when connecting to all nodes:
# Run this on your HOST machine (outside the devcontainer)
ssh-keygen -t rsa -b 4096 -C "ansible master key" -f $HOME/.ssh/ansible_rsa
Use a strong passphrase. Then copy the public key into the repo:
cp $HOME/.ssh/ansible_rsa.pub pub_keys/ansible_rsa.pub
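To sanity-check a key before committing its public half, you can print its bit length, fingerprint, and comment with ssh-keygen -lf. The sketch below generates a throwaway key in a temp directory so it is safe to copy-paste; in practice, run the last command against pub_keys/ansible_rsa.pub instead:

```shell
# Generate a throwaway 4096-bit RSA key (no passphrase, for illustration only)
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -N '' -C "ansible master key" -f "$tmp/ansible_rsa" -q
# Print bit length, fingerprint, and comment, e.g.
# 4096 SHA256:... ansible master key (RSA)
ssh-keygen -lf "$tmp/ansible_rsa.pub"
rm -rf "$tmp"
```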
Step 2: Open the devcontainer#
Open the repository in VS Code:
code .
When prompted, select “Reopen in Container” (or use
Ctrl+Shift+P → Dev Containers: Reopen in Container).
The devcontainer provides Ansible (and its Python dependencies) out of the box.
Cluster tools (kubectl, helm, kubeseal) are installed later by the tools role when
you run the playbook. No additional installation is needed on your workstation.
Step 3: Bootstrap Ansible access on your servers#
The pb_add_nodes.yml playbook creates an ansible user on each server with SSH key
authentication and passwordless sudo. You need an existing user account with SSH + sudo
access to run this initial bootstrap.
First, add your servers to hosts.yml under the extra_nodes group:
extra_nodes:
  hosts:
    server1:   # Hostname or IP of your control plane
    server2:   # Worker node
    server3:   # Worker node
  vars:
    ansible_user: "{{ ansible_account }}"

all_nodes:
  children:
    extra_nodes:   # Only extra_nodes — no turingpi_nodes needed
Note
You can remove or comment out the turing_pis and turingpi_nodes groups entirely
if you have no Turing Pi hardware.
Then run the bootstrap playbook:
ansible-playbook pb_add_nodes.yml
You will be prompted for:
Username — an existing SSH user on all servers
Password — that user’s password (used for initial sudo)
The playbook creates the ansible user with your SSH key and passwordless sudo on
every server in extra_nodes.
Step 4: Verify SSH access#
Confirm Ansible can reach all nodes with the new ansible user:
ansible all_nodes -m ping
Expected output:
server1 | SUCCESS => { "ping": "pong" }
server2 | SUCCESS => { "ping": "pong" }
server3 | SUCCESS => { "ping": "pong" }
Step 5: Configure the cluster#
Edit group_vars/all.yml — the primary Ansible configuration:
# Change these to match your environment
control_plane: server1 # Which node is the K3s control plane
cluster_domain: example.com # Your domain name
domain_email: you@example.com # For Let's Encrypt certificates
repo_remote: https://github.com/<your-username>/tpi-k3s-ansible.git
repo_branch: main # Git branch for ArgoCD to track
Then edit kubernetes-services/values.yaml — the ArgoCD runtime configuration:
repo_branch: main # Must match the value in all.yml
# OAuth2 authentication gateway — leave false for initial setup.
# Enable after completing the OAuth guide (docs/how-to/oauth-setup).
enable_oauth2_proxy: false
# OAuth2 email allowlist — GitHub-linked emails allowed to access
# protected services. Remove the defaults and add your own:
oauth2_emails:
- you@example.com
# NFS configuration (optional — only needed for LLM features)
rkllama:
  nfs:
    server: 192.168.1.3           # Your NFS server IP
    path: /path/to/rkllm/models   # NFS export path for rkllm models

llamacpp:
  nfs:
    server: 192.168.1.3
    path: /path/to/gguf/models
  model:
    file: "your-model.gguf"
Tip
If you do not have an NFS server or do not plan to use the LLM features (rkllama, llamacpp), you can leave the NFS settings as-is. The services will deploy but remain idle until configured.
Note
enable_oauth2_proxy controls whether cluster services require GitHub login.
Leave it false until you have completed the Set Up OAuth Authentication guide —
otherwise services like Grafana and Longhorn will return errors because the
OAuth proxy is not yet deployed.
Ensure control_plane matches one of the hostnames in your extra_nodes group.
Step 6: Run the playbook (skip flash)#
Since your servers already have an OS installed, skip the flash tag and run:
ansible-playbook pb_all.yml --tags known_hosts,servers,k3s,cluster
This runs only the relevant stages:
known_hosts — updates SSH known_hosts for all nodes
servers — dist-upgrade, installs dependencies (open-iscsi, etc.)
k3s — installs K3s control plane and worker nodes
cluster — installs ArgoCD and deploys all cluster services
Note
The tools tag installs helm, kubectl, and kubeseal into the devcontainer (localhost).
It is not run automatically by the devcontainer build — you must run it manually
(or include it in the playbook run, as shown above):
ansible-playbook pb_all.yml --tags tools
The installed binaries are placed in /root/bin, which is backed by the iac2-bin
container volume, so they persist across container rebuilds.
What about move_fs?#
The move_fs role (OS migration to NVMe) runs as part of the servers tag. It only
activates if a node has root_dev defined in the inventory. If your servers are already
running from their desired disk, simply omit root_dev from the inventory and the
role does nothing.
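Conversely, if you do want move_fs to migrate a node onto another disk, it is enabled per-host via root_dev in the inventory; a hypothetical fragment (the device path is an assumption, substitute your actual target disk):

```yaml
extra_nodes:
  hosts:
    server1:
      root_dev: /dev/nvme0n1   # hypothetical device; defining root_dev activates move_fs
```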
Step 7: Verify the cluster#
After the playbook completes:
kubectl get nodes
Expected output shows all nodes in Ready state:
NAME      STATUS   ROLES                       AGE   VERSION
server1   Ready    control-plane,etcd,master   5m    v1.31.x+k3s1
server2   Ready    <none>                      4m    v1.31.x+k3s1
server3   Ready    <none>                      4m    v1.31.x+k3s1
Check ArgoCD applications:
kubectl get applications -n argo-cd
All applications should eventually reach Synced and Healthy status (add -w to the command above to watch progress).
Next Steps#
Bootstrap the Cluster — set up admin passwords and access cluster services
Set Up DNS, TLS & Cloudflare Tunnel — expose services to the internet via Cloudflare
Set Up OAuth Authentication — secure services with GitHub OAuth (enable enable_oauth2_proxy after setup)
Manage Sealed Secrets — manage encrypted secrets in the repository
Architecture — understand how all the pieces fit together