Getting Started with Turing Pi#
Tip
For an interactive experience, try the AI-Guided Setup instead — Claude Code will walk you through these steps, configure files, generate secrets, and run the playbooks for you.
This tutorial walks you through setting up a K3s cluster on one or more Turing Pi v2.5 boards, from initial hardware setup to a fully deployed cluster with ArgoCD managing all services.
Prerequisites#
Hardware#
One or more Turing Pi v2.5 boards
Compute modules in each slot: RK1 (8GB+) or CM4 (4GB+)
Network cable connecting each Turing Pi to your router
SD card (≥8GB, ext4) inserted in each Turing Pi’s BMC SD slot
Optional: NVMe drives in the M.2 slots for OS migration
Software (on your workstation)#
Linux workstation (or WSL2 on Windows)
podman 4.3 or later (rootless container runtime)
VS Code with the Dev Containers extension
git
Note
Set the VS Code setting dev.containers.dockerPath to podman before proceeding.
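In `settings.json` form, this is a minimal fragment (the setting name is from the Dev Containers extension):

```json
{
  "dev.containers.dockerPath": "podman"
}
```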
Networking#
Your workstation and Turing Pi boards must be on the same subnet with these network features:
DHCP enabled (router assigns IPs automatically)
mDNS / zero-configuration networking enabled (so you can reach turingpi.local)
Note
mDNS is enabled by default on macOS, most Linux desktop distributions (via
Avahi), and Windows 10+. If turingpi.local doesn’t resolve, check that
the avahi-daemon service is running on Linux (sudo systemctl start avahi-daemon)
or install it with sudo apt install avahi-daemon. On Windows, mDNS support
is built into the OS — no extra setup needed.
Verify your BMC is reachable:
ping turingpi.local
# Also try: turingpi, turingpi.lan, turingpi.broadband
Tip
After the first boot, assign fixed DHCP leases (by MAC address) to each node in your router’s DHCP settings. This prevents IP changes on reboot.
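To find each node's MAC address for a fixed lease, one option (assuming the boards are already up on your subnet) is to populate and inspect your workstation's neighbour table:

```shell
# Contact the BMC so it appears in the neighbour table, then list
# known IP-to-MAC mappings. (Linux `ip` tool; on other systems,
# `arp -a` gives similar output.)
ping -c 1 turingpi.local >/dev/null 2>&1 || true
ip neigh show
```

Your router's admin page usually shows the same MAC addresses next to each DHCP lease, which is often the easier place to read them off.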
Step 1: Fork, clone, and generate SSH key#
Fork and clone the repository#
Fork the repository on GitHub: visit gilesknap/tpi-k3s-ansible and click Fork.
Clone your fork:
git clone https://github.com/<your-username>/tpi-k3s-ansible.git
cd tpi-k3s-ansible
Note
You need your own fork because ArgoCD tracks your repository for GitOps. Changes you push to your fork are automatically deployed to your cluster.
The repo contains SealedSecret files encrypted for the original cluster — these won’t work on yours and can safely be ignored until you create your own in the Set Up DNS, TLS & Cloudflare Tunnel guide.
Generate an SSH keypair#
Create a dedicated keypair for Ansible to use when connecting to all nodes:
# Run this on your HOST machine (outside the devcontainer)
ssh-keygen -t rsa -b 4096 -C "ansible master key" -f $HOME/.ssh/ansible_rsa
Use a strong passphrase. Then copy the public key into the repo:
cp $HOME/.ssh/ansible_rsa.pub pub_keys/ansible_rsa.pub
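Because the key has a passphrase, you may want to load it into ssh-agent so later Ansible runs don't prompt repeatedly (standard OpenSSH tooling, not repo-specific):

```shell
# Start an agent for this shell session (skip if one is already running),
# then add the key; you'll be prompted once for the passphrase.
eval "$(ssh-agent -s)"
ssh-add "$HOME/.ssh/ansible_rsa"

# Confirm the key is loaded:
ssh-add -l
```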
Step 2: Open the devcontainer#
Open the repository in VS Code:
code .
When prompted, select “Reopen in Container” (or use
Ctrl+Shift+P → Dev Containers: Reopen in Container).
The devcontainer provides Ansible (and its Python dependencies) out of the box.
Cluster tools (kubectl, helm, kubeseal) are installed later by the tools role when
you run the playbook. No additional installation is needed on your workstation.
Step 3: Configure the inventory#
Edit hosts.yml to match your hardware. The default inventory describes a single Turing Pi
with four nodes:
turing_pis:
  hosts:
    turingpi:              # BMC hostname (must be reachable via SSH)
  vars:
    ansible_user: "{{ tpi_user }}"
turingpi_nodes:            # MUST be named <bmc_hostname>_nodes
  hosts:
    node01:
      slot_num: 1          # Physical slot (1-4, slot 1 nearest coin battery)
      type: pi4            # pi4 for CM4, rk1 for RK1
    node02:
      slot_num: 2
      type: rk1
      root_dev: /dev/nvme0n1   # Optional: migrate OS to NVMe
    node03:
      slot_num: 3
      type: rk1
      root_dev: /dev/nvme0n1
    node04:
      slot_num: 4
      type: rk1
      root_dev: /dev/nvme0n1
  vars:
    ansible_user: "{{ ansible_account }}"
all_nodes:
  children:
    turingpi_nodes:
Key points:
The node group name must be
<bmc_hostname>_nodes(e.g.turingpi_nodesfor BMC hostturingpi). The flash role uses this naming convention to discover nodes.slot_nummaps each node to its physical slot on the Turing Pi board.typedetermines which OS image to flash (rk1orpi4).root_devis optional — set it to migrate the OS from eMMC to NVMe after flashing.
Step 4: Configure the cluster#
Edit group_vars/all.yml — the primary Ansible configuration:
# Change these to match your environment
control_plane: node01 # Which node is the K3s control plane
cluster_domain: example.com # Your domain name
domain_email: you@example.com # For Let's Encrypt certificates
repo_remote: https://github.com/<your-username>/tpi-k3s-ansible.git
repo_branch: main # Git branch for ArgoCD to track
Then edit kubernetes-services/values.yaml — the ArgoCD runtime configuration:
repo_branch: main # Must match the value in all.yml
# OAuth2 authentication gateway — leave false for initial setup.
# Enable after completing the OAuth guide (docs/how-to/oauth-setup).
enable_oauth2_proxy: false
# OAuth2 email allowlist — GitHub-linked emails allowed to access
# protected services. Remove the defaults and add your own:
oauth2_emails:
- you@example.com
# NFS configuration (optional — only needed for LLM features)
rkllama:
  nfs:
    server: 192.168.1.3            # Your NFS server IP
    path: /path/to/rkllm/models    # NFS export path for rkllm models
llamacpp:
  nfs:
    server: 192.168.1.3
    path: /path/to/gguf/models
  model:
    file: "your-model.gguf"
Tip
If you do not have an NFS server or do not plan to use the LLM features (rkllama, llamacpp), you can leave the NFS settings as-is. The services will deploy but remain idle until configured.
Note
enable_oauth2_proxy controls whether cluster services require GitHub login.
Leave it false until you have completed the Set Up OAuth Authentication guide —
otherwise services like Grafana and Longhorn will return errors because the
OAuth proxy is not yet deployed.
Step 5: Run the playbook#
From a terminal inside the devcontainer:
ansible-playbook pb_all.yml -e do_flash=true
This single command:
tools — installs helm, kubectl, kubeseal in the devcontainer
flash — flashes Ubuntu 24.04 to each compute module via the BMC
known_hosts — updates SSH known_hosts for all nodes
servers — migrates OS to NVMe (if configured), dist-upgrades, installs dependencies
k3s — installs K3s control plane and worker nodes
cluster — installs ArgoCD and deploys all cluster services
The full process takes approximately 15–30 minutes depending on network speed and number of nodes.
Note
The -e do_flash=true flag is required for the initial flash. On subsequent runs, omit it
to skip flashing (the playbook will check existing state and skip completed steps).
The Ansible steps are idempotent and will skip flashing nodes that already have Ubuntu installed, but omitting do_flash protects against accidental re-flashing of nodes that are temporarily offline or have connectivity issues.
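On later runs you can also scope the playbook with standard Ansible options (these flags are generic Ansible, not repo-specific, and --check may not be fully supported by every role):

```shell
# Re-run against a single node, without re-flashing:
ansible-playbook pb_all.yml --limit node02

# Dry-run: report what would change without changing anything:
ansible-playbook pb_all.yml --check --diff
```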
Step 6: Verify the cluster#
After the playbook completes:
kubectl get nodes
Expected output shows all nodes in Ready state:
NAME     STATUS   ROLES                       AGE   VERSION
node01   Ready    control-plane,etcd,master   5m    v1.31.x+k3s1
node02   Ready    <none>                      4m    v1.31.x+k3s1
node03   Ready    <none>                      4m    v1.31.x+k3s1
node04   Ready    <none>                      4m    v1.31.x+k3s1
Check ArgoCD applications:
kubectl get applications -n argo-cd
All applications should eventually reach Synced and Healthy status.
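To see sync and health at a glance, you can project the relevant fields of the Application resource (.status.sync.status and .status.health.status are standard ArgoCD status fields) into columns:

```shell
# Show each Application's sync and health status in a compact table:
kubectl get applications -n argo-cd \
  -o custom-columns=NAME:.metadata.name,SYNC:.status.sync.status,HEALTH:.status.health.status
```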
Next Steps#
Bootstrap the Cluster — set up admin passwords and access cluster services
Set Up DNS, TLS & Cloudflare Tunnel — expose services to the internet via Cloudflare
Set Up OAuth Authentication — secure services with GitHub OAuth (enable enable_oauth2_proxy after setup)
Manage Sealed Secrets — manage encrypted secrets in the repository
Architecture — understand how all the pieces fit together