{"id":388,"date":"2025-11-30T12:23:31","date_gmt":"2025-11-30T10:23:31","guid":{"rendered":"https:\/\/jochem.jochemenbianca.nl\/?p=388"},"modified":"2025-12-02T22:48:47","modified_gmt":"2025-12-02T20:48:47","slug":"talos-terraform-op-proxmox","status":"publish","type":"post","link":"https:\/\/jochem.jochemenbianca.nl\/?p=388","title":{"rendered":"Talos met behulp van terraform op Proxmox"},"content":{"rendered":"\n<p>Om te gaan testen met kubernetes wilde ik een test omgeving hebben die ik snel op en af kan bouwen. Het meerst hanidige is dan om gebruik te maken van terraform. Met een terraform apply maak je dan een kubernetes cluster aan en met terraform destroy kan je hem dan weer helemaal afbreken. Er kunnen nog wat verbeteringen aan onderstaande worden aangebracht, maar het moest snel \ud83d\ude42<br>Zorg ervoor dat je een machine hebt waarop je terraform geinstalleerd hebt en die naar de proxmox node kan om API calls te doen. Zorg er dus ook voor dat je een user hebt die een API token heeft.<br>Verder moet je ook talosctl en kubectl installeren.<br>Maak ook in je DHCP server enkele reserveringen met de MAC addressen uit de main.tf en de ip adressen eraan gekoppeld. Aangezien we name,ijk de metal-amd64.iso gebruiken kunnen we geen ip adressen opgeven bij het starten van de talos VM.<br>Maak vervolgens het bestand main.tf aan met de volgende inhoud. 
Adjust the values under locals to your liking.<\/p>\n\n\n<pre>\n<code>\n\nterraform {\n  required_providers {\n    proxmox = {\n      source  = \"telmate\/proxmox\"\n      version = \"3.0.2-rc04\"\n    }\n    talos = {\n      source  = \"siderolabs\/talos\"\n      version = \"0.9.0\"\n    }\n    local = {\n      source  = \"hashicorp\/local\"\n      version = \"~> 2.4\"\n    }\n  }\n}\n\n# variable \"proxmox_server\" {}\n# variable \"proxmox_tokenid\" {}\n# variable \"proxmox_api_token_secret\" {\n#   sensitive = true\n# }\n\nprovider \"proxmox\" {\n  pm_api_url          = \"https:\/\/${var.proxmox_server}:8006\/api2\/json\"\n  pm_api_token_id     = var.proxmox_tokenid\n  pm_api_token_secret = var.proxmox_api_token_secret\n  pm_tls_insecure     = true\n}\n\nlocals {\n  cluster_name   = \"talos-cluster\"\n  k8s_endpoint   = \"https:\/\/192.168.11.130:6443\"\n\n  iso_file       = \"ssd-disk1:iso\/metal-amd64.iso\"\n  proxmox_node   = \"pmx\"\n  network_bridge = \"vmbr0\"\n  disk_storage   = \"ssd-disk1\"\n  gateway        = \"192.168.11.1\"\n  subnet_cidr    = \"24\"\n\n  nodes = {\n    controlplane = {\n      ip     = \"192.168.11.130\"\n      vmid   = 140\n      role   = \"controlplane\"\n      cores  = 2\n      mac    = \"BC:24:11:21:6A:01\"\n      memory = 4096\n      disk   = \"\/dev\/sda\"\n      hostname = \"talos-cp\"\n    }\n    worker1 = {\n      ip     = \"192.168.11.131\"\n      vmid   = 141\n      role   = \"worker\"\n      cores  = 2\n      mac    = \"BC:24:11:D4:E0:F7\"\n      memory = 2048\n      disk   = \"\/dev\/sda\"\n      hostname = \"talos-w1\"\n    }\n    worker2 = {\n      ip     = \"192.168.11.132\"\n      vmid   = 142\n      role   = \"worker\"\n      cores  = 2\n      mac    = \"BC:24:11:51:D1:E1\"\n      memory = 2048\n      disk   = \"\/dev\/sda\"\n      hostname = \"talos-w2\"\n    }\n    worker3 = {\n      ip     = \"192.168.11.133\"\n      vmid   = 143\n      role   = \"worker\"\n      cores  = 2\n      mac    = 
\"BC:24:11:B7:09:E4\"\n      memory = 2048\n      disk   = \"\/dev\/sda\"\n      hostname = \"talos-w3\"\n    }\n  }\n}\n\n# --- Talos Secrets ---\nresource \"talos_machine_secrets\" \"machine_secrets\" {}\n\n# --- Talos Machine Configurations ---\ndata \"talos_machine_configuration\" \"config\" {\n  for_each         = local.nodes\n  cluster_name     = local.cluster_name\n  cluster_endpoint = local.k8s_endpoint\n  machine_type     = each.value.role\n  talos_version    = \"v1.7.0\"\n\n  machine_secrets = talos_machine_secrets.machine_secrets.machine_secrets\n\n  config_patches = [\n    yamlencode({\n      machine = {\n        install = {\n          disk       = each.value.disk\n          bootloader = true\n        }\n        network = {\n          nameservers = [\"192.168.10.1\"]\n          interfaces = [{\n            interface = \"eth0\"\n            dhcp      = false\n            addresses = [\"${each.value.ip}\/${local.subnet_cidr}\"]\n            routes    = [{\n              network = \"0.0.0.0\/0\"\n              gateway = local.gateway\n            }]\n          }]\n          hostname = each.value.hostname\n        }\n      }\n    })\n  ]\n}\n\n# --- Proxmox VM Creation ---\nresource \"proxmox_vm_qemu\" \"talos_nodes\" {\n  for_each    = local.nodes\n  name        = \"talos-${each.key}\"\n  target_node = local.proxmox_node\n  vmid        = each.value.vmid\n\n  cores  = each.value.cores\n  memory = each.value.memory\n  bios   = \"ovmf\"\n  scsihw = \"virtio-scsi-pci\"\n\n  disk {\n    slot    = \"scsi0\"\n    type    = \"disk\"\n    storage = local.disk_storage\n    size    = \"32G\"\n  }\n\n  network {\n    id      = 0\n    bridge  = local.network_bridge\n    model   = \"virtio\"\n    macaddr = each.value.mac\n  }\n\n  disk {\n    slot = \"ide2\"\n    type = \"cdrom\"\n    iso  = local.iso_file\n  }\n\n  boot  = \"order=scsi0;ide2\"\n  agent = 0\n}\n\n# --- Talos Client Config ---\ndata \"talos_client_configuration\" \"client\" {\n  cluster_name         = 
local.cluster_name\n  client_configuration = talos_machine_secrets.machine_secrets.client_configuration\n  endpoints            = [local.nodes.controlplane.ip]\n  nodes                = [for n in local.nodes : n.ip]\n}\n\nresource \"local_file\" \"talosconfig_file\" {\n  content  = data.talos_client_configuration.client.talos_config\n  filename = \"${path.module}\/talosconfig\"\n}\n\n# # --- Apply Configurations ---\n# resource \"talos_machine_configuration_apply\" \"apply\" {\n#   for_each = local.nodes\n#   client_configuration        = talos_machine_secrets.machine_secrets.client_configuration\n#   machine_configuration_input = data.talos_machine_configuration.config[each.key].machine_configuration\n#   node                        = each.value.ip\n#   apply_mode                  = \"auto\"\n\n#   depends_on = [\n#     proxmox_vm_qemu.talos_nodes,\n#     local_file.talosconfig_file\n#   ]\n# }\n\n\n# Apply CP first\nresource \"talos_machine_configuration_apply\" \"controlplane\" {\n  for_each                   = { for k,v in local.nodes : k => v if v.role == \"controlplane\" }\n  client_configuration        = talos_machine_secrets.machine_secrets.client_configuration\n  machine_configuration_input = data.talos_machine_configuration.config[each.key].machine_configuration\n  node                        = each.value.ip\n  apply_mode                  = \"auto\"\n  # Optional: give install time\n  # timeouts { create = \"20m\" } # adjust if your storage is slow\n}\n\n# Apply workers after CP\nresource \"talos_machine_configuration_apply\" \"workers\" {\n  depends_on                  = [talos_machine_configuration_apply.controlplane]\n  for_each                    = { for k,v in local.nodes : k => v if v.role == \"worker\" }\n  client_configuration        = talos_machine_secrets.machine_secrets.client_configuration\n  machine_configuration_input = data.talos_machine_configuration.config[each.key].machine_configuration\n  node                        = each.value.ip\n  
apply_mode                  = \"auto\"\n  # timeouts { create = \"20m\" }\n}\n\n\n# --- Update Boot Order & Reboot via Proxmox API ---\n# resource \"null_resource\" \"update_boot_order\" {\n#   depends_on = [talos_machine_configuration_apply.workers]\n\n#   provisioner \"local-exec\" {\n#     command = <<-EOT\n#       echo \"Updating boot order to disk-only and rebooting VMs via Proxmox API...\"\n#       for vmid in ${join(\" \", [for n in local.nodes : n.vmid])}; do\n#         curl -s -k -X PUT \\\n#           -H \"Authorization: PVEAPIToken=${var.proxmox_tokenid}=${var.proxmox_api_token_secret}\" \\\n#           -d \"boot=order=scsi0\" \\\n#           \"https:\/\/${var.proxmox_server}:8006\/api2\/json\/nodes\/${local.proxmox_node}\/qemu\/$vmid\/config\"\n#         curl -s -k -X POST \\\n#           -H \"Authorization: PVEAPIToken=${var.proxmox_tokenid}=${var.proxmox_api_token_secret}\" \\\n#           \"https:\/\/${var.proxmox_server}:8006\/api2\/json\/nodes\/${local.proxmox_node}\/qemu\/$vmid\/status\/reboot\"\n#       done\n#     EOT\n#   }\n# }\n\n# --- Bootstrap Cluster ---\nresource \"talos_machine_bootstrap\" \"bootstrap\" {\n  depends_on          = [talos_machine_configuration_apply.workers]\n  node                = local.nodes.controlplane.ip\n  client_configuration = talos_machine_secrets.machine_secrets.client_configuration\n  endpoint            = local.nodes.controlplane.ip\n}\n\n# --- Retrieve Kubeconfig ---\ndata \"talos_cluster_kubeconfig\" \"kubeconfig\" {\n  depends_on           = [talos_machine_bootstrap.bootstrap]\n  node                 = local.nodes.controlplane.ip\n  client_configuration = talos_machine_secrets.machine_secrets.client_configuration\n}\n\nresource \"local_file\" \"kubeconfig_file\" {\n  content  = data.talos_cluster_kubeconfig.kubeconfig.kubeconfig_raw\n  filename = \"${path.module}\/kubeconfig\"\n}\n\noutput \"config\" {\n  value     = data.talos_client_configuration.client.talos_config\n  sensitive = true\n}\n\noutput 
\"kubeconfig\" {\n  value     = data.talos_cluster_kubeconfig.kubeconfig.kubeconfig_raw\n  sensitive = true\n}\n\n<\/code>\n<\/pre>\n\n\n<p>Next we create a file terraform.tfvars with the following content. Here too, adjust everything to your liking. Note that proxmox_user and proxmox_password are declared but not actually used by the provider here, since we authenticate with the API token:<\/p>\n\n\n<pre>\n<code>\nproxmox_server   = \"192.168.11.12\"\nproxmox_user     = \"root\"\nproxmox_password = \"Password\"  # Sensitive data\nproxmox_tokenid  = \"root@pam!terraform\" \nproxmox_api_token_secret = \"secret_token_api\" # Sensitive data\nproxmox_node     = \"pmx\"\nstorage_name     = \"ssd-disk1\"  # Or whatever your actual storage name is\niso_file         = \"ssd-disk1:iso\/metal-amd64.iso\"  # Optional, you can modify as needed\n\n\n<\/code>\n<\/pre>\n\n\n<p>And a variables.tf with the following content:<\/p>\n\n\n<pre><code>\nvariable \"proxmox_server\" {\n  description = \"The Proxmox server's IP or hostname\"\n  type        = string\n}\n\nvariable \"proxmox_user\" {\n  description = \"The Proxmox username\"\n  type        = string\n}\n\nvariable \"proxmox_password\" {\n  description = \"The Proxmox password\"\n  type        = string\n  sensitive   = true\n}\n\nvariable \"proxmox_api_token_secret\" {\n  description = \"The Proxmox secret\"\n  type        = string\n  sensitive   = true\n}\n\nvariable \"proxmox_tokenid\" { \n  description = \"The Proxmox tokenid\"\n  type        = string\n  sensitive   = true\n}\n\nvariable \"proxmox_node\" {\n  description = \"The Proxmox node to deploy the VMs\"\n  type        = string\n}\n\nvariable \"storage_name\" {\n  description = \"The storage resource name in Proxmox\"\n  type        = string\n}\n\nvariable \"iso_file\" {\n  description = \"The path to the Talos ISO file\"\n  type        = string\n  default     = \"iso\/metal-amd64.iso\"  # Optional default value\n}\n\n<\/code><\/pre>\n\n\n<p>After that we can run <code>terraform init<\/code>, <code>terraform validate<\/code>, <code>terraform plan<\/code> and <code>terraform 
apply<\/code>. Our cluster is then built.<br>To use kubectl and talosctl afterwards, you can run the following commands from the directory in which you started terraform apply:<br><code>kubectl --kubeconfig kubeconfig get nodes<\/code><br>and<br><code>talosctl --talosconfig .\/talosconfig -n 192.168.11.131 -e 192.168.11.130 health<\/code><br>The files kubeconfig and talosconfig are created by the terraform apply.<br><br>It is useful to make the pods reachable from outside the cluster as well. Since this is a bare-metal environment, MetalLB is a good fit for that. To use it, we do the following:<br><code>kubectl apply --kubeconfig kubeconfig -f https:\/\/raw.githubusercontent.com\/metallb\/metallb\/v0.14.5\/config\/manifests\/metallb-native.yaml<\/code><br><br>Next we create a file metallb\/ipaddresses.yml with the following content:<\/p>\n\n\n<pre><code>\napiVersion: metallb.io\/v1beta1\nkind: IPAddressPool\nmetadata:\n  name: first-pool\n  namespace: metallb-system\nspec:\n  addresses:\n  - 192.168.11.136-192.168.11.139\n<\/code><\/pre>\n\n\n<p>And a file metallb\/layer2.yml:<\/p>\n\n\n<pre><code>\napiVersion: metallb.io\/v1beta1\nkind: L2Advertisement\nmetadata:\n  name: first-pool\n  namespace: metallb-system\n<\/code><\/pre>\n\n\n<p>We then use these in the following commands:<br><code>kubectl create -f metallb\/ipaddresses.yml --kubeconfig kubeconfig<\/code><br><code>kubectl create -f metallb\/layer2.yml --kubeconfig kubeconfig<\/code><\/p>\n\n\n\n<p>It is also nice to run something via this load balancer. 
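A note on the L2Advertisement above: with an empty spec it advertises every address pool in its namespace, which is fine here. If you ever want to bind it explicitly to first-pool, a sketch of the optional spec:<\/p>\n\n\n<pre><code>\napiVersion: metallb.io\/v1beta1\nkind: L2Advertisement\nmetadata:\n  name: first-pool\n  namespace: metallb-system\nspec:\n  ipAddressPools:\n  - first-pool\n<\/code><\/pre>\n\n\n<p>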
We create an nginx deployment:<br><code>kubectl create deploy nginx --image nginx:latest --kubeconfig kubeconfig<\/code><br>and then expose it via the load balancer with:<br><code>kubectl expose deploy nginx --port 80 --type LoadBalancer --kubeconfig kubeconfig<\/code><br>If everything went well, we can now see what the \"external\" IP address is with the following command:<br><code>kubectl get svc --kubeconfig kubeconfig<\/code><\/p>\n","protected":false},"excerpt":{"rendered":"<p>To experiment with Kubernetes I wanted a test environment that I can quickly build up and tear down again. The most convenient way to do that is with Terraform: with a terraform apply you create a Kubernetes cluster, and with terraform destroy you can [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4,3,10,9],"tags":[],"class_list":["post-388","post","type-post","status-publish","format-standard","hentry","category-kubernetes","category-open-source","category-proxmox","category-terraform"],"_links":{"self":[{"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=\/wp\/v2\/posts\/388","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=388"}],"version-history":[{"count":7,"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=\/wp\/v2\/posts\/388\/revisions"}],"predecessor-version":[{"id":398,"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_
route=\/wp\/v2\/posts\/388\/revisions\/398"}],"wp:attachment":[{"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=388"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=388"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jochem.jochemenbianca.nl\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=388"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}