Introduction
Recently I started experimenting with Proxmox Virtual Environment while also evaluating it as a replacement for VMware ESXi. This post discusses some of my experiences configuring a Proxmox VE “lab environment” of virtual machines on a separate network segment, using Ansible and Terraform. Trying things out and blogging about them helps motivate me to learn and to document what I’ve learned. Like a lot of my posts, this one delves into several different topics and is not meant as a how-to guide (there are better examples of that out there); it mainly documents what I’ve been working on. I do hope, however, that it gives someone else some ideas, and perhaps they can improve on what I’ve done here.
Host setup
For this exercise I configured two systems, each of which had two NICs: one on my home network and the other on the isolated lab network. The NICs on my home network are connected using an extra unmanaged (a.k.a. dumb) switch, but I also tried the same setup on a Cisco Catalyst switch with access ports. The hosts were initially built with Debian 13/Trixie, and then Proxmox VE 9 was installed on top of Debian using an Ansible playbook, similar to what I did in my last post. It isn’t necessary to set up Proxmox VE this way, of course; you can also install it from the ISO. The code snippets below contain my inventory file, network interfaces template, and Ansible playbook.
[proxmox]
proxmox1 br0_nic=enp2s0 br1_nic=enp3s0
proxmox2 br0_nic=enp1s0 br1_nic=enp2s0

[leader]
proxmox1

[followers]
proxmox2
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
auto {{ br0_nic }}
iface {{ br0_nic }} inet manual
auto {{ br1_nic }}
iface {{ br1_nic }} inet manual
auto vmbr0
iface vmbr0 inet static
address {{ ansible_facts['default_ipv4']['address'] }}/{{ ansible_facts['default_ipv4']['prefix'] }}
gateway {{ ansible_facts['default_ipv4']['gateway'] }}
bridge-ports {{ br0_nic }}
bridge-stp off
bridge-fd 0
auto vmbr1
iface vmbr1 inet manual
bridge-ports {{ br1_nic }}
bridge-stp off
bridge-fd 0
---
- hosts: proxmox
tasks:
- name: Check if virtualization extensions are enabled
ansible.builtin.shell: "grep -q -E 'svm|vmx' /proc/cpuinfo"
register: grep_cpuinfo_result
- name: Fail if virtualization extensions are disabled
ansible.builtin.fail:
msg: 'Virtualization extensions are disabled'
when: grep_cpuinfo_result.rc > 0
- name: Set root password
ansible.builtin.user:
name: root
password: '$6$WURnD5v2tOP7DCFA$OMTFPoI5jiyGS.Y8JvuWS7mo2HOZ0XKrIsMHRoJpkRoRpU0L3duylSotOebfLgVeVVE8du5AjJTtv...aAVyu1'
- name: Add Proxmox VE repository key
ansible.builtin.get_url:
url: https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg
dest: /usr/share/keyrings/proxmox-archive-keyring.gpg
checksum: sha256:136673be77aba35dcce385b28737689ad64fd785a797e57897589aed08db6e45
- name: Add Proxmox VE repository
ansible.builtin.deb822_repository:
name: pve-install-repo
types: deb
uris: http://download.proxmox.com/debian/pve
suites: trixie
components: pve-no-subscription
signed_by: /usr/share/keyrings/proxmox-archive-keyring.gpg
- name: Install Proxmox kernel
ansible.builtin.apt:
name: proxmox-default-kernel
state: present
update_cache: true
- name: Reboot system to load Proxmox kernel
ansible.builtin.reboot:
when: ansible_facts['kernel'] is not match(".*-pve")
- name: Re-gather facts
ansible.builtin.setup:
- name: Deploy networking configuration
ansible.builtin.template:
src: dualnics_interfaces.j2
dest: /etc/network/interfaces
backup: true
notify: Run ifreload
- name: Remove Debian kernel and os-prober
ansible.builtin.apt:
name:
- linux-image-amd64
- linux-image-6.12*
- os-prober
state: absent
when: ansible_facts['kernel'] is match(".*-pve")
- name: Install Proxmox VE packages
ansible.builtin.apt:
name:
- chrony
- open-iscsi
- postfix
- proxmox-ve
state: present
- name: Remove pve-enterprise repository
ansible.builtin.deb822_repository:
name: pve-enterprise
state: absent
- name: Disable the subscription message
ansible.builtin.replace:
path: /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
regexp: 'data\.status\.toLowerCase\(\) !== .active.'
replace: 'false'
backup: true
notify: Restart pveproxy
handlers:
- name: Run ifreload
ansible.builtin.command: /usr/sbin/ifreload -a
- name: Restart pveproxy
ansible.builtin.systemd:
name: pveproxy
state: restarted
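With the inventory and template in place, the playbook is run like any other. Assuming the files above are saved as inventory and pve-install.yml (both filenames are just my own choices, nothing significant), the invocation looks something like this:
ansible-playbook -i inventory -b pve-install.yml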
After getting Proxmox VE installed and running on the two hosts, I configured them as a cluster. This can be done in the GUI, of course, or with the Proxmox VE CLI, but I decided to do so using the Community.Proxmox collection for Ansible. I installed this on my Ansible control host with the command ansible-galaxy collection install community.proxmox. This took a bit of trial and error to get working, and it probably doesn’t save anything compared to doing it in the GUI. Notably, I had to figure out how to populate a fact from the first host with the cluster fingerprint and access it on any subsequent hosts. For production use, the root password should be encrypted with Ansible Vault!
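As a rough sketch of that last point, the plaintext root_password value in the vars file below could be replaced with output from ansible-vault encrypt_string (the password shown here is obviously just a placeholder), and the playbook would then be run with --ask-vault-pass or a vault password file:
ansible-vault encrypt_string 'SuperSecretRootPassword' --name 'root_password'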
---
cluster_name: 'labcluster'
nfs_name: 'nfs_shared'
nfs_ip: '192.168.1.25'
nfs_export: '/home/proxmox'
root_password: 'encrypt_me_with_ansible_vault_please'
---
- hosts: proxmox
gather_facts: false
tasks:
- name: Install python3-proxmoxer
ansible.builtin.apt:
name: python3-proxmoxer
state: present
- hosts: leader
tasks:
- name: Create a Proxmox VE Cluster
community.proxmox.proxmox_cluster:
state: present
api_host: "{{ ansible_facts['hostname'] }}"
api_user: root@pam
api_password: "{{ root_password }}"
validate_certs: false
link0: "{{ ansible_facts['default_ipv4']['address'] }}"
cluster_name: "{{ cluster_name }}"
- name: Get cluster info
community.proxmox.proxmox_cluster_join_info:
api_host: "{{ ansible_facts['hostname'] }}"
api_user: root@pam
api_password: "{{ root_password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
register: proxmox_cluster_join
- name: Set cluster info fact
ansible.builtin.set_fact:
proxmox_cluster_fp: "{{ proxmox_cluster_join['cluster_join']['nodelist'][0]['pve_fp'] }}"
cluster_nodes: "{{ proxmox_cluster_join['cluster_join']['nodelist'] | map(attribute='name') }}"
- name: Mount NFS Storage
ansible.builtin.command: "/usr/sbin/pvesm add nfs {{ nfs_name }} --server {{ nfs_ip }} --export {{ nfs_export }} --content \"images,rootdir,vztmpl,iso\""
when: "ansible_mounts | selectattr('mount', 'equalto', '/mnt/pve/' ~ nfs_name) | list | length == 0"
- hosts: followers
tasks:
- name: Join host to cluster
community.proxmox.proxmox_cluster:
state: present
api_host: "{{ ansible_facts['hostname'] }}"
api_user: root@pam
api_password: "{{ root_password }}"
validate_certs: false
fingerprint: "{{ hostvars[groups['leader'][0]]['proxmox_cluster_fp'] }}"
master_ip: "{{ hostvars[groups['leader'][0]]['ansible_facts']['default_ipv4']['address'] }}"
cluster_name: "{{ cluster_name }}"
when: ansible_facts['hostname'] not in hostvars[groups['leader'][0]]['cluster_nodes']
Before I proceed, I have a few comments on the Community.Proxmox modules. First, adding additional hosts to the cluster using the community.proxmox.proxmox_cluster module isn’t an idempotent (yes, I had to look up how to spell that) operation: once a host has been added, subsequent runs of the playbook will attempt to add it again. I had to hack around this limitation by creating a fact containing the list of cluster nodes and skipping the task if the hostname is already present in that list. Second, the community.proxmox.proxmox_storage module wasn’t usable, at least for me. It requires specifying the nodes parameter when adding storage, with a list of nodes to add the storage to, and there is no option for adding it to all nodes. This is particularly annoying when adding shared storage like an NFS mount. I thought I had found a way around this limitation by passing in the list of cluster nodes, but if a new node is added later, the storage mount does not get added to it. I eventually gave up and added my NFS mount using the command module.
Setting up an API user for Terraform
For provisioning systems, I used the Telmate/Proxmox provider for Terraform. The documentation page for the provider includes steps for creating an API user for use with Terraform. You can also use the root account for this purpose, but I thought I would try following the best practices in the instructions.
First, I logged into one of my Proxmox VE hosts and ran the following commands as instructed. Note: these are different for Proxmox 8 and older, and may change with subsequent versions:
sudo pveum role add TerraformProv -privs "Datastore.AllocateSpace Datastore.AllocateTemplate Datastore.Audit Pool.Allocate Pool.Audit Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.PowerMgmt SDN.Use"
sudo pveum user add terraform-prov@pve --password apipassword
sudo pveum aclmod / -user terraform-prov@pve -role TerraformProv
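Optionally, the new role and user can be listed afterwards to confirm they were created as expected:
sudo pveum role list
sudo pveum user list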
On my desktop, where Terraform is installed, I chose to create a Bash dotfile, ~/.proxmox, that prompts for the Proxmox API password when sourced and exports the environment variables used by the provider. This way the password isn’t stored insecurely somewhere on disk.
export PM_USER="terraform-prov@pve"
read -s -p "Enter the password for ${PM_USER}: " PM_PASS
export PM_PASS
echo
Then source the file with source ~/.proxmox whenever you need to use Terraform with Proxmox VE. This of course is optional, and you can put the environment variables PM_USER and PM_PASS in your .bash_profile.
Creating an image template
For the virtual machine images, I tried out both the Debian 13 and Ubuntu 24.04 pre-built cloud images; Debian publishes its cloud image as debian-13-generic-amd64.qcow2, and Canonical publishes an equivalent image for Ubuntu. I saved these to the NFS share mounted on both hosts (/mnt/pve/nfs_shared/images in my case) and then created a template with the following commands:
# Create the VM
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
# Import the image
qm set 9000 --scsi0 nfs_shared:0,import-from=/mnt/pve/nfs_shared/images/debian-13-generic-amd64.qcow2,format=qcow2
# Add the cloud-init drive
qm set 9000 --ide2 nfs_shared:cloudinit
# Set boot order
qm set 9000 --boot order=scsi0
# Enable serial console
qm set 9000 --serial0 socket --vga serial0
# Create template
qm template 9000
qm set 9000 --name "debian13-template"
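For reference, fetching the images onto the NFS share looked roughly like the following. The URLs are the usual published locations for these cloud images at the time of writing, so treat them as examples that may change rather than gospel:
cd /mnt/pve/nfs_shared/images
wget https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img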
Terraform-ing a “lab”
The lab environment consists of the following virtual machines, spread across a cluster of two Proxmox VE hosts. I performed this exercise with both Debian 13 and Ubuntu 24.04 virtual machines.
- Two “router” VMs, one on each host. These VMs provide DNS and gateway services to the environment. They have two NICs, one on my home network and the other on lab network. They also act as the inbound jump hosts into the environment. More on the configuration of these later.
- Three VMs for MariaDB Galera, a multi-master database server solution that requires a minimum of three systems. galera2 is located on the second VM host.
- Two load balancer VMs running Keepalived and HAProxy.
- Two web servers running Nginx and PHP-FPM.
I borrowed a lot of ideas for my Terraform layout from this post on the Proxmox forum. In addition to provisioning the VMs, I also used Terraform templates to create my Ansible inventory file and DNS zone file. Below is the list of files used by Terraform:
- provider.tf: contains the provider to be installed. It can also be put at the top of main.tf.
- variables.tf: a list of site-specific variables used by the project.
- locals.tf: merged variables.
- main.tf: the main Terraform code file, containing all resources.
- zone_file.tftpl: a Bind zone file template, which gets deployed to the router VMs with Ansible.
- inventory.tftpl: an Ansible inventory file template.
The provider.tf file is the simplest one. It probably isn’t necessary; you can put the provider block at the top of main.tf.
terraform {
required_providers {
proxmox = {
source = "Telmate/proxmox"
version = "3.0.2-rc07"
}
}
}
variables.tf contains string and map variables. I’ve broken out types of VMs into their own maps, such as routers, load balancers, etc., as they have different default values. This also allows placement of VMs of the same type on different hosts. These maps are merged together in locals.tf for use in the templates.
variable "proxmox_host" {
type = string
default = "proxmoxvm1.ridpath.mbr"
}
variable "domain" {
type = string
default = "ridpath.lab"
}
variable "ssh_key" {
type = string
default = "ssh-ed25519 redacted lab_key"
}
variable "vm_template" {
type = string
default = "debian13-template"
}
variable "ext_gw" {
type = string
default = "192.168.1.1"
}
variable "int_gw" {
type = string
default = "192.168.20.1"
}
variable "nameservers" {
type = list(string)
default = ["192.168.20.2","192.168.20.3"]
}
variable "router_vms" {
type = map(object({
ip = string,
ext_ip = string,
vmhost = optional(string, "proxmoxvm1")
}))
default = {
"router1" = {
ext_ip = "192.168.1.79"
ip = "192.168.20.2"
},
"router2" = {
ext_ip = "192.168.1.73"
ip = "192.168.20.3"
vmhost = "proxmoxvm2"
}
}
}
variable "galera_vms" {
type = map(object({
ip = string,
vmhost = optional(string, "proxmoxvm1"),
os_disk = optional(number, 10),
data_disk = optional(number, 30)
}))
default = {
"galera1" = {
ip = "192.168.20.20"
},
"galera2" = {
ip = "192.168.20.21"
vmhost = "proxmoxvm2"
},
"galera3" = {
ip = "192.168.20.22"
},
}
}
variable "lb_vms" {
type = map(object({
ip = string,
vmhost = optional(string, "proxmoxvm1"),
disk = optional(number, 10),
ram = optional(number, 1024),
cpu = optional(number, 1)
}))
default = {
"lb1" = {
ip = "192.168.20.23"
},
"lb2" = {
ip = "192.168.20.24"
vmhost = "proxmoxvm2"
}
}
}
variable "www_vms" {
type = map(object({
ip = string,
vmhost = optional(string, "proxmoxvm1"),
disk = optional(number, 10),
ram = optional(number, 1024),
cpu = optional(number, 1)
}))
default = {
"www1" = {
ip = "192.168.20.30"
cpu = 2
ram = 2048
disk = 30
},
"www2" = {
ip = "192.168.20.31"
vmhost = "proxmoxvm2"
cpu = 2
ram = 2048
disk = 30
}
}
}
locals {
all_vms = merge(var.router_vms,var.galera_vms,var.lb_vms,var.www_vms)
}
locals {
merged_vms = merge(var.lb_vms,var.www_vms)
}
Next, the two templates.
%{ for key, params in merge(merged_vms,galera_vms) ~}
${key} ansible_host=${params.ip}
%{ endfor ~}
[galera]
%{ for key in keys(galera_vms) ~}
${key}
%{ endfor ~}
[lb]
%{ for key in keys(lb_vms) ~}
${key}
%{ endfor ~}
[routers]
%{ for key, params in router_vms ~}
${key} ansible_host=${params.ext_ip}
%{ endfor ~}
[www]
%{ for key in keys(www_vms) ~}
${key}
%{ endfor ~}
$TTL 300
@ IN SOA router.${domain}. root.${domain}. (
2026010700 ; Serial
300 ; refresh (5 minutes)
600 ; retry (10 minutes)
1209600 ; expire (2 weeks)
300 ; minimum (5 minutes)
)
A ${int_gw}
NS router.${domain}.
router IN A ${int_gw}
%{ for key, params in all_vms ~}
${key} IN A ${params.ip}
%{ endfor ~}
main.tf is the file that puts it all together:
provider "proxmox" {
pm_api_url = "https://${var.proxmox_host}:8006/api2/json"
pm_tls_insecure = true
}
resource "local_file" "ansible_inventory" {
filename = "${path.module}/inventory"
content = templatefile("${path.module}/inventory.tftpl", {
merged_vms = local.merged_vms,
galera_vms = var.galera_vms,
router_vms = var.router_vms,
lb_vms = var.lb_vms,
www_vms = var.www_vms
})
file_permission = "0640"
}
resource "local_file" "zone_file" {
filename = "${path.module}/zone_file"
content = templatefile("${path.module}/zone_file.tftpl", {
all_vms = local.all_vms,
domain = var.domain,
int_gw = var.int_gw
})
file_permission = "0640"
}
resource "proxmox_vm_qemu" "router_vms" {
for_each = var.router_vms
name = each.key
target_node = each.value.vmhost
clone = var.vm_template
full_clone = false
memory = 2048
scsihw = "virtio-scsi-pci"
boot = "order=scsi0"
agent = 1
# cloud-init settings
ciuser = "ansible"
sshkeys = var.ssh_key
ipconfig0 = "ip=${each.value.ext_ip}/24,gw=${var.ext_gw}"
ipconfig1 = "ip=${each.value.ip}/24"
nameserver = "127.0.0.1"
searchdomain = var.domain
cpu {
cores = 2
}
network {
id = 0
bridge = "vmbr0"
model = "virtio"
}
network {
id = 1
bridge = "vmbr1"
model = "virtio"
}
serial {
id = 0
}
vga {
type = "serial0"
}
disk {
slot = "ide0"
type = "cloudinit"
storage = "local"
}
disk {
type = "disk"
slot = "scsi0"
size = "20G"
storage = "local"
format = "qcow2"
}
startup_shutdown {
order = -1
shutdown_timeout = -1
startup_delay = -1
}
}
resource "proxmox_vm_qemu" "galera_vms" {
for_each = var.galera_vms
name = each.key
target_node = each.value.vmhost
clone = var.vm_template
full_clone = false
memory = 8192
scsihw = "virtio-scsi-pci"
boot = "order=scsi0"
agent = 1
# cloud-init settings
ciuser = "ansible"
sshkeys = var.ssh_key
ipconfig0 = "ip=${each.value.ip}/24,gw=${var.int_gw}"
nameserver = join(" ", var.nameservers)
searchdomain = var.domain
cpu {
cores = 2
}
network {
id = 0
bridge = "vmbr1"
model = "virtio"
}
serial {
id = 0
}
vga {
type = "serial0"
}
disk {
slot = "ide0"
type = "cloudinit"
storage = "local"
}
disk {
type = "disk"
slot = "scsi0"
size = "${each.value.os_disk}G"
storage = "local"
format = "qcow2"
}
disk {
type = "disk"
slot = "scsi1"
size = "${each.value.data_disk}G"
storage = "local"
format = "qcow2"
}
startup_shutdown {
order = -1
shutdown_timeout = -1
startup_delay = -1
}
}
resource "proxmox_vm_qemu" "merged_vms" {
for_each = local.merged_vms
name = each.key
target_node = each.value.vmhost
clone = var.vm_template
full_clone = false
memory = each.value.ram
scsihw = "virtio-scsi-pci"
boot = "order=scsi0"
agent = 1
# cloud-init settings
ciuser = "ansible"
sshkeys = var.ssh_key
ipconfig0 = "ip=${each.value.ip}/24,gw=${var.int_gw}"
nameserver = var.int_gw
searchdomain = var.domain
cpu {
cores = each.value.cpu
}
network {
id = 0
bridge = "vmbr1"
model = "virtio"
}
serial {
id = 0
}
vga {
type = "serial0"
}
disk {
slot = "ide0"
type = "cloudinit"
storage = "local"
}
disk {
type = "disk"
slot = "scsi0"
size = "${each.value.disk}G"
storage = "local"
format = "qcow2"
}
startup_shutdown {
order = -1
shutdown_timeout = -1
startup_delay = -1
}
}
The Terraform project is initialized with the command terraform init, then applied with terraform apply. terraform plan can also be used to view changes prior to applying them.
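Putting the earlier dotfile together with these commands, a typical run from my desktop looks roughly like this:
# Load PM_USER and PM_PASS into the environment (prompts for the password)
source ~/.proxmox
# Download the provider and initialize the working directory
terraform init
# Preview the changes, then apply them
terraform plan
terraform apply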
Configuring the router VMs with Ansible
The router VMs act as the gateways into and out of the environment, as each has interfaces on both the local network and the lab network. Each Proxmox VE host has a router VM, with router1 being the primary; services will fail over to router2 if router1 goes down. Both VMs provide DNS services, while only router1 provides DHCP, since DHCP isn’t an important service in this environment (the VMs have static IP addresses). router1 also functions as an SSH proxy host for Ansible, allowing Ansible to be run from outside the environment, such as from my desktop where Terraform is run.
The routers need certain variables for the Ansible tasks. These are in group_vars/all and group_vars/routers. ansible_ssh_common_args is set to empty in group_vars/routers. The dnssec_exceptions variable is optional. It allows excluding certain domains from DNSSEC validation, such as an internal domain.
---
apt_proxy: 'apt-proxy.ridpath.mbr'
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q ansible@192.168.1.79"'
subnet: '192.168.20'
---
ansible_ssh_common_args: ''
dhcp_dns_servers:
  - '192.168.20.2'
  - '192.168.20.3'
dns_forwarders:
  - '192.168.1.1'
dnssec_exceptions:
  - 'ridpath.mbr'
domain: 'ridpath.lab'
keepalived_interface: 'eth1'
nft_lan_dev: 'eth0'
nft_lab_dev: 'eth1'
virtual_router_id: 1
vip_addresses:
  - '192.168.20.1'
vrrp_instance: 'ROUTER'
The playbook for the router servers includes Ansible roles: four common to both servers, and two applied to router1 only. Below is the code snippet for router.yml, as well as a directory tree for the included roles:
---
- hosts: routers
roles:
- base
- nftables
- keepalived
- bind
- hosts: router1
roles:
- dhcp
- ssh_proxy_host
├── base
│ └── tasks
│ └── main.yml
├── bind
│ ├── defaults
│ │ └── main.yml
│ ├── handlers
│ │ └── main.yml
│ ├── tasks
│ │ └── main.yml
│ └── templates
│ ├── named.conf.local.j2
│ └── named.conf.options.j2
├── dhcp
│ ├── handlers
│ │ └── main.yml
│ ├── tasks
│ │ └── main.yml
│ └── templates
│ ├── dhcpd.conf.j2
│ └── dhcpd-reservations.conf.j2
├── keepalived
│ ├── defaults
│ │ └── main.yml
│ ├── handlers
│ │ └── main.yml
│ ├── tasks
│ │ └── main.yml
│ ├── templates
│ │ └── keepalived.conf.j2
│ └── vars
│ └── main.yml
├── nftables
│ ├── handlers
│ │ └── main.yml
│ ├── tasks
│ │ └── main.yml
│ └── templates
│ └── nftables.conf.j2
└── ssh_proxy_host
├── files
│ └── ssh_config
└── tasks
└── main.yml
The base role is mainly used to enable my apt-cacher-ng proxy in the apt config on all of the hosts. You can probably omit this if saving bandwidth isn’t a concern for you. I also use it to install the qemu-guest-agent.
---
- name: Set deb mirrors to http
ansible.builtin.copy:
dest: "/etc/apt/mirrors/{{ item }}.list"
content: "http://deb.debian.org/{{ item }}\n"
loop:
- debian
- debian-security
when: ansible_facts['distribution'] == 'Debian'
- name: Set Debian proxy
ansible.builtin.copy:
dest: /etc/apt/apt.conf.d/99proxy
content: "Acquire::http::Proxy \"http://{{ apt_proxy }}:3142\";\n"
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
- name: Install qemu-guest-agent
ansible.builtin.apt:
name: qemu-guest-agent
state: present
Below are all of the files used by the Bind DNS role. Of note, if the systemd-resolved service is enabled (which is the case in the Debian cloud image), the role will need to disable the DNS stub listener, since it listens on port 53/UDP just like Bind/named—somewhat of an annoyance.
---
dns_transfer_hosts:
  - 127.0.0.1
---
- name: Restart systemd-resolved
ansible.builtin.systemd:
name: systemd-resolved
state: restarted
- name: restart named
ansible.builtin.systemd:
name: named
state: restarted
---
- name: install bind9
ansible.builtin.apt:
name:
- bind9
- bind9-dnsutils
state: present
- name: deploy named.conf.local
ansible.builtin.template:
src: named.conf.local.j2
dest: /etc/bind/named.conf.local
owner: root
group: bind
mode: 0640
notify: restart named
- name: deploy named.conf.options
ansible.builtin.template:
src: named.conf.options.j2
dest: /etc/bind/named.conf.options
owner: root
group: bind
mode: 0640
notify: restart named
- name: deploy zone file
ansible.builtin.copy:
src: "zone_file"
dest: "/var/cache/bind/db.{{ domain }}"
owner: root
group: bind
mode: 0640
notify: restart named
- name: Populate service facts
ansible.builtin.service_facts:
- name: Disable DNSStubListener if systemd-resolved is enabled
when: ansible_facts['services']['systemd-resolved.service']['status'] | default('not-found') != 'not-found'
block:
- name: Create /etc/systemd/resolved.conf.d
ansible.builtin.file:
path: /etc/systemd/resolved.conf.d
state: directory
- name: Create /etc/systemd/resolved.conf.d/nodnsstub.conf
ansible.builtin.copy:
dest: /etc/systemd/resolved.conf.d/nodnsstub.conf
content: "[Resolve]\nDNSStubListener=no\n"
notify: Restart systemd-resolved
- name: enable and start named
ansible.builtin.systemd:
name: named
state: started
enabled: true
ignore_errors: true
zone "{{ domain }}" IN {
type master;
file "db.{{ domain }}";
};
options {
directory "/var/cache/bind";
listen-on port 53 { any; };
listen-on-v6 port 53 { none; };
allow-query {
127.0.0.1;
{{ subnet }}.0/24;
};
allow-recursion {
127.0.0.1;
{{ subnet }}.0/24;
};
allow-transfer { {{ dns_transfer_hosts | join('; ') }}; };
forwarders { {{ dns_forwarders | join('; ') }}; };
//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
dnssec-validation auto;
{% if dnssec_exceptions is defined %}
validate-except
{
{% for domain in dnssec_exceptions %}
"{{ domain }}";
{% endfor %}
};
{% endif %}
};
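Since the zone file comes out of a Terraform template and the configs out of Jinja2 templates, it’s worth sanity-checking the rendered results on one of the routers. The checkers that ship with BIND can do this; for my lab domain that looks something like:
# Check named.conf and everything it includes
sudo named-checkconf
# Check the generated zone file for the lab domain
sudo named-checkzone ridpath.lab /var/cache/bind/db.ridpath.lab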
The DHCP role is mostly the same as in my previous post, Configuring Ubuntu and Debian as a router. This environment doesn’t use DHCP reservations, but the capability is there if needed.
---
- name: restart isc-dhcp-server
ansible.builtin.systemd:
name: isc-dhcp-server
state: restarted
---
- name: install isc-dhcp-server
ansible.builtin.apt:
name: isc-dhcp-server
state: present
- name: deploy dhcpd.conf
ansible.builtin.template:
src: dhcpd.conf.j2
dest: /etc/dhcp/dhcpd.conf
notify: restart isc-dhcp-server
- name: deploy dhcpd-reservations.conf
ansible.builtin.template:
src: dhcpd-reservations.conf.j2
dest: /etc/dhcp/dhcpd-reservations.conf
notify: restart isc-dhcp-server
when: dhcp_reservations is defined
- name: set interfaces in /etc/default/isc-dhcp-server
ansible.builtin.lineinfile:
path: /etc/default/isc-dhcp-server
regexp: '^(#)?INTERFACESv4'
line: "INTERFACESv4=\"eth1\""
notify: restart isc-dhcp-server
- name: enable and start isc-dhcp-server
ansible.builtin.systemd:
name: isc-dhcp-server
state: started
enabled: true
default-lease-time 600;
max-lease-time 7200;
authoritative;
subnet {{ subnet }}.0 netmask 255.255.255.0 {
range dynamic-bootp {{ subnet }}.150 {{ subnet }}.250;
option subnet-mask 255.255.255.0;
option broadcast-address {{ subnet }}.255;
option routers {{ subnet }}.1;
option domain-name "{{ domain }}";
option domain-name-servers {{ dhcp_dns_servers | join(', ') }};
{% if tftp_server is defined %}
filename "pxelinux.0";
next-server {{ tftp_server }};
{% endif %}
}
{% if dhcp_reservations is defined %}
include "/etc/dhcp/dhcpd-reservations.conf";
{% endif %}
{% for key, value in dhcp_reservations.items() %}
host {{ key }} {
hardware ethernet {{ value.mac }};
fixed-address {{ value.ip }};
{% if value.block_route is defined and value.block_route == true %}
option domain-name-servers 127.0.0.1;
option routers 0.0.0.0;
{% endif %}
}
{% endfor %}
keepalived is used to manage the router’s virtual IP (VIP), 192.168.20.1, which is the default gateway IP for the environment. If router1 goes offline, router2 will take over the default gateway VIP and handle routing for the environment. While this feature is unnecessary in a lab environment, I thought it would be interesting to test out for production use. The files used by the keepalived Ansible role for this project are below:
---
keepalived_interface: "{{ ansible_facts['default_ipv4']['interface'] }}"
---
- name: restart keepalived
ansible.builtin.systemd:
name: keepalived
state: restarted
---
- name: Install keepalived
ansible.builtin.apt:
name: keepalived
state: present
- name: Create /etc/keepalived/keepalived.conf
ansible.builtin.template:
src: keepalived.conf.j2
dest: /etc/keepalived/keepalived.conf
mode: 0644
notify: restart keepalived
- name: enable and start keepalived
ansible.builtin.systemd:
name: keepalived
state: started
enabled: true
- name: Populate service facts
ansible.builtin.service_facts:
- name: Add firewalld rule for keepalived if enabled
ansible.posix.firewalld:
state: enabled
rich_rule: 'rule protocol value="vrrp" accept'
permanent: true
immediate: true
when: ansible_facts['services']['firewalld.service']['status'] | default('not-found') != 'not-found'
global_defs {
router_id {{ ansible_facts['hostname'] }}
}
vrrp_instance {{ vrrp_instance }} {
state {{ state }}
interface {{ keepalived_interface }}
virtual_router_id {{ virtual_router_id }}
priority {{ priority }}
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
{% for vip in vip_addresses %}
{{ vip }}
{% endfor %}
}
}
---
state: "{{ 'MASTER' if ansible_facts['hostname'][-1] | int == 1 else 'BACKUP' }}"
priority: "{{ 100 if ansible_facts['hostname'][-1] | int == 1 else 50 }}"
nftables is the firewall used on the router servers (it is the replacement for iptables). The firewall is configured to forward traffic from the lab interface to the LAN interface, and to allow inbound SSH traffic on the LAN interface (which might not be ideal if the LAN/WAN interface has a public IP address). This is similar to the setup I described in my Linux router post. The role takes parameters from group_vars/routers and applies them to the /etc/nftables.conf template.
---
- name: restart nftables
ansible.builtin.systemd:
name: nftables
state: restarted
---
- name: remove ufw
ansible.builtin.apt:
name: ufw
state: absent
- name: install nftables
ansible.builtin.apt:
name: nftables
state: present
- name: Create /etc/nftables.conf
ansible.builtin.template:
src: nftables.conf.j2
dest: /etc/nftables.conf
mode: 0700
notify: restart nftables
- name: enable and start nftables
ansible.builtin.systemd:
name: nftables
state: started
enabled: true
- name: enable ip_forward
ansible.posix.sysctl:
name: net.ipv4.ip_forward
value: '1'
sysctl_file: /etc/sysctl.d/ip_forward.conf
#!/usr/sbin/nft -f
flush ruleset
define DEV_LAN = {{ nft_lan_dev }}
define DEV_LAB = {{ nft_lab_dev }}
define NET_LAB = {{ subnet }}.0/24
table ip global {
chain inbound_lan {
icmp type echo-request limit rate 5/second accept
# allow SSH
tcp dport ssh accept
}
chain inbound_lab {
# accepting ping (icmp-echo-request) for diagnostic purposes.
icmp type echo-request limit rate 5/second accept
# allow SSH, DHCP, and DNS
ip protocol . th dport vmap { tcp . 22 : accept, udp . 53 : accept, tcp . 53 : accept, udp . 67 : accept }
ip protocol vrrp ip daddr 224.0.0.0/8 accept
}
chain inbound {
type filter hook input priority 0; policy drop;
# Allow traffic from established and related packets, drop invalid
ct state vmap { established : accept, related : accept, invalid : drop }
# allow loopback traffic, anything else jump to chain for further evaluation
iifname vmap { lo : accept, $DEV_LAN : jump inbound_lan, $DEV_LAB : jump inbound_lab }
# the rest is dropped by the above policy
}
chain forward {
type filter hook forward priority 0; policy drop;
# Allow traffic from established and related packets, drop invalid
ct state vmap { established : accept, related : accept, invalid : drop }
# Forward traffic from the lab network to the LAN
meta iifname . meta oifname { $DEV_LAB . $DEV_LAN } accept
# the rest is dropped by the above policy
}
chain postrouting {
type nat hook postrouting priority 100; policy accept;
# masquerade private IP addresses
ip saddr $NET_LAB meta oifname $DEV_LAN counter masquerade
}
}
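One improvement I’d consider here (not something I’ve wired in yet) is syntax-checking the rendered ruleset before it gets loaded, either by hand or by adding validate: 'nft -c -f %s' to the template task:
# Check the rendered ruleset without applying it
sudo nft -c -f /etc/nftables.conf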
There isn’t much to the ssh_proxy_host role: it just pushes a .ssh/config file and a .ssh/id_ed25519 key to the ansible user’s home directory on router1:
---
- name: Create .ssh/config
ansible.builtin.copy:
src: ssh_config
dest: /home/ansible/.ssh/config
owner: ansible
group: ansible
mode: 0600
- name: Create .ssh/id_ed25519
ansible.builtin.copy:
src: id_ed25519
dest: /home/ansible/.ssh/id_ed25519
owner: ansible
group: ansible
mode: 0600
no_log: true
Host *
    LogLevel QUIET
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
You should, of course, take the necessary steps to secure your SSH private key, such as keeping it out of the project so it doesn’t accidentally get committed.
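One hedged option, if the key does live in the role (the path below assumes the role layout shown above), is to encrypt it in place with Ansible Vault; the copy module decrypts vault-encrypted source files automatically when deploying them, so the task above keeps working:
# Encrypt the private key inside the role; copy will decrypt it at deploy time
ansible-vault encrypt roles/ssh_proxy_host/files/id_ed25519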
After these roles are created, the playbook can then be run with ansible-playbook -b -D router.yml.
Configuring a MariaDB Galera cluster
For configuring MariaDB Galera with Ansible, I chose to create a single monolithic playbook (as opposed to using roles), because some tasks must be run only on the first host in the cluster. The playbook creates and formats the data directory partitions, opens firewall ports with firewalld, and initializes (bootstraps) the Galera cluster. It also configures the Galera servers to accept requests from HAProxy. When adding the disk creation part of the playbook, I ran into an issue in which the /dev/sdb device identifier kept swapping between the data and OS disks. I worked around this by setting a fact with the current device name, based on which device had a link ID of scsi-0QEMU_QEMU_HARDDISK_drive-scsi1. This is somewhat of a hack, and I’d like to come up with a better solution at some point.
MariaDB was configured to allow health checks from HAProxy, following HAProxy’s guide on the subject.
- name: Set data_drive_id fact
ansible.builtin.set_fact:
data_drive_id: "{{ item.key }}"
loop: "{{ ansible_facts['devices'] | dict2items }}"
when: item.value.links.ids[0] | default('foobar') == "scsi-0QEMU_QEMU_HARDDISK_drive-scsi1"
- name: Create partition
community.general.parted:
device: "/dev/{{ data_drive_id }}"
number: 1
state: present
I also created a role that installs firewalld, and included that role in the playbook. Below are the code snippets for the galera.yml playbook and firewalld role.
---
- name: Install firewalld
ansible.builtin.apt:
name: firewalld
state: present
- name: Start and enable firewalld
ansible.builtin.systemd:
name: firewalld
state: started
enabled: true
---
- hosts: lb
- hosts: galera
roles:
- base
- firewalld
tasks:
- name: Collect IPs for the galera and lb host groups
ansible.builtin.set_fact:
galera_ips: >-
{%- set ips = [] -%}
{%- for host in groups['galera'] -%}
{%- set _ = ips.append(hostvars[host]['ansible_facts']['default_ipv4']['address']) -%}
{%- endfor -%}
{{ ips }}
lb_ips: >-
{%- set ips = [] -%}
{%- for host in groups['lb'] -%}
{%- set _ = ips.append(hostvars[host]['ansible_facts']['default_ipv4']['address']) -%}
{%- endfor -%}
{{ ips }}
delegate_to: localhost
- name: Install parted and xfsprogs
ansible.builtin.apt:
name:
- parted
- xfsprogs
state: present
- name: Set data_drive_id fact
ansible.builtin.set_fact:
data_drive_id: "{{ item.key }}"
loop: "{{ ansible_facts['devices'] | dict2items }}"
when: item.value.links.ids[0] | default('foobar') == "scsi-0QEMU_QEMU_HARDDISK_drive-scsi1"
- name: Create partition
community.general.parted:
device: "/dev/{{ data_drive_id }}"
number: 1
state: present
- name: Format partition
community.general.filesystem:
fstype: xfs
dev: "/dev/{{ data_drive_id }}1"
- name: Create mount point
ansible.builtin.file:
path: /var/lib/mysql
state: directory
- name: Re-gather facts
ansible.builtin.setup:
- name: Mount filesystem
ansible.posix.mount:
path: /var/lib/mysql
src: "UUID={{ ansible_facts['device_links']['uuids'][ data_drive_id + '1' ][0] }}"
fstype: xfs
state: mounted
- name: Add MySQL firewalld service
ansible.posix.firewalld:
service: mysql
state: enabled
permanent: true
immediate: true
- name: Add Galera firewalld ports
ansible.posix.firewalld:
port: "{{ item }}/tcp"
state: enabled
permanent: true
immediate: true
loop:
- 4444
- 4567
- 4568
- name: Install Galera/MariaDB packages
ansible.builtin.apt:
name:
- mariadb-server
- python3-pymysql
state: present
- name: Set parameters in /etc/mysql/mariadb.conf.d/50-server.cnf
community.general.ini_file:
path: /etc/mysql/mariadb.conf.d/50-server.cnf
section: mariadbd
option: "{{ item.option }}"
value: "{{ item.value }}"
loop:
- { option: 'bind-address', value: '0.0.0.0' }
- { option: 'proxy_protocol_networks', value: "{{ lb_ips | join(',') }}" }
- { option: 'server-id', value: "{{ ansible_facts['hostname'][-1] }}" }
- { option: 'log_slave_updates', value: 'ON' }
- { option: 'log-bin', value: '/var/lib/mysql/master-bin' }
- { option: 'log-bin-index', value: '/var/lib/mysql/master-bin.index' }
- { option: 'gtid_domain_id', value: "{{ ansible_facts['hostname'][-1] }}" }
- { option: 'wsrep_gtid_mode', value: 'ON' }
- { option: 'wsrep_gtid_domain_id', value: 0 }
- name: Set parameters in /etc/my.cnf.d/galera.cnf
community.general.ini_file:
path: /etc/mysql/mariadb.conf.d/60-galera.cnf
section: galera
option: "{{ item.option }}"
value: "{{ item.value }}"
loop:
- { option: 'wsrep_on', value: 'ON' }
- { option: 'wsrep_provider', value: '/usr/lib/galera/libgalera_smm.so' }
- { option: 'wsrep_cluster_name', value: 'matt_wsrep_cluster' }
- { option: 'wsrep_cluster_address', value: "gcomm://{{ galera_ips | join(',') }}" }
- { option: 'wsrep_node_name', value: "{{ ansible_facts['hostname'] }}" }
- { option: 'wsrep_node_address', value: "{{ ansible_facts['default_ipv4']['address'] }}" }
- { option: 'binlog_format', value: 'ROW' }
- { option: 'wsrep_slave_threads', value: 16 }
- { option: 'wsrep_retry_autocommit', value: 2 }
- name: Check if /var/lib/mysql/gvwstate.dat exists
ansible.builtin.stat:
path: /var/lib/mysql/gvwstate.dat
register: gvwstate_stat
ignore_errors: true
- name: Set new_cluster fact
ansible.builtin.set_fact:
galera_initialized: "{{ gvwstate_stat.stat.exists }}"
- name: ensure mariadb is stopped
ansible.builtin.systemd:
name: mariadb
state: stopped
when: galera_initialized == False
- hosts: galera1
gather_facts: false
tasks:
- name: run galera_new_cluster if needed
ansible.builtin.command: galera_new_cluster
when: galera_initialized == False
- hosts: galera
gather_facts: false
tasks:
- name: start and enable mariadb
ansible.builtin.systemd:
name: mariadb
state: started
enabled: true
- hosts: galera1
tasks:
- name: Add haproxy user
community.mysql.mysql_user:
name: haproxy
host: "{{ item }}"
login_unix_socket: /run/mysqld/mysqld.sock
loop: "{{ lb_ips }}"
Configuring an HAProxy Load Balancer
This setup uses HAProxy to handle requests to the MariaDB servers and the web servers. A virtual IP is shared between the two load balancer VMs, with Keepalived providing automatic failover between the two nodes. For MariaDB, I set up read-write and read-only front-ends, so that writes only go to one Galera host while reads can go to the other two hosts. I haven’t tested this setup thoroughly, however. The HTTPS front-end is in TCP mode, with Nginx on the web servers handling SSL termination instead of HAProxy.
The keepalived and firewalld roles are the same as from the other server types, though the parameters are different.
Below is a directory tree showing the list of files used by the load balancers:
├── group_vars
│   ├── all
│   ├── lb
├── inventory
├── lb.yml
├── roles
│   ├── base
│   │   └── tasks
│   │       └── main.yml
│   ├── firewalld
│   │   └── tasks
│   │       └── main.yml
│   ├── haproxy
│   │   ├── files
│   │   │   └── haproxy.cfg
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   ├── keepalived
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   ├── templates
│   │   │   └── keepalived.conf.j2
│   │   └── vars
│   │       └── main.yml
First, the group_vars file for the load balancers, which only contains parameters for Keepalived:
---
virtual_router_id: 2
vip_addresses:
  - '192.168.20.35'
vrrp_instance: 'HAPROXY'
Next, the playbook, which can be executed with ansible-playbook. It includes all of the roles for HAProxy.
---
- hosts: lb
roles:
- base
- firewalld
- keepalived
- haproxy
Next, the files for the haproxy role. Note: I did not parameterize the haproxy.cfg file, because I didn’t see a need for it at the time.
---
- name: Restart haproxy
ansible.builtin.systemd:
name: haproxy
state: restarted
---
- name: Install haproxy
ansible.builtin.apt:
name: haproxy
state: present
- name: Add haproxy firewalld services
ansible.posix.firewalld:
service: "{{ item }}"
state: enabled
permanent: true
immediate: true
loop:
- http
- https
- mysql
- name: Add haproxy firewalld ports
ansible.posix.firewalld:
port: "{{ item }}/tcp"
state: enabled
permanent: true
immediate: true
loop:
- 3307
- name: Deploy /etc/haproxy/haproxy.cfg
ansible.builtin.copy:
src: haproxy.cfg
dest: /etc/haproxy/haproxy.cfg
notify: Restart haproxy
- name: Enable net.ipv4.ip_nonlocal_bind
ansible.posix.sysctl:
name: 'net.ipv4.ip_nonlocal_bind'
value: '1'
sysctl_file: /etc/sysctl.d/ip_nonlocal_bind.conf
- name: Enable haproxy service
ansible.builtin.systemd:
name: haproxy
state: started
enabled: true
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
log global
mode tcp
option tcplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend fe_mariadb_rw
bind *:3306
use_backend be_mysqld_rw
backend be_mysqld_rw
option mysql-check user haproxy
server galera1 192.168.20.20:3306 check send-proxy-v2
server galera2 192.168.20.21:3306 backup check send-proxy-v2
frontend fe_mariadb_ro
bind *:3307
use_backend be_mysqld_ro
backend be_mysqld_ro
option mysql-check user haproxy
server galera2 192.168.20.21:3306 check send-proxy-v2
server galera3 192.168.20.22:3306 check send-proxy-v2
server galera1 192.168.20.20:3306 backup check send-proxy-v2
frontend fe_https
bind 192.168.20.35:443
use_backend be_https
backend be_https
option httpchk
server www1 192.168.20.30:443 check-ssl verify none
server www2 192.168.20.31:443 check-ssl verify none
frontend fe_http
bind 192.168.20.35:80
use_backend be_http
backend be_http
server www1 192.168.20.30:80 check
server www2 192.168.20.31:80 check
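I haven’t exercised these front-ends much yet, but a basic smoke test from inside the lab is to hit the VIP on each port. The database user below is hypothetical; the haproxy health-check user created earlier can’t run real queries:
# Read-write and read-only MariaDB front-ends on the VIP
mysql -h 192.168.20.35 -P 3306 -u someuser -p -e "SELECT @@hostname;"
mysql -h 192.168.20.35 -P 3307 -u someuser -p -e "SELECT @@hostname;"
# Web front-end (Nginx on the web servers terminates TLS)
curl -k https://192.168.20.35/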
Configuring Nginx and PHP-FPM
I will include my Nginx and PHP-FPM roles in this post, even though they aren’t particularly interesting. These roles don’t actually configure any virtual hosts or web applications; this is done by other roles not included here. Below are the directory structure and code snippets.
├── nginx
│   ├── files
│   │   ├── 01-php-fpm.conf
│   │   └── nginx.conf
│   ├── handlers
│   │   └── main.yml
│   └── tasks
│       └── main.yml
├── php
│   ├── handlers
│   │   └── main.yml
│   ├── tasks
│   │   └── main.yml
│   └── vars
│       └── main.yml
---
- hosts: www
roles:
- base
- firewalld
- php
- nginx
upstream php-fpm {
server unix:/run/php/php-fpm.sock;
}
The Nginx SSL settings use the best practices from the Mozilla SSL Configuration Generator as of 1/2026.
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;
events {
worker_connections 1024;
}
http {
sendfile on;
tcp_nopush on;
types_hash_max_size 2048;
server_tokens off; # Recommended practice is to turn this off
include /etc/nginx/mime.types;
default_type application/octet-stream;
add_header Strict-Transport-Security "max-age=63072000" always;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ecdh_curve X25519:prime256v1:secp384r1;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers on;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
ssl_stapling on;
ssl_stapling_verify on;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log;
include /etc/nginx/conf.d/*.conf;
}
---
- name: restart nginx
ansible.builtin.systemd:
name: nginx
state: restarted
---
- name: Allow http and https
ansible.posix.firewalld:
service: "{{ item }}"
state: enabled
permanent: true
immediate: true
loop:
- http
- https
- name: Install Nginx
ansible.builtin.apt:
name: nginx
state: present
- name: Deploy /etc/nginx/nginx.conf
ansible.builtin.copy:
src: nginx.conf
dest: /etc/nginx/nginx.conf
notify: restart nginx
- name: Create /etc/nginx/conf.d/01-php-fpm.conf
ansible.builtin.copy:
src: 01-php-fpm.conf
dest: /etc/nginx/conf.d/01-php-fpm.conf
notify: restart nginx
- name: Gather the package facts
ansible.builtin.package_facts:
The vars file allows this code to be used with both Debian 13 and Ubuntu 24.04; it’s unnecessary if only one OS release will ever be used.
---
php_os_to_ver:
noble: '8.3'
trixie: '8.4'
php_ver: "{{ php_os_to_ver[ansible_facts['distribution_release']] | default('8.4') }}"
---
- name: restart php-fpm
ansible.builtin.systemd:
name: php{{ php_ver }}-fpm
state: restarted
---
- name: install PHP packages
ansible.builtin.apt:
name:
- php{{ php_ver }}-apcu
- php{{ php_ver }}-common
- php{{ php_ver }}-curl
- php{{ php_ver }}-fpm
- php{{ php_ver }}-imagick
- php{{ php_ver }}-intl
- php{{ php_ver }}-mbstring
- php{{ php_ver }}-mysql
- php{{ php_ver }}-xml
state: present
- name: create /var/log/php-fpm
ansible.builtin.file:
path: /var/log/php-fpm
state: directory
owner: www-data
group: www-data
mode: 0700
- name: Set parameters in /etc/php/{{ php_ver }}/fpm/pool.d/www.conf
community.general.ini_file:
path: /etc/php/{{ php_ver }}/fpm/pool.d/www.conf
section: www
option: "{{ item.option }}"
value: "{{ item.value }}"
loop:
- { option: 'pm.max_children', value: '50' }
- { option: 'pm.start_servers', value: '5' }
- { option: 'pm.min_spare_servers', value: '5' }
- { option: 'pm.max_spare_servers', value: '35' }
- { option: 'php_admin_value[error_log]', value: '/var/log/php-fpm/www-error.log' }
- { option: 'php_admin_flag[log_errors]', value: 'on' }
notify: restart php-fpm
- name: Set parameters in /etc/php/{{ php_ver }}/fpm/php.ini
community.general.ini_file:
path: /etc/php/{{ php_ver }}/fpm/php.ini
section: PHP
option: "{{ item.option }}"
value: "{{ item.value }}"
loop:
- { option: 'upload_max_filesize', value: '5M' }
notify: restart php-fpm
Conclusion
This was a meandering post (like a lot of my posts are). For me, though, it was a means by which I could improve my Terraform skills, attempt to do some more complex tasks with inline templates in Ansible, and dive deeper into Proxmox VE. As always, if you read any of this, thank you!