Configure a MariaDB Galera Cluster with Puppet

One of the ways to get multi-master replication with high availability in MySQL is to set up a Galera cluster. As a project to learn more about Galera, I decided to see whether I could build out a cluster using Puppet and the MySQL Forge module. I was able to test this successfully with either MariaDB 5.5 or 10.0 on CentOS 6. In this example, I configured three systems, which is the recommended minimum number of cluster members for Galera.

The first thing you will need is the MariaDB 5.5 or 10.0 Yum repository. To configure it, create a class similar to the one below:

class galera::repo {

  yumrepo { 'mariadb55':
    baseurl  => "http://yum.mariadb.org/5.5/centos${::osmajrelease}-amd64/",
    descr    => 'MariaDB 5.5',
    gpgcheck => 0,
  }

}
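
If you would rather run MariaDB 10.0, the same resource can simply point at the 10.0 tree. This is a sketch only; if you rename the resource, remember to update the Yumrepo['mariadb55'] references used throughout the rest of this post:

  yumrepo { 'mariadb100':
    baseurl  => "http://yum.mariadb.org/10.0/centos${::operatingsystemmajrelease}-amd64/",
    descr    => 'MariaDB 10.0',
    gpgcheck => 0,
  }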

The only package that you need to have Puppet install with a specific package resource is the Galera package itself. The other packages are installed by the MySQL Forge module. I’ve created an “install” class to manage this single resource below:

class galera::install {

  package { 'galera':
    ensure  => present,
    require => Yumrepo['mariadb55']
  }

}

You need to ensure that the puppetlabs/mysql module is available on your Puppet master. If you’re using R10K to manage your modules, just add the following to your Puppetfile:

mod 'puppetlabs/mysql'

If not, you would install it by running sudo puppet module install puppetlabs-mysql. If you’re using the puppetlabs/firewall module to manage iptables on your nodes, you will also need to create a class that opens the necessary TCP ports for Galera:

class galera::firewall {

  firewall { '102 open ports for galera cluster':
    state  => 'NEW',
    dport  => ['3306', '4567', '4444'],
    proto  => 'tcp',
    action => 'accept',
  }

}

Next, create a class for your parameters that are read in from Hiera:

class galera::params {

  $galera_cluster = hiera(galera_cluster)
  $galera_servers = any2array(hiera(galera_servers))
  $sst_pw         = hiera(sst_pw)
  $root_password  = hiera(mysql_root_pw)

}

Before going on to the next step, I should explain the purpose of the above parameters. $galera_cluster is the name assigned to the cluster itself. If you would like, you can leave this out altogether and just set it statically in your configuration file; however, if you are going to have multiple Galera clusters that use this module, you will need this parameter. $galera_servers is an array of the servers in the cluster, excluding the IP of the server itself. So if you have three servers in your cluster with IPs of 192.168.1.50, 192.168.1.51, and 192.168.1.52, then in the host Hiera YAML file for 192.168.1.50 you would specify galera_servers as the following:

galera_servers:
  - '192.168.1.51'
  - '192.168.1.52'

And so forth for 192.168.1.51 and 192.168.1.52. $root_password is of course the MySQL root account password. Since this is not hashed and is stored as plain text, I recommend setting up EYAML to encrypt it. More information on that can be found here. Finally, $sst_pw is the plain-text password for the SST user that Galera uses to sync data between the nodes in the cluster. Simply using the root account as the SST user would work as well, but for security purposes I recommend setting up a separate SST account that only has the permissions required to perform replication.
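
Putting this together, the host Hiera YAML file for 192.168.1.50 might look something like the following (the cluster name and passwords are hypothetical placeholders, and with EYAML the two passwords would be encrypted values instead):

---
galera_cluster: 'galera_test'
galera_servers:
  - '192.168.1.51'
  - '192.168.1.52'
mysql_root_pw: 'examplerootpassword'
sst_pw: 'examplesstpassword'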

Next, create a configuration class that sets up the MariaDB client and server, creates the required account, and configures the Galera cluster:

class galera::config (
  $galera_cluster = $galera::params::galera_cluster,
  $galera_servers = $galera::params::galera_servers,
  $sst_pw         = $galera::params::sst_pw,
  $root_password  = $galera::params::root_password,
) inherits galera::params {

  class { '::mysql::client':
    package_name => 'MariaDB-client',
    require      => Yumrepo['mariadb55'],
  }

  class { '::mysql::server':
    package_name            => 'MariaDB-Galera-server',
    remove_default_accounts => true,
    service_enabled         => true,
    service_manage          => true,
    service_name            => 'mysql',
    root_password           => $root_password,
    override_options        => {
      'mysqld' => {
        'bind-address'           => '0.0.0.0',
        'pid-file'               => '/var/lib/mysql/mysqld.pid',
        'binlog-format'          => 'ROW',
        'default-storage-engine' => 'innodb',
      },
    },
    require                 => Yumrepo['mariadb55'],
  }

  ::mysql_user { 'sst@%':
    ensure        => present,
    password_hash => mysql_password($sst_pw),
  }

  ::mysql_grant { 'sst@%/*.*':
    ensure     => present,
    options    => ['GRANT'],
    privileges => [ 'RELOAD', 'LOCK TABLES', 'REPLICATION CLIENT' ],
    table      => '*.*',
    user       => 'sst@%',
  }

  file { '/etc/my.cnf.d/server.cnf':
    ensure  => file,
    content => template('galera/server.cnf.erb'),
    owner   => 'mysql',
    group   => 'mysql',
    mode    => '0640',
    require => [ Package['mysql-server'], Package['galera'], ],
  }

}

First, this class reads in the necessary parameters from the galera::params class. It then calls the MySQL Forge module to install and configure the MariaDB client and server. The SST user is added and configured with the minimum privileges required. Finally, the template for /etc/my.cnf.d/server.cnf, which contains the MariaDB/Galera-specific settings, is pushed out with the appropriate permissions.

If you have read the code closely, you might have noticed that the file resource is missing a notify => Service['mysql'] to refresh the service each time the file changes. This is deliberate: to start a Galera cluster, you need to bootstrap the first server, and notify => Service['mysql'] would produce an error there because it calls /etc/init.d/mysql restart, which in turn errors out. For the purposes of this example, I've started the cluster manually on each server: on the first system, after running Puppet, you would run sudo service mysql bootstrap; on each subsequent system, after running Puppet, you would run sudo service mysql restart. However, if you're super-obsessed with making the whole setup automated, you can add code that does this. First add the following to your params class:

  $bootstrap = hiera(mysql_bootstrap, false)

  $init_command = $bootstrap ? {
    true  => 'service mysql stop && service mysql bootstrap',
    false => 'service mysql restart',
  }

In the host Hiera YAML file for the first server in the cluster, add the parameter mysql_bootstrap: true. Then add the following to your configuration class:

# Additional parameter at the top
  $init_command   = $galera::params::init_command,
  ...
  exec { 'init_galera':
    command => $init_command,
    unless  => 'test -f /var/lib/mysql/galera.cache',
    path    => '/bin:/sbin:/usr/bin:/usr/sbin',
    require => [ Service['mysqld'], Mysql_user['sst@%'], Mysql_grant['sst@%/*.*'], File['/etc/my.cnf.d/server.cnf'], ],
  }

Again, I don’t recommend this approach for building a production system, but I wanted to show that it’s possible.

Next, create a template for /etc/my.cnf.d/server.cnf. I've placed this particular template at galera/templates/server.cnf.erb.

[mariadb]
query_cache_size=0
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://<%= @galera_servers.join(',') %>
wsrep_cluster_name='<%= @galera_cluster %>'
wsrep_node_address='<%= @ipaddress %>'
wsrep_node_name='<%= @hostname %>'
wsrep_sst_method=rsync
wsrep_sst_auth=sst:<%= @sst_pw %>

Finally, create a base manifest that includes all of the above classes:

class galera {

  include galera::config
  include galera::firewall
  include galera::install
  include galera::repo

}
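
A node definition for one of the cluster members can then be as simple as the following (the domain name here is a hypothetical example):

node 'mariadb1.example.com' {
  include galera
}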

You can now commit the code to the repository that contains your Puppet code, include the class in your servers' manifests, and run Puppet on each of the Galera nodes. If you've chosen to manage the service manually, you won't have to run it on each one in a particular order. To bootstrap the cluster, execute sudo service mysql bootstrap on any one of the nodes. On all remaining servers, execute sudo service mysql restart. The commands below should then return results similar to the following on any of the servers:

matthew@mariadb1:~$ sudo mysql -e "show status like 'wsrep_connected'\G"
*************************** 1. row ***************************
Variable_name: wsrep_connected
        Value: ON
matthew@mariadb1:~$ sudo mysql -e "show status like 'wsrep_incoming_addresses'\G;"
*************************** 1. row ***************************
Variable_name: wsrep_incoming_addresses
        Value: 192.168.87.53:3306,192.168.87.56:3306,192.168.87.52:3306
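
Another quick sanity check is wsrep_cluster_size, which should report the total number of nodes that have joined the cluster (three in this example):

matthew@mariadb1:~$ sudo mysql -e "show status like 'wsrep_cluster_size'\G"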

Your Galera cluster is now ready to accept connections. When you insert data on any one of the servers, it should replicate to the other servers in the cluster. In future blog posts, I will explore different methods for load balancing a cluster.

Author’s note: in a previous revision of this post, I had separate parameters for the SST user’s password and password hash. Now, having learned about the mysql_password() function in the puppetlabs/mysql Forge module, I’ve removed the password hash parameter to simplify the directions somewhat.

Monitoring Windows with Nagios and Exported Resources in Puppet

Suppose you would like to monitor your Windows systems alongside your Linux systems in Nagios, but want to avoid configuring every single one manually. Also, you want these systems to be removed from Nagios automatically when you decommission them. Puppet, NSClient++, and Chocolatey provide an excellent means for accomplishing this. In this post I will explain how I got this up and running on my CentOS 6 Nagios server.

This article assumes that you're already using PuppetDB with exported resources in your Puppet environment. As this is outside the scope of this guide, I'm not going to go into how to set this up. However, it is fairly simple to do, especially with the puppetlabs/puppetdb Forge module. If you're using Puppet Enterprise, you're likely already using it. This also assumes that you're using Puppet to manage your Nagios server. In the class that manages your Nagios server, you will need these two lines to collect the exported resources from your nodes:

  Nagios_host    <<||>> { notify => Service['nagios'] }
  Nagios_service <<||>> { notify => Service['nagios'] }
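
For context, those two collectors live in whatever class manages your Nagios server, next to the service they notify. A heavily trimmed sketch, assuming the service resource is simply named 'nagios', might look like this:

class nagios::server {

  # ... packages, nagios.cfg, contacts, commands, and so on ...

  service { 'nagios':
    ensure => running,
    enable => true,
  }

  # Collect the hosts and services exported by agent nodes via PuppetDB
  Nagios_host    <<||>> { notify => Service['nagios'] }
  Nagios_service <<||>> { notify => Service['nagios'] }

}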

You will need to ensure that you have Chocolatey available as a package provider for your Windows nodes. My previous post explains how to configure Chocolatey with Puppet. Chocolatey will be used to install NSClient++, which is a monitoring agent for Windows that includes a NRPE server with which Nagios can interface. The documentation on NSClient++ can be found here.

To prepare your Nagios server for managing Windows hosts, first ensure that you have a Nagios command that can initiate NRPE commands against Windows clients. Because the check_nrpe plugin will throw SSL errors against NSClient++, I've defined a nagios_command resource for Windows systems, shown below, that does not use SSL:

  nagios_command { 'check_nrpe_win':
    command_name => 'check_nrpe_win',
    command_line => '$USER1$/check_nrpe -H $HOSTADDRESS$ -n -c $ARG1$ -t 30',
  }

Next, define a standard Nagios hostgroup for Windows that includes basic items to check, such as disk space. You may need to alter this for your own environment. I’ve put this under its own class, nagios::server::hostgroup_windows.

class nagios::server::hostgroup_windows {

  nagios_hostgroup { 'windows_hosts':
    alias => 'Windows Hosts',
  }

  nagios_service { 'check_win_cpu':
    check_command       => 'check_nrpe_win!check_cpu',
    use                 => 'generic-service',
    hostgroup_name      => 'windows_hosts',
    notification_period => '24x7',
    service_description => 'CPU Load',
    max_check_attempts  => 3,
  }

  nagios_service { 'check_win_mem':
    check_command       => 'check_nrpe_win!alias_memory',
    use                 => 'generic-service',
    hostgroup_name      => 'windows_hosts',
    notification_period => '24x7',
    service_description => 'Memory Usage',
    max_check_attempts  => 3,
  }

  nagios_service { 'check_win_drives':
    check_command       => 'check_nrpe_win!alias_space',
    use                 => 'generic-service',
    hostgroup_name      => 'windows_hosts',
    notification_period => '24x7',
    service_description => 'Disk Usage',
    max_check_attempts  => 3,
  }

  nagios_service { 'check_rdp':
    check_command       => 'check_rdp',
    use                 => 'generic-service',
    hostgroup_name      => 'windows_hosts',
    notification_period => '24x7',
    service_description => 'RDP',
    max_check_attempts  => 3,
  }

}
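
One caveat: the check_rdp service above assumes that a check_rdp command is already defined in your Nagios configuration. If yours is not, a plain TCP check against port 3389 works as a simple stand-in, declared alongside the check_nrpe_win command from earlier:

  nagios_command { 'check_rdp':
    command_name => 'check_rdp',
    command_line => '$USER1$/check_tcp -H $HOSTADDRESS$ -p 3389',
  }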

Next, create a template for nsclient.ini, which is the main configuration file for NSClient++. The one I’ve created for this guide is relatively simple; you can refer to the NSClient++ documentation for more options. The template, in my case, is located at nagios/templates/nsclient.ini.erb.

[/modules]
CheckSystem=enabled
CheckDisk=enabled
CheckExternalScripts=enabled
NRPEServer=enabled

[/settings/default]
allowed hosts = your_nagios_server_IP

[/settings/NRPE/server]
use ssl = false
allow arguments = true
allow nasty characters = false
port = 5666

[/settings/external scripts/alias]
alias_memory = check_memory "warn=free < 10%" "crit=free < 5%"
alias_space = check_drivesize "warn=free < 10%" "crit=free < 5%" drive=*

Finally, create a class that installs NSClient++ and manages the service on the Windows systems, and exports the nagios_host resource to the Nagios server.

class nagios::windows {

  package { 'nscp':
    ensure   => present,
  }

  service { 'nscp':
    ensure  => running,
    enable  => true,
    require => Package['nscp'],
  }

  file { 'C:/Program Files/NSClient++/nsclient.ini':
    ensure  => file,
    content => template('nagios/nsclient.ini.erb'),
    require => Package['nscp'],
    notify  => Service['nscp'],
  }

  @@nagios_host { $::fqdn:
    ensure             => present,
    alias              => $::hostname,
    address            => $::ipaddress,
    hostgroups         => 'windows_hosts',
    use                => 'generic-host',
    max_check_attempts => 3,
    check_command      => 'check_ping!100.0,20%!500.0,60%',
  }

}
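
A minimal node definition for the Windows host might then look like the sketch below (the hostname is a hypothetical example; windows::chocolatey is the class from my Chocolatey post that installs Chocolatey itself):

node 'winserver1.example.com' {
  include windows::chocolatey
  include nagios::windows
}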

Once you have committed these changes to your Puppet repository, include nagios::windows in the catalog for your Windows host and trigger a Puppet run on it to install NSClient++, enable the service, and export the resource to your Nagios server. Then, execute a Puppet run on your Nagios server. The result should look something like the screenshot below.

[Screenshot: the Windows host and its service checks appearing in the Nagios web interface]

And now your host should be available in Nagios for monitoring.

One additional note: if you include the Windows Hosts host group in the catalog of your Nagios server, you must then have at least one Windows nagios_host exported. Otherwise, Nagios will not start, as it does not allow empty host groups.

How I Configured Chocolatey with Puppet

Chocolatey is an apt-like package manager for Windows (https://chocolatey.org) that greatly simplifies the installation of software, especially with Puppet (versus having to call MSI packages with obscure switches that may or may not work). Many of my future tutorials that involve managing Windows with Puppet will require that Chocolatey be configured. Here I will explain how I’ve gotten Chocolatey up and running on Windows with Puppet.

This guide assumes that you have Puppet already installed on Windows. If you’re familiar with installing Puppet on Linux systems, it’s about the same for Windows. You would download and install the MSI package from the link here and afterwards sign the certificate request on your master server. You will also need to install the chocolatey/chocolatey and puppetlabs/powershell Forge modules. If you’re using R10K to manage your modules, just add the following to your Puppetfile:

mod 'puppetlabs/powershell'
mod 'chocolatey/chocolatey'

Otherwise, just install them using sudo puppet module install puppetlabs-powershell and sudo puppet module install chocolatey-chocolatey. Once these have been installed, I would then recommend defining some default parameters for the package and file resources at the top scope, in site.pp.

if $::kernel == 'windows' {
  File {
    owner              => undef,
    group              => undef,
    source_permissions => 'ignore',
  }

  Package {
    provider => 'chocolatey',
  }
}

These tell Puppet not to attempt to apply *nix-style permissions to Windows file resources and to use Chocolatey as the default provider for packages. Now create a class that installs Chocolatey itself. Since the chocolatey/chocolatey module currently is not capable of installing Chocolatey, your class will need to install it using an exec resource. I’ve named my class windows::chocolatey and have created it under windows/manifests/chocolatey.pp.

class windows::chocolatey {

  exec { 'install_chocolatey':
    command  => "set-executionpolicy unrestricted -force -scope process; (iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1')))>\$null 2>&1",
    provider => 'powershell',
    creates  => 'C:/ProgramData/chocolatey',
  }

}

The above command for installing Chocolatey is from the Chocolatey installation guide. It's possible that it may change in the near future, so you should refer to that page before setting up your exec. If this is for a lab or evaluation environment, you may also want to have Puppet use Chocolatey to keep Chocolatey itself up to date with the latest release:

  package { 'chocolatey':
    ensure  => latest,
    require => Exec['install_chocolatey'],
  }

Once you have created this module and committed it to the repository containing your custom modules, include the Chocolatey class (windows::chocolatey) in the catalog for your Windows node and initiate a Puppet run on it to apply the class. Now you can use Puppet to manage packages that have been made available by contributors to the Chocolatey project. A full listing can be found here. To manage a particular package with Puppet, declare it the same way you would a package on Linux:

class windows::git {

  package { 'git':
    ensure => installed,
  }

}
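
Because the chocolatey provider is versionable, you can also pin a package to a specific release rather than just using installed or latest. The version number below is a hypothetical example:

  package { 'git':
    # Relies on the chocolatey provider set as the default earlier in site.pp
    ensure => '1.9.5',
  }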

Why You Should Always Test Out New Puppet Forge Modules Before Upgrading

Modules from the Puppet Forge are a great resource. They simplify getting your systems managed under Puppet and save you from having to start from scratch in many cases. I have the utmost gratitude to the individuals who write or contribute to these modules. With that said, the people who contribute to them will sometimes introduce new features that diverge significantly from previous behavior, causing things to break. Because of this, in my lab environment I set most of the modules in my Puppetfile to ensure that the latest version is installed at all times, so that I can spot any potential issues. DO NOT DO THIS IN A PRODUCTION ENVIRONMENT, HOWEVER!

mod 'puppetlabs/concat', :latest
mod 'puppetlabs/firewall', :latest
mod 'puppetlabs/git', :latest

Prior to upgrading a Forge module, you should review all of the release notes going back to the version you're upgrading from. It should also be noted that this level of caution applies to "Puppet Labs-supported" modules as well as those created by community members. One example I recently encountered was with the puppetlabs/puppetdb module. As noted in the change log, the latest version, 5.0.0, has the manage_pg_repo parameter set to true by default, so it installs a version of Postgres from the Postgres Yum repo, while the previous version, 4.3.0, had this off by default and thus used CentOS's Postgres package. A demonstration in a Vagrant box running CentOS 6, below, shows the result of blindly upgrading.

First, I create a Vagrantfile that builds the initial environment:

$shell = <<SHELL
puppet module install puppetlabs/puppetdb --version=4.3.0 \
--modulepath=/vagrant/modules
SHELL

Vagrant.configure('2') do |config|
  config.vm.box = 'puppetlabs/centos-6.6-64-puppet'
  
  config.vm.provision "shell", inline: $shell

  config.vm.provision "puppet" do |puppet|
    puppet.module_path = "modules"
  end

end

In my Vagrant project folder, I create a simple manifests/default.pp manifest that just includes the PuppetDB module:

node default {

  include puppetdb

}

Issue a vagrant up and watch while it installs a PuppetDB server running on the OS's Postgres package. Once complete, vagrant ssh in and start up PuppetDB. It should be functioning properly.

[Screenshot: PuppetDB running normally inside the Vagrant VM]

Now I'm going to break PuppetDB with the updated module. First, run sudo puppet module install puppetlabs/puppetdb --modulepath=/etc/puppet/modules, so that it installs the latest version of the module inside the environment. Then run sudo puppet apply /vagrant/manifests/default.pp to apply the manifest you created when provisioning the VM.

[Screenshot: errors from the Puppet run after upgrading the puppetlabs/puppetdb module]

Guess what? If this were a production system, you would now be fixing PuppetDB. The module installed PostgreSQL 9.4 alongside the OS's 8.x Postgres, causing a conflict and preventing it from starting. It's quite apparent that the individuals who made this change did not actually test for this scenario. They probably assumed that everyone installing their module would be doing so on a clean system, which is often not the case.

The lesson here: Puppet Forge modules are an excellent resource, but you should research and, if possible, test the changes made in the latest versions before upgrading your production environment. Otherwise, there’s always the possibility you may break something even more critical than PuppetDB.
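
In practice, that means pinning the modules in your production Puppetfile to the versions you have actually tested and only bumping them deliberately, for example:

mod 'puppetlabs/puppetdb', '4.3.0'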

How to Set Up an OpenVPN Server with Puppet

Recently I decided that I wanted to set up an OpenVPN server so that I could access my lab environment at home over an encrypted connection. To automate the configuration, I decided to use the luxflux/openvpn Forge module. These directions are for a CentOS/Enterprise Linux 6 system.

One thing I noticed in the directions on the module's GitHub page is that they mention a class called openvpn::servers, which supposedly allows you to set up the module using class parameters in Hiera. This class does not actually exist in the current version of the module, which I quickly learned after attempting to add it in my ENC. Instead, I had to create my own module with its own set of parameters that calls the openvpn::server defined type to configure my VPN server.

First, I created a module called site_openvpn in my set of custom modules, with a params class that contains the list of parameters to be read in from Hiera.

class site_openvpn::params {

  $vpn_clients = hiera(vpn_clients)
  $vpn_lcl_net = hiera(vpn_lcl_net)
  $vpn_rem_ip = hiera(vpn_rem_ip, $::ipaddress)
  $vpn_subnet = hiera(vpn_subnet)

}

The meanings of the above parameters are as follows:

  • $vpn_clients: an array of your VPN client names, for example test_laptop.
  • $vpn_lcl_net: the subnet of your LAN, such as your home or office network.
  • $vpn_rem_ip: the IP your clients will connect to. This may differ from the IP of your VPN server if it resides behind a NAT.
  • $vpn_subnet: the network your VPN clients will connect on.

Create a firewall manifest that opens the necessary ports for OpenVPN:

class site_openvpn::firewall (
  $vpn_subnet = $site_openvpn::params::vpn_subnet,
) inherits site_openvpn::params {

  firewall { '120 allow 1194/TCP for OpenVPN':
    state  => 'NEW',
    dport  => '1194',
    proto  => 'tcp',
    action => 'accept',
  }

  firewall { '121 allow TUN connections':
    chain   => 'INPUT',
    proto   => 'all',
    iniface => 'tun+',
    action  => 'accept',
  }

  firewall { '122 forward TUN forward connections':
    chain   => 'FORWARD',
    proto   => 'all',
    iniface => 'tun+',
    action  => 'accept',
  }

  firewall { '123 tun+ to eth0':
    chain    => 'FORWARD',
    proto    => 'all',
    iniface  => 'tun+',
    outiface => 'eth0',
    state    => [ 'ESTABLISHED', 'RELATED' ],
    action   => 'accept',
  }

  firewall { '124 eth0 to tun+':
    chain    => 'FORWARD',
    proto    => 'all',
    iniface  => 'eth0',
    outiface => 'tun+',
    state    => [ 'ESTABLISHED', 'RELATED' ],
    action   => 'accept',
  }

  firewall { '125 POSTROUTING':
    table    => 'nat',
    proto    => 'all',
    chain    => 'POSTROUTING',
    source   => "${vpn_subnet}/24",
    outiface => 'eth0',
    jump     => 'MASQUERADE',
  }

}

Create an init.pp manifest that calls the luxflux/openvpn module with your parameters. Note that for dhcp-option DNS, I am using the OpenVPN server itself as a caching/forwarding DNS server, because I've run into problems with DNS traffic forwarding correctly otherwise. You can point this at another server, however.

class site_openvpn (
  $vpn_clients = $site_openvpn::params::vpn_clients,
  $vpn_lcl_net = $site_openvpn::params::vpn_lcl_net,
  $vpn_rem_ip  = $site_openvpn::params::vpn_rem_ip,
  $vpn_subnet  = $site_openvpn::params::vpn_subnet,
) inherits site_openvpn::params {

  include ::openvpn
  include site_openvpn::firewall

  ::openvpn::server { $::fqdn:
    country      => 'US',
    province     => 'NC',
    city         => 'Your City',
    organization => $::domain,
    email        => "root@${::domain}",
    server       => "$vpn_subnet 255.255.255.0",
    route        => [
      "${vpn_lcl_net} 255.255.255.0",
    ],
    push         => [
      "route ${vpn_lcl_net} 255.255.255.0",
      "redirect-gateway def1 bypass-dhcp",
      "dhcp-option DNS $::ipaddress_tun0",
    ],
    c2c          => true,
  }

  # Create the VPN client configs
  ::openvpn::client { $vpn_clients:
    server      => $::fqdn,
    remote_host => $vpn_rem_ip,
  }

}

You will also need to set net.ipv4.ip_forward to 1 in /etc/sysctl.conf, so that the traffic can be forwarded through your OpenVPN server to the local LAN. I have a separate class that manages this, but you can include the code in your site_openvpn manifest as well:

sysctl/manifests/init.pp:

class sysctl {

  file { '/etc/sysctl.conf':
    ensure => file,
  }

  exec { 'sysctl -p':
    command     => '/sbin/sysctl -p',
    refreshonly => true,
    subscribe   => File['/etc/sysctl.conf'],
  }

}

sysctl/manifests/ip_forward.pp:

class sysctl::ip_forward {

  include sysctl

  augeas { 'sysctl_ip_forward':
    context => '/files/etc/sysctl.conf',
    onlyif  => "get net.ipv4.ip_forward == '0'",
    changes => "set net.ipv4.ip_forward '1'",
    notify  => Exec['sysctl -p'],
  }

}

Finally, create the Hiera parameters for the host. I'm placing these in the host's .yaml file:

---
vpn_rem_ip: '192.168.1.67'
vpn_subnet: '192.168.100.0'
vpn_lcl_net: '192.168.1.0'
vpn_clients:
  - 'iphone6'
  - 'macbook'

Include the site_openvpn and sysctl::ip_forward classes in your node definition or ENC, and run Puppet on the OpenVPN server. This will install OpenVPN, configure the server, and generate the client configs.

Once installation is complete, the .ovpn client config files will be accessible under /etc/openvpn/<server name>/download-configs, where <server name> is the title of the openvpn::server resource (the server's FQDN in this example). You can SCP these files to your devices and import them into a Mac (for example with Tunnelblick), Windows, Linux, or mobile OpenVPN client.
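
For example, to pull down the config for the iphone6 client defined in the Hiera data above, something like the following would work (the FQDN is a hypothetical stand-in for your OpenVPN server, used both as the hostname and as the openvpn::server resource title):

scp root@vpn.example.com:/etc/openvpn/vpn.example.com/download-configs/iphone6.ovpn .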

Author's note: in my previous post, I forgot to include "proto => 'all'" in some of the firewall resources, which prevented UDP traffic such as DNS from forwarding correctly.

Introduction to My Blog

Welcome to my blog! Here I will post guides to and share my experiences with various Linux and Windows systems administration tasks. Please feel free to share any suggestions, as I realize my method may not always be the best one.