One way to get multi-master replication with high availability in MySQL is to set up a Galera cluster. As a project to learn more about Galera, I decided to see whether I could build out a cluster using automation with Puppet and the MySQL Forge module. I tested this successfully using either MariaDB 5.5 or 10.0 on CentOS 6. In this example, I configured three systems, which is the recommended minimum number of cluster members for Galera.

The first thing you will need when installing this is the MariaDB 5.5 or 10.0 Yum repository. To configure this, create a class similar to below:

class galera::repo {

  yumrepo { 'mariadb55':
    baseurl  => "http://yum.mariadb.org/5.5/centos${::operatingsystemmajrelease}-amd64/",
    descr    => 'MariaDB 5.5',
    gpgcheck => 0,
  }

}
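I tested this with MariaDB 10.0 as well; the same class works with the version number swapped out, and you can optionally turn on GPG verification using MariaDB's repository signing key. Below is a sketch assuming the standard yum.mariadb.org repository layout; if you use this resource title, remember to update the Yumrepo['mariadb55'] references in the later classes to match:

yumrepo { 'mariadb100':
  baseurl  => "http://yum.mariadb.org/10.0/centos${::operatingsystemmajrelease}-amd64/",
  descr    => 'MariaDB 10.0',
  gpgkey   => 'https://yum.mariadb.org/RPM-GPG-KEY-MariaDB',
  gpgcheck => 1,
}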

The only package that you need to have Puppet install with a specific package resource is the Galera package itself. The other packages are installed by the MySQL Forge module. I’ve created an “install” class to manage this single resource below:

class galera::install {

  package { 'galera':
    ensure  => present,
    require => Yumrepo['mariadb55'],
  }

}

You need to ensure that the puppetlabs/mysql module is available on your Puppet master. If you’re using R10K to manage your modules, just add the following to your Puppetfile:

mod 'puppetlabs/mysql'

If not, you can install it by running sudo puppet module install puppetlabs-mysql. If you're using the puppetlabs/firewall module to manage iptables on your nodes, you will also need a class that opens the TCP ports Galera requires: 3306 for MySQL client connections, 4567 for cluster replication traffic, 4444 for State Snapshot Transfers (SST), and 4568 for Incremental State Transfers (IST):

class galera::firewall {

  firewall { '102 open ports for galera cluster':
    state  => 'NEW',
    dport  => ['3306', '4444', '4567', '4568'],
    proto  => 'tcp',
    action => 'accept',
  }

}
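As with the mysql module, you can pull the firewall module in with R10K by adding it to your Puppetfile:

mod 'puppetlabs/firewall'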

Next, create a class for your parameters that are read in from Hiera:

class galera::params {

  $galera_cluster = hiera('galera_cluster')
  $galera_servers = any2array(hiera('galera_servers'))
  $sst_pw         = hiera('sst_pw')
  $root_password  = hiera('mysql_root_pw')

}
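One thing to note: the any2array() function used above comes from puppetlabs/stdlib. The mysql module already depends on it, but if you manage your modules with R10K, it's worth listing it explicitly in your Puppetfile:

mod 'puppetlabs/stdlib'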

Before going on to the next step, I should explain the purpose of the above parameters. $galera_cluster is the name assigned to the cluster itself. If you like, you can leave this out altogether and set it statically in your configuration file; however, if multiple Galera clusters will use this module, you will need this parameter. $galera_servers is an array of the servers in the cluster, excluding the IP of the server itself. So if you have three servers in your cluster with IPs of 192.168.1.50, 192.168.1.51, and 192.168.1.52, then in the host-level Hiera YAML file for 192.168.1.50, you would specify galera_servers as follows:

galera_servers:
  - '192.168.1.51'
  - '192.168.1.52'

And so forth for 192.168.1.51 and 192.168.1.52. $root_password is, of course, the MySQL root account password. Since this is stored as plain text rather than hashed, I recommend setting up EYAML (hiera-eyaml) to encrypt it; more information can be found in the hiera-eyaml project documentation. Finally, $sst_pw is the plain-text password for the SST user that Galera uses to sync data between the nodes in the cluster. Simply using the root account as the SST user would also work, but for security purposes I recommend a separate SST account with only the permissions required to perform replication.
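Putting it all together, the host-level Hiera YAML file for 192.168.1.50 might look like the following. The cluster name and passwords here are example values; in practice, the passwords would be EYAML-encrypted:

galera_cluster: 'my_galera_cluster'
galera_servers:
  - '192.168.1.51'
  - '192.168.1.52'
sst_pw: 'examplesstpassword'
mysql_root_pw: 'examplerootpassword'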

Next, create a configuration class that sets up the MariaDB client and server, creates the required account, and configures the Galera cluster:

class galera::config (
  $galera_cluster = $galera::params::galera_cluster,
  $galera_servers = $galera::params::galera_servers,
  $sst_pw         = $galera::params::sst_pw,
  $root_password  = $galera::params::root_password,
) inherits galera::params {

  class { '::mysql::client':
    package_name => 'MariaDB-client',
    require      => Yumrepo['mariadb55'],
  }

  class { '::mysql::server':
    package_name            => 'MariaDB-Galera-server',
    remove_default_accounts => true,
    service_enabled         => true,
    service_manage          => true,
    service_name            => 'mysql',
    root_password           => $root_password,
    override_options        => {
      'mysqld' => {
        'bind-address'           => '0.0.0.0',
        'pid-file'               => '/var/lib/mysql/mysqld.pid',
        'binlog-format'          => 'ROW',
        'default-storage-engine' => 'innodb',
      },
    },
    require                 => Yumrepo['mariadb55'],
  }

  mysql_user { 'sst@%':
    ensure        => present,
    password_hash => mysql_password($sst_pw),
  }

  mysql_grant { 'sst@%/*.*':
    ensure     => present,
    options    => ['GRANT'],
    privileges => [ 'RELOAD', 'LOCK TABLES', 'REPLICATION CLIENT' ],
    table      => '*.*',
    user       => 'sst@%',
  }

  file { '/etc/my.cnf.d/server.cnf':
    ensure  => file,
    content => template('galera/server.cnf.erb'),
    owner   => 'mysql',
    group   => 'mysql',
    mode    => '0640',
    require => [ Package['mysql-server'], Package['galera'], ],
  }

}

First, this class reads in the necessary parameters from the galera::params class. It then calls the MySQL Forge module to install and configure the MariaDB client and server. The SST user is added and granted the minimum privileges required. Finally, the template for /etc/my.cnf.d/server.cnf, which contains the MariaDB/Galera-specific settings, is pushed out with the appropriate permissions.

If you have read the code closely, you might notice that the file resource is missing a "notify => Service['mysqld']" that would refresh the service each time the file changes. This is deliberate: to start a Galera cluster, you need to bootstrap the first server, and the notify would produce an error on that server because it calls /etc/init.d/mysql restart, which errors out before the cluster exists. For the purposes of this example, I've started the cluster manually on each server: on the first system, after running Puppet, run sudo service mysql bootstrap; on each subsequent system, after running Puppet, run sudo service mysql restart.

However, if you're super-obsessed with making the whole setup automated, you can add code that does this. First, add the following to your params class:

  $bootstrap = hiera('mysql_bootstrap', false)

  $init_command = $bootstrap ? {
    true  => 'service mysql stop && service mysql bootstrap',
    false => 'service mysql restart',
  }

For the host hiera YAML file for the first server in the cluster, add this parameter: mysql_bootstrap: true. Then add the below to your configuration class:

# Additional parameter at the top
  $init_command   = $galera::params::init_command,
  ...
  exec { 'init_galera':
    command => $init_command,
    unless  => 'test -f /var/lib/mysql/galera.cache',
    path    => '/bin:/sbin:/usr/bin:/usr/sbin',
    require => [ Service['mysqld'], Mysql_user['sst@%'], Mysql_grant['sst@%/*.*'], File['/etc/my.cnf.d/server.cnf'], ],
  }

Again, I don’t recommend this approach for building a production system, but I wanted to show that it’s possible.

Next, create an ERB template for /etc/my.cnf.d/server.cnf. I've placed this particular template at galera/templates/server.cnf.erb, matching the template() call above.

[mariadb]
query_cache_size=0
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://<%= @galera_servers.join(',') %>
wsrep_cluster_name='<%= @galera_cluster %>'
wsrep_node_address='<%= @ipaddress %>'
wsrep_node_name='<%= @hostname %>'
wsrep_sst_method=rsync
wsrep_sst_auth=sst:<%= @sst_pw %>
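For reference, on the 192.168.1.50 node from the earlier example (with the hypothetical hostname mariadb1 and the example Hiera values above), the rendered /etc/my.cnf.d/server.cnf would come out like this:

[mariadb]
query_cache_size=0
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.1.51,192.168.1.52
wsrep_cluster_name='my_galera_cluster'
wsrep_node_address='192.168.1.50'
wsrep_node_name='mariadb1'
wsrep_sst_method=rsync
wsrep_sst_auth=sst:examplesstpassword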

Finally, create a base manifest that includes all of the above classes:

class galera {

  include galera::config
  include galera::firewall
  include galera::install
  include galera::repo

}
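With all of the classes in place, including the module in a node definition is a one-liner (hypothetical node names):

node 'mariadb1.example.com', 'mariadb2.example.com', 'mariadb3.example.com' {
  include galera
}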

You can now commit the code to the repository that contains your Puppet code, include the class in each node's manifest, and run Puppet on each of the Galera nodes. If you've chosen to manage the service manually, you don't need to run Puppet on the nodes in any particular order. To bootstrap the cluster, execute sudo service mysql bootstrap on any one of the nodes; on all remaining servers, execute sudo service mysql restart. The commands below should then return results similar to the following on any of the servers:

matthew@mariadb1:~$ sudo mysql -e "show status like 'wsrep_connected'\G"
*************************** 1. row ***************************
Variable_name: wsrep_connected
        Value: ON
matthew@mariadb1:~$ sudo mysql -e "show status like 'wsrep_incoming_addresses'\G"
*************************** 1. row ***************************
Variable_name: wsrep_incoming_addresses
        Value: 192.168.87.53:3306,192.168.87.56:3306,192.168.87.52:3306
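You can also confirm that all three nodes have joined the cluster and run a quick replication smoke test; replication_test below is just an example database name:

matthew@mariadb1:~$ sudo mysql -e "show status like 'wsrep_cluster_size'\G"
*************************** 1. row ***************************
Variable_name: wsrep_cluster_size
        Value: 3
matthew@mariadb1:~$ sudo mysql -e "create database replication_test"
matthew@mariadb2:~$ sudo mysql -e "show databases like 'replication_test'"

The database created on mariadb1 should appear in the output on mariadb2 and mariadb3 almost immediately.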

Your Galera cluster is now ready to accept connections, and as shown above, data written to any one of the servers replicates to the rest of the cluster. In future blog posts, I will explore different methods for load balancing a cluster.

Author’s note: in a previous revision of this post, I had separate parameters for the SST user’s password and password hash. Now, having learned about the mysql_password() function in the puppetlabs/mysql Forge module, I’ve removed the password hash parameter to simplify the directions somewhat.