Prepare a Jenkins Docker Build Node with Ansible

Recently I have started to take more time to learn Ansible, building role-based projects similar to what I’ve always done with Puppet, as opposed to simple monolithic playbooks. I still believe Puppet is superior to Ansible when a host has a long list of items to be managed, whereas Ansible excels for more narrowly-scoped tasks such as pushing some files out and restarting a service. However, I’m sure that most would disagree with me, and in my lab I have chosen to use Ansible for most tasks in keeping with the latest trend. For this case, I created a simple Ansible project to set up a CentOS/EL8 Jenkins build node running Docker. The directory structure of this project is below:

├── inventory
├── jenkins_build.yml
└── roles
    ├── docker
    │   └── tasks
    │       ├── install.yml
    │       ├── main.yml
    │       └── service.yml
    └── jenkins_node
        ├── files
        │   └── jenkins.pub
        └── tasks
            ├── install.yml
            ├── main.yml
            └── user.yml

First, I created a simple inventory file. The inventory just has one host for now with no variables.

[jenkins_build]
jenkins-node2

I then created a simple playbook, jenkins_build.yml, that includes the two roles I need.

---
- hosts: jenkins_build
  roles:
    - docker
    - jenkins_node

Next, I created the roles with mkdir -p roles/docker/tasks and mkdir -p roles/jenkins_node/{tasks,files}. This project is extremely simple; it does not use templates, variables, handlers, etc. and probably could have just used a monolithic playbook for brevity’s sake. However, I decided to use the full directory structure so that the roles could be reused later.

First, I’ll go over the Docker role. All it does is install the docker-ce package from Docker’s repository and ensure that the service is running. The install.yml file also needs to import the Docker GPG key:

---
# install.yml
  - rpm_key: state=present key=https://download.docker.com/linux/centos/gpg

  - yum_repository:
      name: docker-ce-stable
      description: Docker CE Stable - $basearch
      baseurl: https://download.docker.com/linux/centos/$releasever/$basearch/stable
      gpgcheck: yes
      gpgkey: https://download.docker.com/linux/centos/gpg

  - yum: name=docker-ce state=installed

---
# service.yml
  - service: name=docker state=started enabled=yes

main.yml includes both of the above:

  - include: install.yml
  - include: service.yml
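
As an aside, the bare include directive is deprecated in newer Ansible releases; on Ansible 2.4 or later the equivalent main.yml would use import_tasks instead:

  - import_tasks: install.yml
  - import_tasks: service.yml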

Next, the Jenkins node role: this role installs the required packages and creates the jenkins user. First, you will need to generate a password hash for the jenkins user. To do so, execute the Python one-liner below:

python -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass("Confirm: ")) else exit())'
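
If the default python on your workstation is Python 2 (where crypt.crypt() requires an explicit salt), an alternative is Ansible's own password_hash filter run ad hoc against localhost; this is just a sketch and assumes a usable crypt library or passlib on the control node:

ansible localhost -m debug -a "msg={{ 'changeme' | password_hash('sha512') }}"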

Then include the resulting hash in your user.yml file:

---
  - user:
      name: jenkins
      state: present
      password: 'hash'
      group: users
      groups:
        - docker
  - ansible.posix.authorized_key:
      user: jenkins
      key: "{{ lookup('file', 'jenkins.pub') }}"
      state: present
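
If you do not already have a key pair for the Jenkins controller, one way to generate it is below; the key type, comment, and file names are only an example:

# Generate a key pair; the private key is added to Jenkins as an SSH
# credential, and jenkins.pub is the public half used by this role.
ssh-keygen -t ed25519 -C "jenkins" -f jenkins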

If you want your Jenkins server to connect to the node using an SSH key, you will need to place the public key in the role’s files directory. I created this file as jenkins.pub. Then create the .yml files that install the necessary packages and include the tasks.

# install.yml
---
  - yum:
      name:
        - git
        - java-1.8.0-openjdk
      state: installed

# main.yml
---
  - include: install.yml
  - include: user.yml

Finally, run the playbook against the Jenkins build node from a host with Ansible installed. You might want to run it first with the -C option to ensure that it does what you expect it to do:

ansible-playbook -Kkb -i inventory -D jenkins_build.yml
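
For a dry run, the same command with check mode enabled (changes are reported but not applied) looks like this:

ansible-playbook -Kkb -C -D -i inventory jenkins_build.yml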

If this was successful, you should then be able to add the node to Jenkins, located at Dashboard > Manage Jenkins > Manage Nodes and Clouds > New Node:

Setting up a Jenkins build node. The Labels field is optional, but provides a way to restrict which nodes a job can run on.

This concludes my blog post. This is a rather simple Ansible task, but demonstrates a use case for it, especially if you are setting up a bunch of build nodes for Jenkins. In my next post I will show how I configured a Jenkins job to build an RPM in a Docker container on the node I added here.

A few notes on building Hyper-V systems out of Foreman

I use Foreman for provisioning systems in both my lab and at work. For the most part I’ve had success over the years kickstarting CentOS/Enterprise Linux systems from Foreman using both PXE booting and the lightweight iPXE ISO, on various generations of HP servers, custom-built desktops, and the following virtual machine types: VirtualBox, KVM, Xen, and VMware. Microsoft Hyper-V has given me a little more difficulty, but I have managed to get it to work with both Generation 1 and Generation 2 VM types. In this post I will share some of my tips for getting this to work. This is not meant to be an in-depth guide; it assumes that you have a working Foreman installation running the latest release.

First of all, PXE booting does in fact work with Hyper-V. You will, however, need to choose the Generation 1 VM type and ensure that the NIC is a “Legacy Network Adapter.” It’s quite probable that a legacy NIC provides worse performance than a standard NIC, similar to how an E1000 NIC is inferior to a VMXNET3 NIC in VMware. But for lab purposes it’s probably fine.

Generation 1 VM with Legacy NIC settings

It’s pretty clear that Microsoft intended the Generation 1 VM to be as “legacy” as possible. I find it amusing that it even emulates COM ports and a diskette drive. In any case, this is the only configuration from which I’ve been able to PXE boot a system from Foreman. For all other scenarios, I had to use the full boot ISO. The lightweight “host” ISO has not worked in my experience with Hyper-V (but works fine with other virtualization implementations of course).

Full disclaimer: I’ve only tested this with the current version of Foreman (as of this writing, 2.3). I don’t know if you can kickstart Hyper-V systems off the earlier versions of Foreman, though EFI boot disk functionality was added in 2.1 (EFI is required for a Generation 2 VM). When my organization was on Foreman 1.18, the only method that seemed to work for my Hyper-V administrator was to use a custom ISO I had generated from the Enterprise Linux 7 boot disk. However, with version 2.3 I can build both generation 1 and 2 VMs using the full boot disk.

The Foreman Boot Disk drop-down. In my experience, only the full host image works.

Before creating your host, you will need to have the following template types associated with your host’s operating system version: Provisioning (your Kickstart file), PXELinux, iPXE, and PXEGrub2. This has always been annoying to do in Foreman, because you have to first make the template available to the OS under Provisioning Templates > template > Associate, then enable it at Operating Systems > OS > Templates.

In addition, if you are going to build a Generation 2 VM, you will need to include an EFI partition in your partition table.

A Foreman partition table for EL7 or greater, with an EFI partition.
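
For reference, a minimal kickstart partitioning layout with an EFI System Partition might look like the following; the sizes and mount points are only an example, not the exact table shown above:

zerombr
clearpart --all --initlabel
part /boot/efi --fstype=efi --size=512
part /boot --fstype=xfs --size=1024
part / --fstype=xfs --size=1 --grow
part swap --size=2048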

Once these prerequisites have been met, you should be ready to create your VM. When creating a Generation 2 VM in Hyper-V, make sure to disable Secure Boot. Otherwise, the default settings should be sufficient.

Hyper-V Generation 2 VM settings for a Linux VM

Over in Foreman, the steps for creating a host are mostly the same as with other virtualization types. Under the Operating System tab, for the PXE loader choose PXELinux BIOS for a Generation 1 VM or Grub2 UEFI for a Generation 2 VM.

Fill out all the remaining required fields and click Submit at the bottom of the page. If the host saves correctly, you should then be able to download the full ISO, mount it to your Hyper-V virtual machine, and Kickstart a VM from it.

I hope you’ve found these tips useful if you’ve encountered a need to build Hyper-V VMs out of Foreman. I should also mention that I performed all my testing on a Windows 10 system and not with the server implementations of Hyper-V.

Build an RPM of the Mongo C Driver for CentOS 7

Greetings! It has been a while since I’ve posted anything in my systems administration blog. In this post I will describe my process for building a newer version of the Mongo C driver for Enterprise Linux/CentOS 7. These steps were also performed on an Enterprise Linux 6 system, but this post will focus solely on EL7.

Last year I was asked by a developer to obtain a newer version of the Mongo C Driver for CentOS 7, as the version currently available in the EPEL repository, 1.3.6, does not support the most recent versions of MongoDB. I could not find a guide on the Internet for building an RPM of the newer version, so I had to piece together a solution on my own. Originally I built the package on a build virtual machine; however, I later chose to use Docker, as the package requires several dependencies and I did not want to break anything on the build server. This example uses Docker, but it should work on a standard CentOS system as well.

To begin, I obtained a Fedora spec file for the latest mongo-c-driver. At the time I used the FC34 mongo-c-driver-1.17.1-1 SRPM from Remi’s RPM repository and modified it to make it work on CentOS 6 and 7. All credit goes to Remi Collet; I merely adapted his spec file for my own needs. The example spec file is below:

mongo-c-driver.spec
# remirepo spec file for mongo-c-driver
#
# Copyright (c) 2015-2020 Remi Collet
# License: CC-BY-SA
# http://creativecommons.org/licenses/by-sa/4.0/
#
%global gh_owner     mongodb
%global gh_project   mongo-c-driver
%global libname      libmongoc
%global libver       1.0
%global up_version   1.17.4
#global up_prever    rc0
# disabled as require a MongoDB server
%bcond_with          tests

Name:      mongo-c-driver
Summary:   Client library written in C for MongoDB
Version:   %{up_version}%{?up_prever:~%{up_prever}}
Release:   1%{?dist}
# See THIRD_PARTY_NOTICES
License:   ASL 2.0 and ISC and MIT and zlib
URL:       https://github.com/%{gh_owner}/%{gh_project}

Source0:   https://github.com/%{gh_owner}/%{gh_project}/releases/download/%{up_version}%{?up_prever:-%{up_prever}}/%{gh_project}-%{up_version}%{?up_prever:-%{up_prever}}.tar.gz

BuildRequires: cmake3
BuildRequires: gcc
# pkg-config may pull compat-openssl10
BuildRequires: openssl-devel
%if %{with tests}
BuildRequires: mongodb-server
BuildRequires: openssl
%endif
# From man pages

Requires:   %{name}-libs%{?_isa} = %{version}-%{release}
Requires:   libmongocrypt
# Sub package removed
Obsoletes:  %{name}-tools         < 1.3.0
Provides:   %{name}-tools         = %{version}
Provides:   %{name}-tools%{?_isa} = %{version}

%description
%{name} is a client library written in C for MongoDB.

%package libs
Summary:    Shared libraries for %{name}

%description libs
This package contains the shared libraries for %{name}.

%package devel
Summary:    Header files and development libraries for %{name}
Requires:   %{name}%{?_isa} = %{version}-%{release}
Requires:   pkgconfig
Requires:   pkgconfig(libzstd)
Requires:   libmongocrypt

%description devel
This package contains the header files and development libraries
for %{name}.

Documentation: http://mongoc.org/libmongoc/%{version}/

%package -n libbson
Summary:    Building, parsing, and iterating BSON documents
# Modified (with bson allocator and some warning fixes and huge indentation
# refactoring) jsonsl is bundled.
# jsonsl upstream likes the copylib approach and does not plan a release.
Provides:   bundled(jsonsl)

%description -n libbson
This is a library providing useful routines related to building, parsing,
and iterating BSON documents.

%package -n libbson-devel
Summary:    Development files for %{name}
Requires:   libbson%{?_isa} = %{version}-%{release}
Requires:   pkgconfig

%description -n libbson-devel
This package contains libraries and header files needed for developing
applications that use %{name}.

Documentation: http://mongoc.org/libbson/%{version}/

%prep
%setup -q -n %{gh_project}-%{up_version}%{?up_prever:-%{up_prever}}

%build
%cmake3 \
    -DENABLE_BSON:STRING=ON \
    -DENABLE_MONGOC:BOOL=ON \
    -DENABLE_SHM_COUNTERS:BOOL=ON \
    -DENABLE_SSL:STRING=OPENSSL \
    -DENABLE_SASL:STRING=CYRUS \
    -DENABLE_MONGODB_AWS_AUTH:STRING=ON \
    -DENABLE_ICU:STRING=ON \
    -DENABLE_AUTOMATIC_INIT_AND_CLEANUP:BOOL=OFF \
    -DENABLE_CRYPTO_SYSTEM_PROFILE:BOOL=ON \
    -DENABLE_MAN_PAGES:BOOL=ON \
    -DENABLE_STATIC:STRING=OFF \
%if %{with tests}
    -DENABLE_TESTS:BOOL=ON \
%else
    -DENABLE_TESTS:BOOL=OFF \
%endif
    -DENABLE_EXAMPLES:BOOL=OFF \
    -DENABLE_UNINSTALL:BOOL=OFF \
    -DENABLE_CLIENT_SIDE_ENCRYPTION:BOOL=ON \
    -S .

%if 0%{?cmake_build:1}
%cmake_build
%else
make %{?_smp_mflags}
%endif

%install
%if 0%{?cmake_install:1}
%cmake_install
%else
make install DESTDIR=%{buildroot}
%endif

: Static library
rm -f  %{buildroot}%{_libdir}/*.a
rm -rf %{buildroot}%{_libdir}/cmake/*static*
rm -rf %{buildroot}%{_libdir}/pkgconfig/*static*
: Documentation
rm -rf %{buildroot}%{_datadir}/%{name}

%check
ret=0

%if %{with tests}
: Run a server
mkdir dbtest
mongod \
  --journal \
  --ipv6 \
  --unixSocketPrefix /tmp \
  --logpath     $PWD/server.log \
  --pidfilepath $PWD/server.pid \
  --dbpath      $PWD/dbtest \
  --fork

: Run the test suite
export MONGOC_TEST_OFFLINE=on
export MONGOC_TEST_SKIP_MOCK=on
#export MONGOC_TEST_SKIP_SLOW=on

make check || ret=1

: Cleanup
[ -s server.pid ] && kill $(cat server.pid)
%endif

if grep -r static %{buildroot}%{_libdir}/cmake; then
  : cmake configuration file contain reference to static library
  ret=1
fi
exit $ret

%files
%{_bindir}/mongoc-stat

%files libs
%{!?_licensedir:%global license %%doc}
%license COPYING
%license THIRD_PARTY_NOTICES
%{_libdir}/%{libname}-%{libver}.so.*

%files devel
%doc src/%{libname}/examples
%doc NEWS
%{_includedir}/%{libname}-%{libver}
%{_libdir}/%{libname}-%{libver}.so
%{_libdir}/pkgconfig/%{libname}-*.pc
%{_libdir}/cmake/%{libname}-%{libver}
%{_libdir}/cmake/mongoc-%{libver}
%{_mandir}/man3/mongoc*

%files -n libbson
%license COPYING
%license THIRD_PARTY_NOTICES
%{_libdir}/libbson*.so.*

%files -n libbson-devel
%doc src/libbson/examples
%doc src/libbson/NEWS
%{_includedir}/libbson-%{libver}
%{_libdir}/libbson*.so
%{_libdir}/cmake/libbson-%{libver}
%{_libdir}/cmake/bson-%{libver}
%{_libdir}/pkgconfig/libbson-*.pc
%{_mandir}/man3/bson*
  

I placed this spec file in a work directory, then downloaded the mongo-c-driver tarball from https://github.com/mongodb/mongo-c-driver/releases into the same work directory. I also created two Yum repo files for installing some of the build dependencies: one for libmongocrypt and one for MongoDB (I chose to install version 4.0):

[libmongocrypt]
name=libmongocrypt repository
baseurl=https://libmongocrypt.s3.amazonaws.com/yum/redhat/$releasever/libmongocrypt/1.0/x86_64
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/libmongocrypt.asc

[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
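
For completeness, fetching the source tarball mentioned above could look like this; the work directory name and the exact version are assumptions based on the spec file and the volume path used later:

cd ~/mongo-c-driver
curl -LO https://github.com/mongodb/mongo-c-driver/releases/download/1.17.4/mongo-c-driver-1.17.4.tar.gz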

Next, I created a Dockerfile in the work directory that installs the required build dependencies and copies in the spec file and source tarball.

FROM centos:7
MAINTAINER Matt Ridpath matt@example.com
COPY libmongocrypt.repo /etc/yum.repos.d
COPY mongodb.repo /etc/yum.repos.d
RUN yum install -y epel-release
RUN yum install -y rpm-build icu libicu-devel python-sphinx python2-sphinx snappy cmake3 libzstd-devel libmongocrypt mongodb-org-server gcc openssl-devel cyrus-sasl-devel
WORKDIR /root
RUN mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
COPY mongo-c-driver-*.tar.gz rpmbuild/SOURCES
COPY mongo-c-driver.spec rpmbuild/SPECS

Now the container image can be built:

sudo docker build --pull -t="matt/mongo-c-driver" .

Next, I created a directory for the container to mount as a pass-through volume, so that it has a location to copy the finished RPMs into. I chose to create an “artifacts” directory within my work directory, as this task would later be turned into a Jenkins job. You can choose another location, however. The option follows the format of -v <local_dir>:<container_dir>. Once the directory was created, I ran the container to build the RPMs:

sudo docker run --rm -v $HOME/mongo-c-driver/artifacts:/artifacts matt/mongo-c-driver /bin/bash -c "cd rpmbuild/SPECS && rpmbuild -ba mongo-c-driver.spec && cp ../RPMS/x86_64/*.rpm /artifacts"

If the build is successful, the RPMs should be dropped into the local directory specified with the -v option and be available for installation.

matt@docker:~$ ls mongo-c-driver/artifacts/
libbson-1.17.4-1.el7.x86_64.rpm
libbson-devel-1.17.4-1.el7.x86_64.rpm
mongo-c-driver-1.17.4-1.el7.x86_64.rpm
mongo-c-driver-debuginfo-1.17.4-1.el7.x86_64.rpm
mongo-c-driver-devel-1.17.4-1.el7.x86_64.rpm
mongo-c-driver-libs-1.17.4-1.el7.x86_64.rpm
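
As a quick usage example, the packages can then be installed on a target EL7 host straight from that directory; this assumes the libmongocrypt repo shown earlier (and EPEL, for the libzstd dependency of the devel package) is also configured on the target so that dependencies resolve:

cd mongo-c-driver/artifacts
sudo yum localinstall ./*.rpm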

In a future post, I will show how I turned this task into a Jenkins job, so that one can build these packages regularly as new versions are released.

Finally, a disclaimer: I’m not an expert on Docker or building RPMs. Someone else more than likely has a better solution for achieving this end result. Rather, I would merely like to share how I arrived at a solution for building these packages and hope it will save someone else the trial and error I went through.

Credits:

1) Remi Collet for providing the spec file to modify for this task: https://rpms.remirepo.net/.
2) mongo-c-driver contributors and authors for providing the source tarball: https://github.com/mongodb/mongo-c-driver.
3) This guide for providing an example of how to build RPMs with Docker: http://saule1508.github.io/build-rpm-with-docker/.