Keep talking

In the previous post we configured three Devuan servers from scratch using ansible, adding the pgdg repository to the apt sources and installing the PostgreSQL binaries.

This tutorial revisits the apt role’s configuration and introduces a new role for configuring the postgres operating system user for passwordless ssh connections between the servers.

A declarative approach

When we configured the apt role we hardcoded the postgres operating system user in the role. This approach locks us into the assumption that we’ll always use the default user for running the postgres process, which may cause problems if a requirement forces us to diverge from the default setup (e.g. PCI compliance).

Storing specific configuration parameters in variables is a better approach which gives great flexibility.

To declare the variables that we want available for any role we can use the special file all in the group_vars folder. The variables defined in this file have the lowest precedence and can be overridden by the same variables defined in another group_vars file or in a host_vars configuration.

We set up the all file with five variables.

pg_home_dir: "/home/postgres"
pg_shell: "/bin/bash"
pg_osuser: "postgres"
pg_osgroup: "postgres"
postgresql_common_include: "/etc/postgresql-common/createcluster.d"

The variable pg_home_dir is used to configure the PostgreSQL home directory which diverges from the package’s default /var/lib/postgresql.

We assign an explicit shell to our user with the variable pg_shell, in this case /bin/bash.

The two variables pg_osuser and pg_osgroup define the operating system user and primary group that will run the PostgreSQL database processes.

The variable postgresql_common_include is used to define a custom configuration directory for the postgresql-common options.
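As an illustration of the precedence rules mentioned earlier, any of these defaults can be overridden for a single machine with a host_vars file. This is a hypothetical example: the hostname and the overridden path are assumptions, not part of the tutorial’s setup.

```yaml
# host_vars/backupsrv (hypothetical override)
# Overrides the group_vars/all default for this host only.
pg_home_dir: "/srv/postgres"
```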

We are adding some new tasks to the apt role.

We’ll first create the group pg_osgroup with the group module and then, using the ansible user module, we’ll create a user with the system attribute set to yes, because its scope is to manage background processes. With system: yes the operating system assigns the user a UID from a reserved range.

The variables pg_osuser, pg_home_dir and pg_shell are used to set the corresponding attributes for our user.

- name: Ensure group {{ pg_osgroup }} exists
  group:
    name: "{{ pg_osgroup }}"
    state: present

- name: create the {{ pg_osuser }} user and the home directory
  user:
    name: "{{ pg_osuser }}"
    group: "{{ pg_osgroup }}"
    create_home: yes
    move_home: yes
    home: "{{ pg_home_dir }}"
    shell: "{{ pg_shell }}"
    system: yes
    state: present

Another adjustment is required because the versioned postgresql packages create a default cluster named main during the first install. However, it’s better to have full control when configuring the clusters. In our example we disable the automatic creation by defining a variable create_main_cluster for the group apt and adding a configuration file in the postgresql_common_include directory.

The apt role is changed to have two new tasks executed between the installation of the common and the versioned packages.

- name: create include directory for createcluster
  file:
    path: "{{ postgresql_common_include }}"
    owner: "{{ pg_osuser }}"
    group: "{{ pg_osgroup }}"
    state: directory
    mode: 0744

- name: add create main_cluster line in custom configuration using the variable create_main_cluster
  lineinfile:
    path: "{{ postgresql_common_include }}/main_cluster.conf"
    line: "create_main_cluster = {{ create_main_cluster }}"
    create: yes
    owner: "{{ pg_osuser }}"
    group: "{{ pg_osgroup }}"
    mode: 0744

The first task ensures the include directory is present and owned by the postgresql os user and group. The second task creates a file in this include directory with the line create_main_cluster = set to the value of the variable create_main_cluster defined in the apt group_vars file. In our example we set the variable create_main_cluster: No in group_vars/apt, which disables the creation of the main cluster at the first install.
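For reference, the group_vars/apt file can be as small as this. This is a sketch: only the create_main_cluster entry is required by the tasks above, and the value is taken from the text.

```yaml
# group_vars/apt
# Disable the automatic creation of the main cluster at first install.
create_main_cluster: No
```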

The SSH role

In order to set up the ssh configuration we need to declare some variables in the file group_vars/ssh.

The variable key_dest_dir sets where to save the public keys when fetching the files from the servers for the public key exchange.

With the remaining variables we set the bits, the key type and the file name for the ssh key.

key_dest_dir: "keys/"
ssh_key_bits: "2048"
ssh_key_type: "rsa"
ssh_key_file: "id_rsa"

The ssh role comes with several tasks. The first task uses the user module to create the ssh key pair in the pg_osuser’s home directory.

- name: create the ssh key pairs for the postgresql user
  user:
    name: "{{ pg_osuser }}"
    generate_ssh_key: yes
    ssh_key_bits: "{{ ssh_key_bits }}"
    ssh_key_type: "{{ ssh_key_type }}"
    ssh_key_file: "{{ pg_home_dir }}/.ssh/{{ ssh_key_file }}"

Then we fetch the generated public keys into the key_dest_dir folder.

Using the authorized_key module, iterating over the hosts present in groups.ssh, we put the public keys in the authorized_keys file on each server belonging to the group.

The host’s public key file is loaded using the file lookup plugin. The fetched public key’s path is built from the item’s hostvars: inventory_hostname, pg_home_dir and ssh_key_file.

- name: Fetch the keys in the keys directory
  fetch:
    src: "{{ pg_home_dir }}/.ssh/{{ ssh_key_file }}.pub"
    dest: "{{ key_dest_dir }}"

- name: setup the authorized_keys file on the servers with the public keys
  authorized_key:
    user: "{{ pg_osuser }}"
    state: present
    key: "{{ lookup('file', key_dest_dir + '/' + hostvars[item]['inventory_hostname'] + '/' + hostvars[item]['pg_home_dir'] + '/.ssh/' + hostvars[item]['ssh_key_file'] + '.pub') }}"
  with_items: "{{ groups.ssh }}"

At this point we can log into any other server using the pg_osuser. However, when connecting for the first time ssh asks whether we want to add the server signature to known_hosts, and this interactive behaviour may be a problem when working in a non-interactive way.

In order to set things up properly we shall configure known_hosts with the servers’ signatures, and the known_hosts ansible module may be an option.

However the module page states: “If you have a very large number of host keys to manage, you will find the template module more useful.” In our example we are using just three servers, but we want to build something that works for an arbitrary number of machines.

Therefore we’ll use the blockinfile ansible module instead.

This module is particularly handy as we want to manage specific blocks inside the file known_hosts, one block per each server signature.

For scanning and adding the keys we’ll use the lookup plugin pipe combined with the program ssh-keyscan iterating over the groups.ssh hosts.

- name: populate known_hosts for the servers
  blockinfile:
    dest: "{{ pg_home_dir }}/.ssh/known_hosts"
    create: yes
    state: present
    owner: "{{ pg_osuser }}"
    block: "{{ lookup('pipe', 'ssh-keyscan -T 10 ' ~ hostvars[item]['inventory_hostname']) }}"
    marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item }}"
  with_items: "{{ groups.ssh }}"

The known_hosts contents will then look something like this.

backupsrv ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDaPCBsisffhrVCQkpjyv3Gj4XX8h9G0nakf7P5VqEkGv7vawzUS9aC1x3vZNB9ItC1Z0ulzwvbLyujal0Iwk0ZfAM2cUiuJTPnPJqBfc8MXLGB9mGPpKJaayY1XphJySrjL+8NMhYqx5zwCmxDxnYC58t1pi8xNe6nDm3hSf4L+S4K/zpwcfiheYEJ4Bk/5e+Ry4tBdMQDgf6Ayd/ObColIxfyd1/7yKss5OB78UIt7Rcwv+PbInkUHvvZvoeOzDAl2+aIL8+oLaQTL5IAX1iPoxKXtyyWxTyN8HtIaV8qy6KiIOyP6kRrQFI15n7m3jML/mBH6u2JJJB7w6QuYKV1
backupsrv ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFhNx0TJhLYwGo8I1t3DUaNEmqlfBbcSRHTJZyZ+qpN6XAMo2W1+L8S0oo47mawnaNOE3lMO2MNQrKzcSqSiB14=
backupsrv ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICdMobjRM8jVldlV1tJwNaZIOeWilM0svt2eAunbm8Ww

The last task in our ssh role ships the ssh configuration file to every server in the group ssh.

This task uses the template ansible module in order to set the correct configuration for each host.

- name: Ship the ssh configuration file
  template:
    src: config.j2
    dest: "{{ pg_home_dir }}/.ssh/config"
    owner: "{{ pg_osuser }}"
    group: "{{ pg_osgroup }}"
    mode: 0600

The template itself is very simple: a loop over every host in the group ssh, with the values assigned from the keys inventory_hostname, pg_osuser and ssh_key_file retrieved from the server’s hostvars.

{% for host in groups['ssh'] %}
Host {{ hostvars[host]['inventory_hostname'] }}
  User {{ hostvars[host]['pg_osuser'] }}
  IdentityFile ~/.ssh/{{ hostvars[host]['ssh_key_file'] }}
{% endfor %}

The playbook in action

This asciinema shows the playbook run and the final result when trying to ssh from one server to another.

The rollback playbook

The rollback adds two new tasks to the apt rollback file and a new task file for ssh. A new variable rbk_ssh activates the rollback for ssh.

The apt rollback now removes the user pg_osuser and the group pg_osgroup.

The rollback task for ssh removes the directory .ssh for the pg_osuser.
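Based on the description above, the new rollback tasks can be sketched like this. This is a sketch, not the actual repository code: the file layout and the when condition are assumptions.

```yaml
# roles/apt rollback additions (sketch)
- name: remove the {{ pg_osuser }} user along with the home directory
  user:
    name: "{{ pg_osuser }}"
    state: absent
    remove: yes

- name: remove the {{ pg_osgroup }} group
  group:
    name: "{{ pg_osgroup }}"
    state: absent

# roles/ssh rollback task (sketch), activated by the rbk_ssh variable
- name: remove the .ssh directory for {{ pg_osuser }}
  file:
    path: "{{ pg_home_dir }}/.ssh"
    state: absent
  when: rbk_ssh
```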

Wrap up

This tutorial shows how to configure ansible in a declarative way, giving us great flexibility. We have also taken a step forward towards having a set of three servers capable of talking to each other via ssh without a password.

In the next tutorial we’ll see how to complete the configuration of the postgres user, adding the missing groups required for accessing specific directories, and how to create, configure and start the database clusters.

The branch with the example explained in the tutorial is available here.

Thanks for reading.

Winter seascape by Federico Campoli