Cluster setup

Requirements

The following requirements must be met before a CipherMail cluster can be set up:

  1. At least three servers (nodes) are set up, with the network configured and the initial setup wizard completed on each.

  2. Every node in the cluster is configured with a fully qualified hostname.

  3. Every node can look up the IP address of any other node.

  4. Every node can access any other node on TCP ports 22, 4444, 4567 and 4568.

To fulfill requirement 3, the preferred option is to add the hostnames to DNS. If this is not feasible, add hostname to IP address mappings to the hosts file on every node.
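Requirement 4 can be verified from the command line before continuing. A hedged sketch using bash's built-in /dev/tcp (the node hostnames are examples; substitute your own):

```shell
# Hedged sketch for requirement 4: check whether the cluster TCP ports on
# another node are reachable, using bash's built-in /dev/tcp redirection.
check_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

# On a real node, loop over every other node, for example:
#   for port in 22 4444 4567 4568; do check_port node2.example.com "$port"; done
check_port 127.0.0.1 1   # demo: port 1 on localhost is almost certainly closed
```

Run the loop from each node against every other node; all four ports should report open.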

Hostname mapping

Important

Only use explicit host mapping if the hostnames cannot be added to DNS.

On every node, do the following:

  1. Open the hosts page (Admin ‣ Network ‣ Hosts).

  2. For every node, add the IP to hostname mapping.

    Example

    +-----------------+---------------------+
    | IP address      | Hostnames & Aliases |
    +-----------------+---------------------+
    | 2001:db8:123::1 | node1.example.com   |
    | 2001:db8:123::2 | node2.example.com   |
    | 2001:db8:123::3 | node3.example.com   |
    +-----------------+---------------------+

Important

Don’t use loopback addresses (::1 or any address in 127.0.0.0/8) for the IP to hostname mapping. The cluster hosts must be able to reach each other using these IP addresses.
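The check above can be automated with a small sketch (assumes `getent`, which is available on the Linux base system):

```shell
# Hedged helper: resolve a hostname and flag loopback mappings, which must
# not be used for the cluster's IP to hostname mapping (see the note above).
check_not_loopback() {
  addr=$(getent hosts "$1" | awk '{print $1; exit}')
  case "$addr" in
    127.*|::1) echo "$1 resolves to loopback ($addr)" ;;
    "")        echo "$1 does not resolve" ;;
    *)         echo "$1 -> $addr" ;;
  esac
}

check_not_loopback localhost   # demo: localhost always maps to a loopback address
```

Run it for every node hostname; each one should print a routable address, never a loopback warning.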

Configure cluster

To configure the cluster, use the following procedure:

  1. Configure SSH authentication.

  2. Configure which hosts should be managed by the control node.

  3. Configure which hosts are part of the cluster.

  4. Run the Ansible playbook.

Configure SSH authentication

The cluster will be configured with the Ansible configuration management system. Ansible needs root access over SSH in order to perform the configuration management tasks. These tasks are part of a playbook that can be executed from any one of the cluster nodes. Any custom overrides that you define for this Ansible playbook are saved as YAML files, which are kept synchronized between the cluster nodes. This keeps the configuration management functionality fully operational in case of problems with one of the nodes.

To allow Ansible root access to all cluster nodes, passwordless authentication must be configured:

  1. Log in to each node over SSH.

  2. Obtain the SSH public keys.

  3. Authorize the SSH keys for root login.

  4. Test passwordless login.

Log in to each node over SSH

Use an SSH client like OpenSSH to log in to each node and open the command line (File ‣ Open shell).

Obtain the SSH public keys

On the command line, print the node's public ECDSA host key:

cat /etc/ssh/ssh_host_ecdsa_key.pub

The output from the command should look similar to:

ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABB...

Note down the complete SSH key. Perform this step on all nodes in the cluster.

Authorize the SSH keys for root login

The SSH public keys from the previous step must now be added to the list of authorized keys on all nodes.

  1. Log in to the cockpit app on node 1. The cockpit app can be accessed on https://node1.example.com:9090 (replace node1.example.com with the correct hostname).

  2. Open the account settings for the root user (Accounts ‣ root).

  3. Use the Add key button to add the public keys.

  4. Repeat the above steps for node 2 and 3.
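Cockpit's Add key button effectively appends the key to root's authorized_keys file. If you prefer the command line, an equivalent hedged sketch (the key string is a truncated placeholder, and the demo writes to a temporary directory; on a real node the target is /root/.ssh/authorized_keys):

```shell
# Hedged command-line equivalent of cockpit's "Add key" button: append a
# node's public host key to root's authorized_keys file.
keydir=$(mktemp -d)                       # stand-in for /root/.ssh
pubkey='ecdsa-sha2-nistp256 AAAAE2VjZHNh... root@node2.example.com'  # placeholder key
touch "$keydir/authorized_keys"
chmod 600 "$keydir/authorized_keys"       # sshd requires strict permissions
printf '%s\n' "$pubkey" >> "$keydir/authorized_keys"
grep -c '^ecdsa-sha2-nistp256' "$keydir/authorized_keys"   # prints 1
rm -rf "$keydir"
```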

Test passwordless login

  1. Log in to node 1 using SSH.

  2. On the command line, log in as root on nodes 1, 2 and 3.

    sudo ssh root@node1.example.com
    sudo ssh root@node2.example.com
    sudo ssh root@node3.example.com
    

    Verify the host key fingerprint and answer yes if asked to continue. Check that the login succeeds.

    Note

    Logging in using SSH should not require the password of the remote root user. However, because the command runs with sudo, you might have to provide the password for the local user.

  3. Log out of node 1.

    exit
    
  4. Repeat the above steps for node 2 and 3.
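To verify the fingerprint shown at first login, compare it with the fingerprint of the target node's own host key, e.g. by running `ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub` on that node. A self-contained sketch of the fingerprint format, using a throwaway key instead of a real host key:

```shell
# Demo of the fingerprint format SSH displays on first connect: generate a
# throwaway ECDSA key and print its SHA256 fingerprint. On a real node, run
# ssh-keygen -lf against /etc/ssh/ssh_host_ecdsa_key.pub and compare with
# the fingerprint shown by the ssh client.
tmp=$(mktemp -d)
ssh-keygen -q -t ecdsa -N '' -f "$tmp/demo_key"
ssh-keygen -lf "$tmp/demo_key.pub"
rm -rf "$tmp"
```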

Configure which hosts should be managed

The hostnames of all the nodes should be added to the Ansible hosts file. This file, together with the whole Ansible inventory, is synchronized between all nodes at the end of each playbook run.

sudo vim /etc/ciphermail/ansible/hosts

All hostnames should be added to the ciphermail_all group. CipherMail Gateway nodes should additionally be added to ciphermail_gateway, while CipherMail Webmail nodes should be added to ciphermail_webmail.

Example
[ciphermail_all]
node1.example.com
node2.example.com
node3.example.com

[ciphermail_gateway]
node1.example.com
node2.example.com
node3.example.com

Important

The default ‘localhost’ entry should be removed in cluster setups.
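After editing, the inventory can be sanity-checked with a quick sketch that prints each group's hosts. The here-document below is an inline copy of the example above; on a real node, feed it /etc/ciphermail/ansible/hosts instead:

```shell
# Print "group host" pairs from an INI-style Ansible inventory file.
# Replace the here-document with /etc/ciphermail/ansible/hosts on a node.
awk '/^\[/ { group = $0; next } NF { print group, $0 }' <<'EOF'
[ciphermail_all]
node1.example.com
node2.example.com
node3.example.com

[ciphermail_gateway]
node1.example.com
node2.example.com
node3.example.com
EOF
```

Every node should appear under [ciphermail_all], and no line should list localhost.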

Enable the MariaDB Galera cluster

The list of hostnames of all the cluster nodes should be added as an Ansible variable override.

echo "common__mysql_cluster_nodes: ['node1.example.com', 'node2.example.com', 'node3.example.com']" | sudo tee /etc/ciphermail/ansible/group_vars/all/cluster.yml

The file /etc/ciphermail/ansible/group_vars/all/cluster.yml should look similar to:

common__mysql_cluster_nodes: ['node1.example.com', 'node2.example.com', 'node3.example.com']

Note

The MariaDB Galera cluster will be bootstrapped from the first node in this list.

Run the Ansible playbook

The cluster will be configured by Ansible when running the playbook:

sudo cm-run-playbook --all-hosts

The Ansible playbook will configure the local firewall, generate certificates and keys for MariaDB, configure database replication and bootstrap the cluster. If successful, the playbook recap should look like:

PLAY RECAP *******************************************************************************************************
node1.example.com          : ok=99   changed=21   unreachable=0    failed=0    skipped=8    rescued=0    ignored=1
node2.example.com          : ok=98   changed=20   unreachable=0    failed=0    skipped=8    rescued=0    ignored=1
node3.example.com          : ok=98   changed=20   unreachable=0    failed=0    skipped=8    rescued=0    ignored=1

To check if all the nodes of the cluster are active, use the following command:

sudo cm-cluster-control --show

wsrep_cluster_size should report that three nodes are active:

+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 8                                    |
| wsrep_cluster_size       | 3                                    |
| wsrep_cluster_state_uuid | 823e389b-eb11-11eb-9b32-d3c924e58f21 |
| wsrep_cluster_status     | Primary                              |
| wsrep_connected          | ON                                   |
| wsrep_gcomm_uuid         | 95e32cea-eb11-11eb-abd2-2bef173638db |
| wsrep_last_committed     | 0                                    |
| wsrep_local_state_uuid   | 823e389b-eb11-11eb-9b32-d3c924e58f21 |
| wsrep_ready              | ON                                   |
+--------------------------+--------------------------------------+
+-----------------------+---------------------------------------------------------------+
| Variable_name         | Value                                                         |
+-----------------------+---------------------------------------------------------------+
| wsrep_cluster_address | gcomm://node1.example.com,node2.example.com,node3.example.com |
| wsrep_cluster_name    | ciphermail                                                    |
| wsrep_node_address    | node1.example.com                                             |
| wsrep_node_name       | node1.example.com                                             |
+-----------------------+---------------------------------------------------------------+
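For ongoing monitoring, wsrep_cluster_size can be extracted from this table output. A hedged sketch, where the here-document stands in for the real output of `sudo cm-cluster-control --show`:

```shell
# Hedged monitoring sketch: pull wsrep_cluster_size out of the table output
# shown above and report cluster health. The here-document is a stand-in for
# the real output of: sudo cm-cluster-control --show
size=$(awk -F'|' '/wsrep_cluster_size/ { gsub(/[[:space:]]/, "", $3); print $3 }' <<'EOF'
| wsrep_cluster_size       | 3                                    |
EOF
)
if [ "$size" -eq 3 ]; then
  echo "cluster healthy ($size of 3 nodes)"
else
  echo "cluster degraded ($size of 3 nodes)"
fi
```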

Troubleshooting

If the playbook runs into an issue on one of the nodes, the play recap will report a failure. In that case, it is advised to reset the cluster configuration, go over all the required steps again, and then re-run the playbook.

To reset the complete cluster config, run the following command:

sudo ANSIBLE_CONFIG="/usr/share/ciphermail-ansible/ansible.cfg" ansible -m command -a 'rm /etc/my.cnf.d/ciphermail-cluster.cnf /var/lib/mysql/grastate.dat /etc/pki/tls/private/ciphermail.key /etc/pki/tls/private/ciphermail.pem /etc/pki/tls/certs/ciphermail.crt /etc/pki/tls/certs/ciphermail-ca.crt' ciphermail_all

Warning

Only run the above command when setting up the cluster. Do not run this on an already configured and functional cluster.

Then redo all the steps to set up the cluster and re-run the playbook.