AWS Storage Blog

Deploying self-managed MariaDB high availability using Amazon EBS Multi-Attach

For most organizations, workload availability is a key performance indicator: it affects the operations that deliver goods, services, and critical business transactions. Availability needs vary from workload to workload and are aligned with an organization’s business requirements and the criticality of its services. To learn more about how to architect in AWS to meet your availability goals, we recommend reading the AWS Well-Architected Framework.

When deploying relational databases in AWS, we always first recommend the fully managed Amazon Relational Database Service (Amazon RDS). Amazon RDS makes it easy to use replication to enhance availability and reliability for production workloads. We discuss a single Availability Zone (AZ) configuration in this blog, but Amazon RDS offers a more resilient Multi-AZ deployment option that enables mission-critical workloads through high availability and built-in automated failover from a primary database to a synchronously replicated secondary database. Amazon RDS also has Read Replicas, so you can scale out beyond the capacity of a single database deployment for read-heavy database workloads. Simply put, Amazon RDS removes much of the complexity of managing and maintaining your database, so you can focus on your business needs. Amazon RDS for MariaDB provides a predictable and cost-effective platform to operate, scale, and manage your MariaDB database in AWS.

Some customers have specific, and sometimes complex, requirements for which Amazon RDS might not apply, and they choose to self-manage their MariaDB databases using Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS). This blog is for those customers who want to build a MariaDB cluster in a single AZ using EC2 instances and EBS volumes. In this blog, we show you how to implement high availability (HA) for your MariaDB server running on EC2 instances in a single AZ by leveraging EBS volumes with Amazon EBS Multi-Attach enabled. We go through the process of setting up the application environment on Multi-Attach enabled EBS volumes using two EC2 instances that are part of a Linux cluster, and the steps to mitigate a node failure by automatically switching over to another node in the cluster. The solution improves the availability of your database by adding a standby instance that takes over when the active instance becomes unavailable.

This is ideal if you are:

  • Considering self-managed MySQL/MariaDB databases on Amazon EC2 using EBS volumes
  • Managing database workloads with low-latency requirements and failover within a single AZ

Important Note: AWS does not recommend a single-AZ deployment for workloads that have greater than 99% availability requirements. See the AWS Well-Architected Framework for best practices.

Solution overview

Here are the high-level steps for setting up the MariaDB cluster on Multi-Attach enabled EBS volumes:

  1. Setting up EC2 instances and Multi-Attach enabled EBS volumes
  2. Installing the cluster software
  3. Configuring the cluster
  4. Setting up MariaDB
  5. Testing instance failover 

And here is the high-level architecture of the solution.


In this solution, three EC2 instances are launched in a single AZ, of which two instances named ‘ma-host-1’ and ‘ma-host-2’ form the cluster named ‘macluster’. Along with the MariaDB server, the following components are installed on the two instances to coordinate among the cluster instances:

  • Pacemaker: an open-source high-availability cluster resource manager. Resources are services on a host that need to be kept highly available; in this case, MariaDB is the cluster resource.
  • Corosync: an open-source group communication system that detects component failures and orchestrates necessary failover procedures to minimize interruptions to applications.
  • fence_aws agent: an open-source I/O fencing agent for AWS, which uses the boto3 library internally. Fencing is a critical component that prevents I/O from instances that are unresponsive on the cluster network but still have access to the shared EBS volume. A configuration sketch follows this list.
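The fencing agent itself is configured as part of the prerequisites post; for reference, a fence_aws STONITH resource for this cluster could be created along the following lines. This is a hedged sketch: the Region and instance IDs are placeholders, not values from this post.

# Hedged sketch: map each cluster node name to its EC2 instance ID (placeholder IDs and Region)
[ec2-user@ma-host-1 ~]$ sudo pcs stonith create clusterfence fence_aws region=us-east-1 pcmk_host_map="ma-host-1:i-11111111111111111;ma-host-2:i-22222222222222222" power_timeout=240 pcmk_reboot_timeout=480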

The client EC2 instance is used for connecting to the MariaDB server using the MySQL client. With this architecture, users access the MariaDB server through a floating IP address, called a Virtual IP (VIP). If the EC2 instance serving requests fails, the MariaDB server installed on the other EC2 instance in the cluster is activated and starts serving client requests.

Prerequisites

Follow the steps in this post (up to, but not including, the "Setting up GFS2" section) to perform the following operations:

  1. Provision two EC2 instances and a Multi-Attach enabled EBS volume, and attach the volume to both instances (an AWS CLI sketch follows this list).
  2. Install and configure the cluster software: Corosync, Pacemaker, and STONITH.
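For reference, the following AWS CLI sketch shows one way to create a Multi-Attach enabled volume and attach it to both instances. Multi-Attach requires a Provisioned IOPS (io1/io2) volume; the AZ, size, IOPS value, and IDs below are illustrative placeholders.

# Create a Multi-Attach enabled Provisioned IOPS volume (placeholder values)
aws ec2 create-volume --volume-type io2 --size 50 --iops 1000 --multi-attach-enabled --availability-zone us-east-1a
# Attach the same volume to both cluster instances as /dev/sdf (placeholder IDs)
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-11111111111111111 --device /dev/sdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-22222222222222222 --device /dev/sdf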

Walkthrough

After the instances are provisioned, the shared volume is attached, and the cluster software is installed on both instances, take the following steps to deploy and manage a highly available MariaDB server on clustered instances accessing the shared volume:

  1. Configure Logical Volume Manager (LVM)
  2. Install and configure MariaDB to use the shared EBS volume
  3. Volume group exclusive activation
  4. Configure MariaDB Cluster resources and connect with the MariaDB server
  5. Create database schema and tables
  6. Provision a third EC2 instance, install MySQL client and connect using the VIP
  7. Test failover by putting one of the nodes in your cluster on standby

Step 1: Configure Logical Volume Manager (LVM)

1. Create an LVM logical volume called lv1 from a volume group called clustervg, incorporating 100% of the free space on the physical volume /dev/sdf, on one of the instances (ma-host-1).

[ec2-user@ma-host-1 ~]$ sudo pvcreate /dev/sdf
Physical volume "/dev/sdf" successfully created.
[ec2-user@ma-host-1 ~]$ sudo vgcreate clustervg /dev/sdf
Volume group "clustervg" successfully created
[ec2-user@ma-host-1 ~]$ sudo lvcreate -l 100%FREE -n lv1 clustervg
Logical volume "lv1" created.

2. Create a file system on the logical volume. XFS is our chosen file system in this example because it is a highly scalable, high-performance, robust, and mature 64-bit journaling file system that supports very large files (up to 1024 TiB in size) and is supported by most Linux distributions.

[ec2-user@ma-host-1 ~]$ sudo mkfs.xfs /dev/clustervg/lv1
meta-data=/dev/clustervg/lv1     isize=512    agcount=4, agsize=3276544 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=13106176, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=6399, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Step 2: Install and configure MariaDB to use the shared EBS volume

1. Install MariaDB on both instances and remove the contents of the default data directory at /var/lib/mysql so you can mount the logical volume you created earlier at that path.

[ec2-user@ma-host-1 ~]$ sudo yum install mariadb mariadb-server
[ec2-user@ma-host-2 ~]$ sudo yum install mariadb mariadb-server
[ec2-user@ma-host-1 ~]$ sudo rm -rf /var/lib/mysql/*
[ec2-user@ma-host-2 ~]$ sudo rm -rf /var/lib/mysql/*

2. Mount the shared storage volume temporarily on either instance at the /var/lib/mysql directory to configure the MariaDB database.

[ec2-user@ma-host-1 ~]$ sudo mount -t xfs /dev/clustervg/lv1 /var/lib/mysql

3. Disable the MariaDB service on both instances.

[ec2-user@ma-host-1 ~]$ sudo systemctl disable mariadb.service 
[ec2-user@ma-host-2 ~]$ sudo systemctl disable mariadb.service

4. Configure MariaDB database on the shared storage and verify the content in /var/lib/mysql.

[ec2-user@ma-host-1 ~]$ sudo mysql_install_db --datadir=/var/lib/mysql --user=mysql 
[ec2-user@ma-host-1 ~]$ ls /var/lib/mysql
aria_log.00000001  aria_log_control  mysql  performance_schema  test

5. Edit the /etc/my.cnf configuration file on both instances and make the following changes.

[ec2-user@ma-host-1 ~]$ sudo cp -p /etc/my.cnf /etc/my.cnf_orgnl
[ec2-user@ma-host-1 ~]$ sudo vi /etc/my.cnf
[ec2-user@ma-host-2 ~]$ sudo cp -p /etc/my.cnf /etc/my.cnf_orgnl
[ec2-user@ma-host-2 ~]$ sudo vi /etc/my.cnf

[mysqld]
datadir=/var/lib/mysql 
socket=/var/lib/mysql/mysql.sock 
# Disabling symbolic-links is recommended to prevent assorted security risks 
symbolic-links=0 
[mysqld_safe] 
log-error=/var/lib/mysql/log/mariadb.log 
pid-file=/var/lib/mysql/run/mariadb.pid 
!includedir /etc/my.cnf.d

6. Note that in the my.cnf file, the log-error and pid-file paths have changed. Therefore, create the appropriate directories on the shared storage.

[ec2-user@ma-host-1 ~]$ sudo mkdir -p /var/lib/mysql/log 
[ec2-user@ma-host-1 ~]$ sudo mkdir -p /var/lib/mysql/run 
[ec2-user@ma-host-1 ~]$ sudo chown mysql:mysql /var/lib/mysql/log 
[ec2-user@ma-host-1 ~]$ sudo chown mysql:mysql /var/lib/mysql/run

7. Start MariaDB service on the shared storage, verify its status, configure MariaDB root password, verify the ability to log in with the configured password, and then unmount /var/lib/mysql after stopping the MariaDB service. For more information about configuring root passwords and database security, see mysql_secure_installation.

[ec2-user@ma-host-1 ~]$ sudo systemctl start mariadb.service 
[ec2-user@ma-host-1 ~]$ sudo systemctl status mariadb.service
mariadb.service - MariaDB database server 
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled) 
Active: active (running) since … 
ma-host-1 systemd[1]: Started MariaDB database server. 
[ec2-user@ma-host-1 ~]$ mysql_secure_installation 
[ec2-user@ma-host-1 ~]$ mysql -uroot -p 
[ec2-user@ma-host-1 ~]$ sudo systemctl stop mariadb.service 
[ec2-user@ma-host-1 ~]$ sudo umount /var/lib/mysql

Step 3: Volume group exclusive activation

If the volume group is activated outside of the cluster, there is a risk of data corruption. To prevent this, add a volume group entry to the /etc/lvm/lvm.conf file on each cluster instance so that only the cluster can activate the volume group.

1. Stop the cluster service on all nodes from one of the instances.

[ec2-user@ma-host-1 ~]$ sudo pcs cluster stop --all 
ma-host-2: Stopping Cluster (pacemaker)... 
ma-host-1: Stopping Cluster (pacemaker)... 
ma-host-1: Stopping Cluster (corosync)... 
ma-host-2: Stopping Cluster (corosync)...

2. Disable and stop the lvm2-lvmetad service and replace use_lvmetad = 1 with use_lvmetad = 0 on both EC2 instances. The lvmconf command below makes these changes for you.

[ec2-user@ma-host-1 ~]$ sudo lvmconf --enable-halvm --services --startstopservices 
[ec2-user@ma-host-2 ~]$ sudo lvmconf --enable-halvm --services --startstopservices
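You can confirm the change took effect by querying the active LVM configuration; assuming the previous command succeeded, the output should report 0.

# Check the effective use_lvmetad setting
[ec2-user@ma-host-1 ~]$ sudo lvmconfig global/use_lvmetad
use_lvmetad=0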

3. Edit the /etc/lvm/lvm.conf file on both EC2 instances and add the list of volume groups that are not part of the cluster storage. This tells LVM not to activate the cluster Volume Group (VG) during system start-up.

[ec2-user@ma-host-1 ~]$ sudo vi /etc/lvm/lvm.conf 
[ec2-user@ma-host-2 ~]$ sudo vi /etc/lvm/lvm.conf

In this example, since there are no other volumes, we need to add an empty list:

volume_list = []

4. Rebuild the initramfs boot image and reboot both instances. After the command executes and the instances reboot, the OS will no longer try to activate the VG controlled by the cluster.

[ec2-user@ma-host-1 ~]$ sudo dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r) 
[ec2-user@ma-host-1 ~]$ sudo reboot 
[ec2-user@ma-host-2 ~]$ sudo dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r) 
[ec2-user@ma-host-2 ~]$ sudo reboot

Step 4: Configure MariaDB cluster resources and connect with the MariaDB server

1. As a prerequisite, both instances need the AWS CLI installed and configured with an access key, secret key, and default Region.

[ec2-user@ma-host-1 ~]$ sudo pip3 install awscli 
[ec2-user@ma-host-2 ~]$ sudo pip3 install awscli 
[ec2-user@ma-host-1 ~]$ sudo aws configure 
AWS Access Key ID [None]: <ACCESS_KEY_ID> 
AWS Secret Access Key [None]: <SECRET_ACCESS_KEY> 
Default region name [None]: <REGION> 
Default output format [None]: 
[ec2-user@ma-host-2 ~]$ sudo aws configure

For more details, refer to the AWS Command Line Interface User Guide.

2. Now start the cluster service on one of the instances and configure MariaDB resources.

[ec2-user@ma-host-1 ~]$ sudo pcs cluster start --all 
ma-host-1: Starting Cluster (corosync)... 
ma-host-2: Starting Cluster (corosync)... 
ma-host-2: Starting Cluster (pacemaker)... 
ma-host-1: Starting Cluster (pacemaker)... 
[ec2-user@ma-host-1 ~]$ sudo pcs resource create mariadb-lvm-res LVM volgrpname="clustervg" exclusive=true --group mariadb-group
Assumed agent name 'ocf:heartbeat:LVM' (deduced from 'LVM') 
[ec2-user@ma-host-1 ~]$ sudo pcs resource create mariadb-fs-res Filesystem  device="/dev/clustervg/lv1" directory="/var/lib/mysql" fstype="xfs" --group mariadb-group 
Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')

3. Configure a secondary private IP address for one of the instances (ma-host-1) in the Amazon EC2 console.

From the Amazon EC2 console, select the instance ma-host-1, open the 'Actions' menu, choose 'Networking', and then choose 'Manage IP addresses'. Assign the secondary private IP address there.

Then configure it as the virtual IP in the cluster. Users will use this IP address to connect to the database server.
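If you prefer the command line, the same assignment can be made with the AWS CLI; the network interface ID below is a placeholder for the primary ENI of ma-host-1.

# Assign the secondary private IP used as the VIP (placeholder ENI ID)
aws ec2 assign-private-ip-addresses --network-interface-id eni-0123456789abcdef0 --private-ip-addresses 172.31.4.69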

[ec2-user@ma-host-1 ~]$ sudo pcs resource create privip awsvip secondary_private_ip=172.31.4.69  --group mariadb-group 
Assumed agent name 'ocf:heartbeat:awsvip' (deduced from 'awsvip') 
[ec2-user@ma-host-1 ~]$ sudo pcs resource create MARIADB-VIP ocf:heartbeat:IPaddr2 ip=172.31.4.69 nic="eth0" cidr_netmask=24 op monitor interval=30s --group mariadb-group 
[ec2-user@ma-host-1 ~]$ sudo pcs resource create mariadb-server-res ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" datadir="/var/lib/mysql" pid="/var/lib/mysql/run/mariadb.pid" socket="/var/lib/mysql/mysql.sock" additional_parameters="--bind-address=0.0.0.0" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=30s --group mariadb-group

4. Configure the ordering of the cluster resources and verify them; you should see something like the following:

[ec2-user@ma-host-1 ~]$ sudo pcs constraint order start mariadb-lvm-res then mariadb-fs-res 
Adding mariadb-lvm-res mariadb-fs-res (kind: Mandatory) (Options: first-action=start then-action=start) 
[ec2-user@ma-host-1 ~]$ sudo pcs constraint order start mariadb-fs-res then privip 
Adding mariadb-fs-res privip (kind: Mandatory) (Options: first-action=start then-action=start) 
[ec2-user@ma-host-1 ~]$ sudo pcs constraint order start privip then MARIADB-VIP 
Adding privip MARIADB-VIP (kind: Mandatory) (Options: first-action=start then-action=start) 
[ec2-user@ma-host-1 ~]$ sudo pcs constraint order start MARIADB-VIP then mariadb-server-res 
Adding MARIADB-VIP mariadb-server-res (kind: Mandatory) (Options: first-action=start then-action=start) 
[ec2-user@ma-host-1 ~]$ sudo pcs constraint list 
Location Constraints: 
Ordering Constraints:
 start mariadb-lvm-res then start mariadb-fs-res (kind:Mandatory)
 start mariadb-fs-res then start privip (kind:Mandatory)
 start privip then start MARIADB-VIP (kind:Mandatory)
 start MARIADB-VIP then start mariadb-server-res (kind:Mandatory) 
Colocation Constraints: 
Ticket Constraints:

5. Check if the MySQL port (3306) is open.

[ec2-user@ma-host-1 ~]$ netstat -ntlup 
(No info could be read for "-p": geteuid()=1000 but you should be root.) 
Active Internet connections (only servers) 
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name 
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      - 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      - 
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      - 
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      - 

6. Verify the status of the cluster and the cluster resources you have created so far; you should see something like the following:

[ec2-user@ma-host-1 ~]$ sudo pcs status 
Cluster name: macluster 
Stack: corosync 
Current DC: ma-host-2 (version 1.1.23-1.amzn2.1-9acf116022) - partition with quorum 
Last updated: Sun May 23 07:42:29 2021 
Last change: Sun May 23 07:18:27 2021 by root via cibadmin on ma-host-2 
2 nodes configured 
6 resource instances configured 
Online: [ ma-host-1 ma-host-2 ] 
Full list of resources: 
clusterfence      (stonith:fence_aws):         Started ma-host-1
 Resource Group: mariadb-group
     mariadb-lvm-res         (ocf::heartbeat:LVM):        Started ma-host-1
     mariadb-fs-res (ocf::heartbeat:Filesystem): Started ma-host-1
     privip        (ocf::heartbeat:awsvip):     Started ma-host-1
     MARIADB-VIP   (ocf::heartbeat:IPaddr2):    Started ma-host-1
     mariadb-server-res      (ocf::heartbeat:mysql):      Started ma-host-1 
Daemon Status:
 corosync: active/enabled
 pacemaker: active/enabled
 pcsd: active/enabled 
[ec2-user@ma-host-1 ~]$ sudo crm_verify -L -V

7. Connect to the MariaDB server from both instances using the configured secondary IP (the VIP) to make sure that you can access the database server.

[ec2-user@ma-host-1 ~]$ sudo mysql -h172.31.4.69 -uroot -p 
[ec2-user@ma-host-2 ~]$ sudo mysql -h172.31.4.69 -uroot -p

Step 5: Create database schema and tables

1. Create a test schema and table. Insert a few rows into the table for testing purposes.

MariaDB [(none)]> create database CMS; 
Query OK, 1 row affected (0.00 sec) 
MariaDB [(none)]> use CMS; 
Database changed 
MariaDB [CMS]> create table courses( course_id INT NOT NULL AUTO_INCREMENT, course_title VARCHAR(100) NOT NULL, course_author VARCHAR(40) NOT NULL, submission_date DATE, PRIMARY KEY ( course_id )); 
Query OK, 0 rows affected (0.01 sec) 
MariaDB [CMS]> INSERT INTO courses(course_title, course_author, submission_date) VALUES ("Head First Java, 2nd Edition", "Kathy Sierra", NOW()); 
Query OK, 1 row affected, 1 warning (0.00 sec) 
MariaDB [CMS]> INSERT INTO courses (course_title, course_author, submission_date) VALUES ("Cracking the Coding Interview: 189 Programming Questions and Solutions", "Gayle Laakmann McDowell ", NOW()); 
Query OK, 1 row affected, 1 warning (0.00 sec)

Step 6: Provision another EC2 instance, install MySQL client, and connect using VIP

1. Launch another EC2 instance to use as a client to connect to the MariaDB server, and make sure you can access it via SSH. Refer to the Tutorial: Get started with Amazon EC2 Linux instances for more details. After accessing the instance, install the MySQL client software package to access the MariaDB server.

[ec2-user@ip-172-31-15-121 ~]$ sudo yum install -y https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm 
[ec2-user@ip-172-31-15-121 ~]$ sudo yum install -y mysql-community-client 
[ec2-user@ip-172-31-15-121 ~]$ mysql -h172.31.4.69 -uroot -p 
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g. 
Your MySQL connection id is 7 
Server version: 5.5.68-MariaDB MariaDB Server 
Copyright (c) 2000, 2021, Oracle and/or its affiliates. 
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
mysql> use CMS; 
Reading table information for completion of table and column names 
You can turn off this feature to get a quicker startup with -A

Database changed 
mysql> select * from courses; 
+-----------+------------------------------------------------------------------------+--------------------------+-----------------+ 
| course_id | course_title                                                           | course_author            | submission_date | 
+-----------+------------------------------------------------------------------------+--------------------------+-----------------+ 
|         1 | Head First Java, 2nd Edition                                           | Kathy Sierra             | 2021-05-23      | 
|         2 | Cracking the Coding Interview: 189 Programming Questions and Solutions | Gayle Laakmann McDowell  | 2021-05-23      | 
+-----------+------------------------------------------------------------------------+--------------------------+-----------------+ 
2 rows in set (0.00 sec)

Step 7: Test failover by putting one of the cluster nodes on standby

1. Verify the cluster status to identify where the cluster resources are running.

[ec2-user@ma-host-1 ~]$ sudo crm_mon -r1 
Stack: corosync 
Current DC: ma-host-2 (version 1.1.23-1.amzn2.1-9acf116022) - partition with quorum 
Last updated: Sun May 23 16:22:13 2021 
Last change: Sun May 23 07:18:27 2021 by root via cibadmin on ma-host-2 
2 nodes configured 
6 resource instances configured 
Online: [ ma-host-1 ma-host-2 ] 

2. From the above status, it’s clear that the cluster resources are running on ma-host-2. Put it on standby and verify the cluster status again.

[ec2-user@ma-host-1 ~]$ sudo pcs cluster standby ma-host-2 
[ec2-user@ma-host-1 ~]$ sudo crm_mon -r1 
Stack: corosync 
Current DC: ma-host-2 (version 1.1.23-1.amzn2.1-9acf116022) - partition with quorum 
Last updated: Sun May 23 16:31:41 2021 
Last change: Sun May 23 16:30:52 2021 by root via cibadmin on ma-host-1 
2 nodes configured 
6 resource instances configured 
Node ma-host-2: standby 
Online: [ ma-host-1 ]

3. The cluster resources switched over to ma-host-1 almost immediately. Now, execute the following MySQL commands in the MySQL client instance, where a connection with the MariaDB server was already established (in the previous step), and check that the server is available to accept requests.

mysql> INSERT INTO courses(course_title, course_author, submission_date) VALUES ("Building Microservices", "Sam Newman", NOW()); 
Query OK, 1 row affected, 1 warning (0.00 sec) 
mysql> select * from courses; 
+-----------+------------------------------------------------------------------------+--------------------------+-----------------+ 
| course_id | course_title                                                           | course_author            | submission_date | 
+-----------+------------------------------------------------------------------------+--------------------------+-----------------+ 
|         1 | Head First Java, 2nd Edition                                           | Kathy Sierra             | 2021-05-23      | 
|         2 | Cracking the Coding Interview: 189 Programming Questions and Solutions | Gayle Laakmann McDowell  | 2021-05-23      | 
|         3 | Building Microservices                                                 | Sam Newman               | 2021-05-23      | 
+-----------+------------------------------------------------------------------------+--------------------------+-----------------+ 
3 rows in set (0.00 sec)

4. This verifies that the failover happened and that users observed no database service interruption, as the switchover from ma-host-2 to ma-host-1 occurred immediately.
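To complete the test, you can bring ma-host-2 back into the cluster; with the pcs version used in this post, the command is along these lines:

# Return the standby node to active cluster membership
[ec2-user@ma-host-1 ~]$ sudo pcs cluster unstandby ma-host-2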

Resiliency Considerations

EBS volumes are designed to be highly available, reliable, and durable: volume data is replicated across multiple servers in an AZ to prevent the loss of data from the failure of any single component. At the same time, to protect against the unlikely event of an AZ-level failure, there are two recommended approaches to protecting your MariaDB data, depending on your RTO (Recovery Time Objective) and RPO (Recovery Point Objective).

Application-level Backup

With application-level backup architectures, you create backups at the application layer, working with databases and tables. For MariaDB, you have many options, such as Percona XtraBackup, mysqldump, and Mariabackup (an open-source tool for performing online backups of your data). Your backup schedule depends on your business continuity requirements. The target for these backups could be another EBS volume, an Amazon S3 bucket, or an Amazon EFS shared directory.
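As a minimal sketch, assuming the VIP from this post and a hypothetical S3 bucket name, a logical backup could be as simple as piping mysqldump output to a file and copying it to Amazon S3:

# Logical backup through the VIP; --single-transaction gives a consistent
# snapshot for InnoDB tables (the bucket name is a placeholder)
mysqldump -h 172.31.4.69 -uroot -p --single-transaction --all-databases | gzip > /tmp/mariadb-backup.sql.gz
aws s3 cp /tmp/mariadb-backup.sql.gz s3://example-backup-bucket/mariadb/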

Volume-level Backup

Volume-level backups are performed by copying and tracking changes to your chosen EBS volumes. Amazon EBS snapshots managed by Amazon Data Lifecycle Manager are an easy way to automate this process. These snapshots are saved to Amazon Simple Storage Service (Amazon S3) for long-term retention with 99.999999999% durability. You can also use Data Lifecycle Manager policies to copy critical data into another Region.
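For a one-off snapshot outside of a Data Lifecycle Manager policy, a sketch with a placeholder volume ID looks like this:

# Point-in-time snapshot of the shared volume (placeholder volume ID);
# Data Lifecycle Manager can run this on a schedule for you
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "macluster shared MariaDB volume"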

Cleaning up

Once you are done with the testing, remember to terminate the EC2 instances and delete the EBS volume. If you have any important data worth saving, make sure to take a backup before deleting the volumes.
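A hedged AWS CLI sketch of the cleanup, with placeholder IDs for the three instances and the shared volume:

# Terminate the two cluster instances and the client instance (placeholder IDs)
aws ec2 terminate-instances --instance-ids i-11111111111111111 i-22222222222222222 i-33333333333333333
# Delete the shared volume once it is detached from the terminated instances
aws ec2 delete-volume --volume-id vol-0123456789abcdef0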

Conclusion

In this blog, we showed you how to set up a self-managed MariaDB server with improved availability using the Amazon EBS Multi-Attach feature. We used cluster software, including Pacemaker, Corosync, and the AWS resource agents for I/O fencing, to protect data during concurrent access from multiple nodes. Customers who run their MySQL/MariaDB database server on EC2 with data on EBS volumes in a single AZ, or who have concerns about instance failures affecting the availability of their workloads, can achieve higher availability of their database server by using a shared EBS volume with Multi-Attach and the automated recovery solution described in this post. Note that the MariaDB server setup on the shared volume needs detailed planning and testing based on several factors unique to every environment.

With AWS, you will always have database freedom, with offerings ranging from managed databases with Amazon RDS and managed analytics options for data at any scale, to deploying databases from the AWS Partner Network and developing your own self-managed databases.


Thank you for reading this blog. If you have any comments or questions, don’t hesitate to leave them in the comments section.

Kayalvizhi Kandasamy

Kayalvizhi Kandasamy works with digital native companies to support their innovation. As a Senior Solutions Architect (APAC) at Amazon Web Services (AWS), she leverages her experience to help people bring their ideas to life, focusing primarily on microservice architectures and cloud-native solutions using AWS products and services. Outside of work, she likes playing chess. She is a FIDE-rated chess player, coaches her daughters in the art of chess, and prepares them for various chess tournaments.

Ryan Sayre

Ryan Sayre is a service-aligned worldwide Senior Storage Specialist focused on EBS for AWS based in Portland, Oregon. He works with AWS customers to help them choose and manage highly resilient and performant data storage for their workloads at scale. He enjoys sharing new technologies and building best-of-breed technology solutions.