AWS Database Blog

Upgrade your Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL database, Part 1: Comparing upgrade approaches

If you need to upgrade your Amazon Relational Database Service (Amazon RDS) for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition database to a newer version, you can choose between in-place upgrades (where you don’t have to create a new DB instance) and out-of-place upgrades, depending on the downtime your application can afford.

With in-place upgrades, the application continues to connect to the same database endpoint that it was connecting to prior to the upgrade. However, the database is unavailable for a longer duration while the upgrade happens. In contrast, in an out-of-place upgrade, the endpoint changes after the upgrade, but the downtime needed is considerably less because the data is copied to a new database instance and kept in sync until the cutover happens.

In this post, we discuss in-place upgrades and several out-of-place upgrade options, and weigh their pros and cons. In Part 2 of this post, we go into more detail about how to perform an upgrade using the pglogical extension with minimum downtime.

Option A: In-place upgrade

Amazon RDS for PostgreSQL allows you to upgrade your database from version 9.6 all the way to 13 in one action, which eases the administrative tasks involved. Because this is available as part of the managed service, AWS takes care of the heavy lifting and makes multi-version upgrades seamless.

However, this approach involves downtime. The larger the database, the longer the downtime to upgrade it. If you can afford the downtime, this is a good approach because you don’t have to worry about provisioning additional instances or configuring the database with additional extensions for the upgrade. The in-place upgrade uses the pg_upgrade method. To learn more about major version upgrades, see How to perform a major version upgrade.
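
For illustration, the following minimal sketch, using the AWS SDK for Python (Boto3), shows one way to trigger an in-place major version upgrade programmatically; the instance identifier and target engine version are placeholder values.

    import boto3

    rds = boto3.client("rds")

    # Trigger an in-place major version upgrade. AllowMajorVersionUpgrade must
    # be set, and ApplyImmediately starts the upgrade (and the downtime) right
    # away instead of waiting for the next maintenance window.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-postgres-db",  # placeholder instance name
        EngineVersion="13.4",                   # placeholder target version
        AllowMajorVersionUpgrade=True,
        ApplyImmediately=True,
    )

    # Block until the instance is available again on the new version.
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier="my-postgres-db"
    )

Because the instance is unavailable while the upgrade runs, schedule the call for a window your application can tolerate.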

Option B: Snapshot approach

One of the easiest paths for database upgrades is by using snapshots. This method works well but incurs downtime while you upgrade the snapshot. If your application can’t tolerate that downtime, the snapshot upgrade method isn’t the right solution, because any change made to the production database after the snapshot is taken isn’t carried forward to the newly upgraded database.

You can use the snapshot approach when restoring a database for compliance reasons, where you have to upgrade the old database snapshot to the version that is currently supported. Another scenario is when you want to quickly check the application’s compatibility with a new version by upgrading the snapshot and testing your application against the upgraded database.

For more information about the snapshot approach, see Upgrading a PostgreSQL DB snapshot engine version.

Steps to upgrade using the snapshot approach

The snapshot method includes the following high-level steps, with a code sketch after the list:

  1. Shut down or stop the application from making database changes.
  2. Create a manual snapshot at the time of upgrade.
  3. Upgrade the snapshot to the desired version.
  4. Restore the upgraded snapshot to the new database.
  5. Update the configuration or switch the connections to the new database.
  6. Start the application to use the new database.
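
As a rough illustration of steps 2–4, the following Boto3 sketch takes a snapshot, upgrades it, and restores it; all identifiers and the engine version are placeholders.

    import boto3

    rds = boto3.client("rds")

    # Step 2: take a manual snapshot of the source instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="my-postgres-96",    # placeholder source instance
        DBSnapshotIdentifier="upgrade-snapshot",  # placeholder snapshot name
    )
    waiter = rds.get_waiter("db_snapshot_available")
    waiter.wait(DBSnapshotIdentifier="upgrade-snapshot")

    # Step 3: upgrade the snapshot itself to the desired engine version,
    # then wait for it to become available again.
    rds.modify_db_snapshot(
        DBSnapshotIdentifier="upgrade-snapshot",
        EngineVersion="13.4",                     # placeholder target version
    )
    waiter.wait(DBSnapshotIdentifier="upgrade-snapshot")

    # Step 4: restore the upgraded snapshot as a new DB instance.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="my-postgres-13",    # placeholder new instance
        DBSnapshotIdentifier="upgrade-snapshot",
    )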

Benefits of the snapshot approach

The advantage of the snapshot method is that no external service moves or copies the data, so the data stays identical on the new database. The downside is that you experience downtime throughout the upgrade process; this downtime is required to guarantee data consistency from the old to the new database. Also, the upgraded snapshot is stored in Amazon Simple Storage Service (Amazon S3) and is restored in a lazy-loading fashion, which takes longer to complete.

Option C: AWS DMS approach

Another approach is to use AWS Database Migration Service (AWS DMS), which can perform a live migration with minimal downtime. However, AWS DMS has some shortcomings when migrating certain data types. For example, PostgreSQL supports the timestamp with time zone data type, which AWS DMS doesn’t support (at the time of this writing).

For more information about upgrading with AWS DMS, see Achieving minimum downtime for major version upgrades in Amazon Aurora PostgreSQL using AWS DMS.

Steps to upgrade using AWS DMS

To upgrade using AWS DMS, complete the following high-level steps (a condensed code sketch follows the list):

  1. Provision an AWS DMS instance in the same Region as the database.
  2. Create a new target database in the desired database version.
  3. Enable data at-rest encryption on the target database.
  4. Create connection endpoints in AWS DMS for the source and target databases.
  5. Create AWS DMS tasks to replicate the incremental data using change data capture (CDC).
  6. Enable data encryption to protect in-transit data between AWS DMS and the endpoints.
  7. Start the AWS DMS tasks to sync the data between the source and target databases, and verify that the initial data copy and the incremental copy using CDC run successfully.
  8. Configure the application to point to the target database.
  9. Restart the application to use the new (target) database.
  10. Stop the data replication tasks in AWS DMS after making sure the databases are in sync.
  11. Set up the reverse data migration to the old database, which can serve as a fallback plan.
  12. Stop the AWS DMS instances at a later stage if the cutover was successful.
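
A condensed Boto3 sketch of steps 5 and 7 might look like the following; the ARNs and table mappings are placeholders, and it assumes the replication instance and endpoints (steps 1–4) already exist.

    import json

    import boto3

    dms = boto3.client("dms")

    # Step 5: create a task that performs a full load and then ongoing CDC.
    task = dms.create_replication_task(
        ReplicationTaskIdentifier="pg-upgrade-task",
        SourceEndpointArn="arn:aws:dms:region:account:endpoint:source",    # placeholder
        TargetEndpointArn="arn:aws:dms:region:account:endpoint:target",    # placeholder
        ReplicationInstanceArn="arn:aws:dms:region:account:rep:instance",  # placeholder
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "all-public-tables",
                "object-locator": {"schema-name": "public", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    )

    # Step 7: start the task so the initial copy and CDC begin.
    dms.start_replication_task(
        ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
        StartReplicationTaskType="start-replication",
    )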

Benefits of the AWS DMS approach

The advantage of this approach is reduced downtime, because AWS DMS syncs the data continuously; you can cut over the application as soon as the target database has caught up with the source. However, this approach has the following limitations:

  • It requires validating the schema (including data types) and the data on the target.
  • AWS DMS doesn’t support certain data types, like timestamp with time zone.
  • You pay for the AWS DMS resources for the duration of the upgrade project.

For more information about best practices and limitations, see Using a PostgreSQL database as an AWS DMS source.

To overcome these challenges and minimize downtime, you can use a replication-based approach.

Option D: Replication-based approach using pglogical

In this replication-based approach, we replicate data using the pglogical extension, which offers shorter downtime, among other benefits: the downtime is only as long as it takes to point the application to the new database.

When discussing this approach, we use the following terms:

  • Nodes – PostgreSQL DB instances
  • Provider node – Source PostgreSQL database
  • Subscriber node – Target PostgreSQL database
  • Replication slot – A PostgreSQL feature to retain WAL files needed by replicas
  • Replication set – A collection of tables

Limitations of the pglogical extension

Before considering pglogical, you should be aware of some of the limitations of logical replication:

  • Tables must have a primary key or unique key to replicate UPDATE and DELETE changes.
  • If you can’t add a primary key to a table, you can replicate only the INSERT changes, using the REPLICA IDENTITY feature.
  • TRUNCATE table operations aren’t replicated.
  • DDL changes aren’t replicated automatically.
  • pglogical provides the pglogical.replicate_ddl_command function to allow DDL to run on the provider and subscriber at a consistent point (see the example after this list).
  • The state of sequences added to replication sets is replicated periodically and not in real time.
  • Large objects aren’t replicated. However, bytea data is replicated.
  • Tables must have the same fully qualified name between the publication and subscription.
  • Column data types and column names must match between the publication and subscription.
  • CHECK constraints, NOT NULL constraints, and so on must be the same as or weaker on the subscriber than on the provider.
  • On the target database, unique constraints can’t be DEFERRABLE.
  • Foreign keys constraints aren’t enforced for the replication process.
  • pglogical doesn’t support replication between databases with different encodings. We recommend using UTF-8 encoding in all replicated databases.
  • pglogical can replicate across PostgreSQL major versions, but long-term cross-version replication isn’t considered a design target, even though it often works.
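
Because DDL isn’t replicated automatically, you can use the pglogical.replicate_ddl_command function mentioned above to apply a schema change on the provider and its subscribers at a consistent point. The following is a minimal sketch using psycopg2, with a hypothetical table and a placeholder connection string.

    import psycopg2

    # Run the DDL through pglogical so it is also queued for subscribers of
    # the ddl_sql replication set. Note that objects referenced inside the
    # command must be schema-qualified.
    conn = psycopg2.connect("host=source-host dbname=appdb user=admin")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT pglogical.replicate_ddl_command("
            "'ALTER TABLE public.orders ADD COLUMN notes text', "  # hypothetical table
            "ARRAY['ddl_sql'])"
        )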

Prerequisites

To complete the PostgreSQL upgrade using pglogical, you must complete the following prerequisites:

  1. Create and activate an AWS account.
  2. Make sure you have appropriate privileges to enable the pglogical extension on the RDS for PostgreSQL instance.
  3. Have an RDS for PostgreSQL instance version 9.6.11 or higher.
  4. Enable the pglogical extension on both the provider (source) and subscriber (target) nodes (see the sketch after these prerequisites).
  5. Make sure the tables on the provider and subscriber have the same names, are in the same schema, and have the same columns with the same data types in each column.

Make sure the source database is at version 9.6.11 or higher; in Amazon RDS for PostgreSQL, the pglogical extension isn’t supported on any version below this. If you’re on a lower version, you have to perform a minor version upgrade to 9.6.11 before upgrading the database using the pglogical extension.
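
To satisfy prerequisite 4 on Amazon RDS, pglogical must be preloaded at server start and logical replication must be enabled, which you do through a custom parameter group. The following is a rough Boto3 sketch with placeholder names; both parameters are static, so the change takes effect only after a reboot.

    import boto3

    rds = boto3.client("rds")

    # shared_preload_libraries and rds.logical_replication are static
    # parameters; append pglogical to any libraries you already preload.
    rds.modify_db_parameter_group(
        DBParameterGroupName="custom-postgres-params",  # placeholder parameter group
        Parameters=[
            {
                "ParameterName": "shared_preload_libraries",
                "ParameterValue": "pglogical",
                "ApplyMethod": "pending-reboot",
            },
            {
                "ParameterName": "rds.logical_replication",
                "ParameterValue": "1",
                "ApplyMethod": "pending-reboot",
            },
        ],
    )
    rds.reboot_db_instance(DBInstanceIdentifier="my-postgres-db")  # placeholder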

Steps to upgrade using pglogical

To upgrade using the pglogical extension, complete the following high-level steps; a brief sketch of the core calls follows the list:

  1. Create the target RDS for PostgreSQL database in the desired higher version.
  2. Configure the pglogical extension in Amazon RDS for PostgreSQL in the source and target databases.
  3. Configure the WAL parameters in both the source and target RDS for PostgreSQL databases.
  4. Create the subscriber and provider nodes.
  5. Create a replication set and add the schema or tables on the source database.
  6. Create a subscription on the target database.
  7. Sync the tables to the target database.
  8. Cut over the application to the new target database and stop the old database.
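
The core pglogical calls behind steps 4–6 look roughly like the following psycopg2 sketch; the connection strings, node names, and subscription name are placeholders, and Part 2 walks through the full procedure.

    import psycopg2

    PROVIDER_DSN = "host=source-host dbname=appdb user=replicator"    # placeholder
    SUBSCRIBER_DSN = "host=target-host dbname=appdb user=replicator"  # placeholder

    # Steps 4-5 on the source: create the provider node and add all tables in
    # the public schema to the default replication set.
    with psycopg2.connect(PROVIDER_DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT pglogical.create_node(node_name := 'provider', dsn := %s)",
            (PROVIDER_DSN,),
        )
        cur.execute(
            "SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public'])"
        )

    # Steps 4 and 6 on the target: create the subscriber node and subscribe.
    # create_subscription synchronizes the existing data by default (step 7).
    with psycopg2.connect(SUBSCRIBER_DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT pglogical.create_node(node_name := 'subscriber', dsn := %s)",
            (SUBSCRIBER_DSN,),
        )
        cur.execute(
            "SELECT pglogical.create_subscription("
            "subscription_name := 'upgrade_sub', provider_dsn := %s)",
            (PROVIDER_DSN,),
        )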

The following diagram illustrates this solution.

We highly recommend testing this upgrade process in non-production DB instances before you upgrade your production database.

In this post, we demonstrate this approach in Amazon RDS for PostgreSQL, but you can also use Amazon Aurora PostgreSQL-Compatible Edition.

Benefits of the replication-based approach

A replication set is a mechanism to control which tables in the database are replicated and which actions are replicated.

Each replication set can individually specify whether INSERT, UPDATE, DELETE, and TRUNCATE operations on the set are replicated. A table can be in multiple replication sets, and every subscriber can subscribe to more than one replication set. The resulting set of tables and actions replicated is the union of the sets the table is in. Tables are replicated only after they’re added to a replication set.

The pglogical extension provides the following predefined replication sets:

  • default – Replicates all changes to tables in it
  • default_insert_only – Replicates INSERT operations and is meant for tables that don’t have a primary key
  • ddl_sql – Replicates schema changes specified by pglogical.replicate_ddl_command
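
In addition to the predefined sets, you can create your own replication set and choose which operations it carries. For example, the following sketch creates a hypothetical set that replicates INSERT and UPDATE operations but not DELETE or TRUNCATE; the set name and connection string are illustrative.

    import psycopg2

    # Create a custom replication set on the provider node.
    conn = psycopg2.connect("host=source-host dbname=appdb user=replicator")  # placeholder
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT pglogical.create_replication_set("
            "set_name := 'insert_update_only', "
            "replicate_insert := true, replicate_update := true, "
            "replicate_delete := false, replicate_truncate := false)"
        )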

Considerations

At times, you may be upgrading production databases that have read replicas. In such a scenario, after the upgrade, you have to create new read replica instances on the new database instance; the existing read replicas, along with the old writer instance, become obsolete. Also expect some performance impact on the production writer DB instance and potential lag on the read replicas: the writer already incurs replication overhead to keep the replicas up to date, and pglogical adds more. Test the upgrade in a non-production environment with read replicas and measure the performance impact on the writer DB instance.

In our testing of pglogical replication in the same AWS Region, with a read replica versus without one, we observed about 5% CPU overhead on the source, write latency of 63 milliseconds versus 3 milliseconds, network throughput of 1 MB/second versus 2 MB/second, and queue length of 0.048 versus 0.011.

If you have a use case in which you can’t add a primary key or unique key constraint to a table, you can use the REPLICA IDENTITY feature, which replicates only the INSERT changes. You need to create a replication set of the default_insert_only type and add the tables without a primary key to it, as shown in the example that follows.
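
For example, a hypothetical keyless log table could be added to that set as follows; the connection string and table name are placeholders.

    import psycopg2

    # Add a table without a primary key to the predefined insert-only set;
    # only INSERT operations on it are replicated.
    conn = psycopg2.connect("host=source-host dbname=appdb user=replicator")  # placeholder
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT pglogical.replication_set_add_table("
            "'default_insert_only', 'public.event_log')"  # hypothetical table
        )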

Conclusion

In this post, we shared the pros and cons of various approaches to upgrade an RDS for PostgreSQL database, including high-level steps to implement these methods. In Part 2, we walk through the step-by-step instructions to upgrade an RDS for PostgreSQL database using the pglogical extension with minimum downtime.


About the Author

Arumugam Petchimuthu (aru) is a Senior Solutions Architect at AWS specializing in databases. Aru has 25 years of experience in the information technology industry, holds a master’s degree in computer and software applications, and is a certified professional in relational and NoSQL databases and cloud technologies, including AWS, Oracle, and Cassandra.