AWS Database Blog

Migrating a commercial database to open source with AWS SCT and AWS DMS

You’re moving your applications to the AWS Cloud and you want to migrate from a commercial database engine to an open source database. One thought that may have rightfully crossed your mind is that changing database engines is not a simple task. Rather, it can be a complex, multi-step process that involves pre-migration assessments, converting database schema and code, data migration, functional testing, performance tuning, and many other steps.

However, two fundamental steps in this process that require the most effort are converting the schema and database code objects, and migrating the data itself.

Fortunately, AWS provides you with the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS), which help with each step.

If you’ve never used these products, you may be wondering how they work. Moreover, you may prefer to test them on a sample database to get some experience under your belt before migrating your application databases.

In this post, I introduce you to a series of step-by-step guides for migrating Oracle and Microsoft SQL Server databases to open-source engines like MySQL and PostgreSQL using the AWS SCT and AWS DMS. I also provide an AWS CloudFormation template that you can use to provision the required resources in your own account and follow along to migrate a sample database.

Converting schema and code objects

Converting the database schema and code objects is usually the most time-consuming operation in a heterogeneous database migration. Completing the schema conversion is a major milestone of the migration.

The AWS SCT is an easy-to-use application that you can install on a local computer or an Amazon Elastic Compute Cloud (Amazon EC2) instance. It helps simplify heterogeneous database migrations by examining your source database schema and automatically converting the majority of the database code objects, including views, stored procedures, and functions, to a format compatible with your new target database. Any objects that the AWS SCT can’t convert automatically are marked with detailed information that you can use to convert them manually.

Migrating the data

After you complete the schema conversion, you need to move the data itself. For production databases, you may not be able to afford any downtime during the migration. Moreover, you may want to keep the transactions on the source and target databases in sync until you switch your application to the new target.

AWS DMS helps you migrate the data from the source database to the target database easily and securely. AWS DMS supports data migration to and from most widely used commercial and open-source databases. The source database can be located in your on-premises environment, running on an EC2 instance, or it can be an Amazon Relational Database Service (Amazon RDS) database. The target can be a database in Amazon EC2 or Amazon RDS. Additionally, the source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
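If you prefer to script this setup, the replication instance that runs AWS DMS tasks can be provisioned with the AWS SDK. The following is a minimal boto3 (Python SDK) sketch; the instance identifier, class, and storage values are assumptions for illustration, not values required by this walkthrough:

```python
# Hypothetical settings for a DMS replication instance; adjust the
# identifier, class, storage, and networking to match your environment.
replication_instance_config = {
    "ReplicationInstanceIdentifier": "sample-migration-instance",
    "ReplicationInstanceClass": "dms.c4.xlarge",
    "AllocatedStorage": 100,  # GB
    "PubliclyAccessible": False,
}

def create_replication_instance(config):
    """Create the replication instance that AWS DMS tasks run on."""
    import boto3  # imported here so the config above can be read without boto3 installed
    dms = boto3.client("dms")
    response = dms.create_replication_instance(**config)
    return response["ReplicationInstance"]["ReplicationInstanceArn"]
```

The returned ARN is used later when you create replication tasks against this instance.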

Migration process

To help you get started with the AWS SCT and AWS DMS, I’ve created a series of step-by-step guides that walk you through the entire migration process for Oracle to Amazon Aurora PostgreSQL, and Microsoft SQL Server to Amazon Aurora MySQL. The steps that we perform to migrate these databases are similar:

  1. Install and configure the AWS SCT.
  2. Create the schema in the target database.
  3. Drop foreign keys and secondary indexes on the target database, and disable triggers.
  4. Set up an AWS DMS task to replicate your data—full load and change data capture (CDC).
  5. Stop the task when the full load phase is complete.
  6. Inspect the target database for migrated data.
  7. Re-create foreign keys and secondary indexes in the target database, and enable triggers.
  8. Enable an AWS DMS task for ongoing replication to keep the source and the target database in sync.
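Steps 4 and 8 revolve around the AWS DMS replication task. As a rough boto3 sketch (assuming the source and target endpoints and the replication instance already exist; the task identifier and the HR schema in the table mapping are hypothetical), a combined full-load and CDC task looks like this:

```python
import json

# Selection rule: migrate every table in a hypothetical "HR" schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-hr-schema",
            "object-locator": {"schema-name": "HR", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

def create_full_load_cdc_task(source_arn, target_arn, instance_arn):
    """Create a task that performs a full load, then captures ongoing changes (CDC)."""
    import boto3  # imported lazily so the mapping above can be inspected offline
    dms = boto3.client("dms")
    response = dms.create_replication_task(
        ReplicationTaskIdentifier="sample-full-load-cdc",
        SourceEndpointArn=source_arn,
        TargetEndpointArn=target_arn,
        ReplicationInstanceArn=instance_arn,
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps(table_mappings),
    )
    return response["ReplicationTask"]["ReplicationTaskArn"]
```

Setting `MigrationType` to `"full-load-and-cdc"` is what lets a single task cover both the initial copy and the ongoing replication in step 8.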

Prerequisites

To follow along with the guides, you need to have an AWS account. Create and activate an AWS account if you don’t have one already. You also need an Amazon EC2 key pair in the Region where you deploy the resources. Create an Amazon EC2 key pair if you don’t have one.
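If you don’t already have a key pair, one way to create it is with boto3; the key name and Region below are placeholders for your own values:

```python
def create_key_pair(key_name="migration-sample-key", region="us-east-1"):
    """Create an EC2 key pair and save the private key to a local .pem file."""
    import boto3  # imported lazily so the sketch can be read without boto3 installed
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.create_key_pair(KeyName=key_name)
    pem_path = f"{key_name}.pem"
    with open(pem_path, "w") as f:
        f.write(response["KeyMaterial"])  # private key is only available at creation time
    return pem_path
```

Keep the `.pem` file somewhere safe; EC2 doesn’t store the private key, so it can’t be retrieved again later.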

Creating a stack

We use a CloudFormation template to set up the environment, including the source and target databases, networking configurations, and the tools we use. This way, we can focus on the steps dedicated to the migration.

For instructions on launching the stack and configuring your environment, see Environment Configuration on GitHub.
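If you’d rather launch the stack programmatically than through the console, a boto3 sketch follows. The `KeyName` parameter and stack name are assumptions for illustration; the template on GitHub defines the actual parameters it accepts:

```python
def launch_migration_stack(template_url, key_name, stack_name="dms-migration-sample"):
    """Create the CloudFormation stack and wait until creation completes."""
    import boto3  # imported lazily so the sketch can be read without boto3 installed
    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,
        # "KeyName" is a hypothetical parameter name; check the template
        # on GitHub for the parameters it actually declares.
        Parameters=[{"ParameterKey": "KeyName", "ParameterValue": key_name}],
        Capabilities=["CAPABILITY_IAM"],  # required if the template creates IAM resources
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return stack_name
```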

Keep in mind that some resources incur charges for as long as they’re running. At the end of the migration, delete the AWS DMS resources and tear down the CloudFormation stack.

To get an idea of the scale of the cost, I’ve provided a small pricing example (accurate as of January 2021) that assumes you run the stack for 5 hours (sufficient to complete the walkthrough) in the us-east-1 (N. Virginia) Region.

| Service | Amount consumed | Pricing structure | Total cost |
| --- | --- | --- | --- |
| Amazon EC2 | 1 x m5a.xlarge Windows with SQL Server Standard (License Included) for 5 hours | $0.836 per hour | $4.18 |
| Amazon EBS | 1 x 250 GB with 2,000 IOPS (io1) for 5 hours | $0.125 per GB-month of provisioned storage and $0.065 per provisioned IOPS-month | $1.12 |
| Amazon Aurora | 1 x db.r5.xlarge with 20 GB of storage and an estimated 1 million I/O requests for 5 hours | $0.58 per hour for an on-demand instance, $0.10 per GB-month of storage, and $0.20 per 1 million requests | $4.92 |
| Amazon RDS (Oracle migration only) | 1 x db.r5.xlarge (BYOL) with 100 GB of GP2 storage for 5 hours | $0.466 per hour for an on-demand instance and $0.115 per GB-month of storage | $2.41 |
| AWS DMS | 1 x c4.xlarge with 100 GB of storage for 5 hours | $0.238 per hour | $1.19 |
| AWS SCT | N/A | Free | $0 |
| Total | | | $13.82 |

Step-by-step guides

To keep this post short, I host the instructions for the migration process on GitHub. Use the following links to find the appropriate guide for the database engine that you want to migrate:

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. It combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases.

Although I use Aurora as my target database in this post, you can follow similar instructions to migrate your source database to a MySQL or a PostgreSQL database on Amazon RDS, or a database that you manage on an EC2 instance.

Deleting the resources

When your migration is complete, you need to delete the AWS DMS resources and the CloudFormation stack to stop incurring charges. For instructions, see Environment Cleanup on GitHub.
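As a sketch of that cleanup with boto3 (the ARNs and stack name are placeholders for your own values):

```python
def delete_migration_resources(task_arn, instance_arn, stack_name="dms-migration-sample"):
    """Delete the DMS task and replication instance, then tear down the stack."""
    import boto3  # imported lazily so the sketch can be read without boto3 installed
    dms = boto3.client("dms")
    dms.delete_replication_task(ReplicationTaskArn=task_arn)
    # The replication instance can't be deleted until its tasks are gone.
    dms.get_waiter("replication_task_deleted").wait(
        Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
    )
    dms.delete_replication_instance(ReplicationInstanceArn=instance_arn)
    boto3.client("cloudformation").delete_stack(StackName=stack_name)
```

Deleting the stack removes the source and target databases it created, so make sure you’ve captured anything you still need first.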

Conclusion

Database migration can be a difficult process, but products like the AWS SCT and AWS DMS mitigate some of that complexity. The step-by-step guides that I provide in this post help you get hands-on experience migrating Oracle and Microsoft SQL Server to Aurora PostgreSQL and Aurora MySQL using the AWS SCT and AWS DMS. You can follow similar steps to migrate to and from other supported database engines.

Further reading

I recommend that you look through the following resources as you plan your migration:

Thanks for reading and have a successful database migration!


About the Author

Hooman Hamilton is a Solutions Architect with Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud.