
Red Hat OpenShift Service on AWS: architecture and networking

Updated on August 3rd to include ROSA private clusters with PrivateLink.

It’s not new to see customers migrate OpenShift workloads to AWS to take advantage of the cloud, as well as the portfolio of AWS native services that complement the application workloads running in OpenShift. Recently, however, there has been a notable shift of customers moving to managed services, and we are starting to see customers migrate from self-managed OpenShift Container Platform (OCP) to the recently generally available Red Hat OpenShift Service on AWS (ROSA) in order to take advantage of a managed OpenShift cluster and focus resources where their business needs them.

During these migrations, I find there is often discussion with application platform, infrastructure, cloud, networking, and security teams around what specific resources are created when provisioning the ROSA service and how these fit into any existing architectures the customer may have.

In this post, I explore the AWS and OpenShift resources and components, focusing on where these are placed, the implementation differences when deploying single vs. multi-Availability Zone clusters, and the differences between public and private clusters. I also cover networking considerations, specifically VPC address spaces, and whether you should let the ROSA deployment process build out the VPC for you or create the VPC yourself and deploy into an existing VPC.

Common deployment components:

All ROSA implementations will have three Master nodes in order to cater for cluster quorum and to ensure proper failover and resilience of OpenShift, and at least two infrastructure nodes to ensure resilience of the OpenShift router layer, which provides end user application access. There is also a collection of AWS Elastic Load Balancers: some of these load balancers provide end user access to the application workloads running on OpenShift via the OpenShift router layer, while others expose endpoints used for cluster administration and management by the Red Hat SRE teams.

The OpenShift Master nodes host the API endpoint for cluster administration and management, the controllers, and etcd.
The OpenShift infrastructure nodes host the built-in OpenShift container registry, the OpenShift router layer, and monitoring.
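
Once a cluster is up, a quick way to see this split of roles is to list nodes by their role labels using the OpenShift CLI (a minimal sketch; it assumes you are already logged in to the cluster):

# List the nodes that carry each OpenShift role label
oc get nodes -l node-role.kubernetes.io/master
oc get nodes -l node-role.kubernetes.io/infra
oc get nodes -l node-role.kubernetes.io/worker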

ROSA clusters require AWS VPC subnets in each Availability Zone (AZ). For single-AZ implementations, two subnets are required (one public, one private). For multi-AZ implementations, six subnets are needed (one public and one private per AZ). For private clusters with PrivateLink, three private subnets are required.

Default deployment (single AZ)

rosa create cluster <cluster name>

The default cluster configuration will deploy a basic ROSA cluster into a single AZ. This creates a new VPC with two subnets (one public and one private) within the same AZ. The OpenShift control plane and data plane, i.e., the Master, infrastructure, and Worker nodes, are all placed into the private subnet in that AZ.

This is the simplest implementation and a good way to start playing with ROSA from a developer point of view. This implementation is not recommended for scale, resilience, or production.
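
Provisioning continues in the background after the command returns. A hedged example of checking on progress with the ROSA CLI (the cluster name is a placeholder):

# Follow the installation logs and check the overall cluster state
rosa logs install --cluster <cluster name> --watch
rosa describe cluster --cluster <cluster name>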


Multi-AZ cluster

rosa create cluster
or
rosa create cluster --interactive
or
rosa create cluster --cluster-name testcluster --multi-az --region us-west-2 --version 4.7.3 --enable-autoscaling --min-replicas 3 --max-replicas 3 --compute-machine-type m5.xlarge --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23

The multi-AZ implementation will make use of three Availability Zones, with a public and private subnet in each AZ (a total of six subnets). If we’re not deploying into an existing VPC, the ROSA provisioning process will create a VPC that meets these requirements.

Multi-AZ implementations will deploy three Master nodes and three infrastructure nodes spread across three AZs. This takes advantage of the resilience constructs of the multi-AZ VPC design and combines it with the resilience model of OpenShift.

Assuming that application workloads will also be running in all three AZs for resilience, this translates to a minimum of nine EC2 instances running within the customer account: three Master nodes, three infrastructure nodes, and, assuming at least one Worker in each AZ, three Workers. This is different from the other AWS container orchestration options such as Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS), where the control plane does not exist within the customer account.
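
To confirm the spread once the cluster is up, you can list the nodes with their zone label (a minimal sketch; it assumes the standard Kubernetes topology label, which OpenShift applies to nodes):

# Show which Availability Zone each node landed in
oc get nodes -L topology.kubernetes.io/zone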

Subnet sizing and address spaces:

When deploying ROSA there are three IP address CIDRs that warrant discussion:

Machine CIDR 10.0.0.0/16
This is the IP address space for the AWS VPC, either existing or to be created. If deploying into an existing VPC, ensure that this is adjusted to reflect the CIDR of the VPC being deployed into. If not deploying into an existing VPC, the six subnets created will be the same size: equal divisions of the VPC or Machine CIDR. It should be noted that there is not a large number of resources within the public subnets, mainly load balancers and NAT gateway interfaces.
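
If you want to see exactly how the provisioning process carved up the Machine CIDR, one way is to list the subnets in the cluster VPC with the AWS CLI (a sketch; the VPC ID is a placeholder):

# List the subnets in the cluster VPC with their AZ, CIDR block, and whether they auto-assign public IPs
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<vpc-id>" \
  --query "Subnets[].[AvailabilityZone,CidrBlock,MapPublicIpOnLaunch,SubnetId]" \
  --output table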

Service CIDR and POD CIDR
The Service and Pod CIDRs are private address spaces internal to OpenShift and are used by the SDN. You can deploy multiple ROSA clusters and reuse these address spaces, as they sit behind the routing layer within OpenShift and will not interfere with the same address space on other clusters. This is similar to private IP use in residential homes: every home Wi-Fi network can reuse the same 10.0.0.0/16 space without conflict.

It should be noted that if the application workloads need to reach data sources and other services outside of OpenShift, the target address space should not overlap with these address spaces; an overlap will result in routing issues internal to OpenShift. For example, if an application workload running in ROSA needs to access a database running in an on-premises location, the on-premises address space should not overlap with the Service or Pod CIDRs.

Host prefix 23 – 26
The Host prefix has nothing to do with the AWS VPC. It takes the Pod CIDR above and defines how it is divided across all of the underlying container hosts, or Worker nodes. This is a consideration linked to how many Worker nodes you run and how large their instance types are.
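
As a rough worked example using the values from the example commands in this post: a Pod CIDR of 10.128.0.0/14 with a host prefix of 23 hands each Worker node a /23 slice, which is 2^(32-23) = 512 pod addresses per node, and the /14 holds 2^(23-14) = 512 such slices, which caps how many nodes the SDN can address. A larger host prefix value such as 26 gives each node only 64 pod addresses but allows up to 4,096 nodes from the same Pod CIDR.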

Here is a simple web app that will allow you to explore the impact of adjusting the sizing of these network CIDRs:
https://example-wgordon.apps.osd4-demo.u6k6.p1.openshiftapps.com/

Deploying ROSA into an existing VPC

Customers looking for more granular control of subnet address space sizing should consider creating the VPC themselves and then deploying ROSA into that existing VPC. Deploying into an existing VPC may also be ideal for customers with business unit segregation, where the platform owners responsible for OpenShift are a different team from the infrastructure or cloud team that owns the VPC.

When deploying ROSA into an existing VPC, the installer will prompt for the subnets to install into. The installer requires six subnets (three public and three private) and, at this stage, simply allows you to select subnets from a list of subnet IDs, so it is helpful to document the subnet IDs you intend to deploy into.
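
A hedged example of gathering those subnet IDs ahead of time with the AWS CLI (the VPC ID is a placeholder):

# Produce a comma-separated list of subnet IDs, ready to paste into the installer or --subnet-ids
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<vpc-id>" \
  --query "Subnets[].SubnetId" \
  --output text | tr '\t' ','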

Public, private, and PrivateLink ROSA clusters

There are three implementations to compare: public ROSA clusters, private ROSA clusters, and private clusters with AWS PrivateLink. Public and private refer to where the application workloads running on OpenShift will be accessible from.

Selecting a public cluster will create an internet-facing Classic Load Balancer, which provides access to ports 80 and 443 and can be accessed from the public internet as well as from within the VPC or via peering, AWS Direct Connect, or transit gateway.

Selecting a private cluster will create an internal Classic Load Balancer, which provides access to ports 80 and 443 and can be accessed from within the VPC or via peering, AWS Direct Connect, or transit gateway.

This internal load balancer has the infrastructure nodes as targets and forwards to the OpenShift router layer. There is no public or internet-facing load balancer, so application workloads cannot be accessed from the internet.

Private clusters will still require public subnets, which in turn require an internet gateway (IGW), a public route table, and a route to the internet via the IGW. This is required for the provisioning process to create the public-facing AWS Network Load Balancers that provide access to the cluster for administration and management by Red Hat SRE.

The only difference between selecting public vs. private is that the Classic Load Balancer for applications is internet facing for public clusters and internal for private clusters.
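
One way to see this difference for yourself is to list the Classic Load Balancers in the cluster's account and Region with their scheme (a sketch with the AWS CLI):

# List Classic Load Balancers and whether each is internet-facing or internal
aws elb describe-load-balancers \
  --query "LoadBalancerDescriptions[].[LoadBalancerName,Scheme,DNSName]" \
  --output table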

In June 2021, ROSA private clusters with AWS PrivateLink were released.

ROSA private clusters with PrivateLink are completely private. Red Hat SRE teams will make use of PrivateLink endpoints to access the cluster for management; no public subnets, route tables, or IGW are required.

ROSA private cluster


ROSA public cluster


ROSA private cluster with PrivateLink

rosa create cluster --private-link --cluster-name rosaprivatelink --multi-az --region us-west-2 --version 4.8.2 --enable-autoscaling --min-replicas 3 --max-replicas 3 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 --subnet-ids subnet-0a34c58efd2687955,subnet-0f8d8e6e213c5ba15,subnet-04e7a92df201ee976

ROSA PrivateLink clusters can only be deployed into an existing VPC.

Typically, ROSA private clusters with PrivateLink will be implemented as part of a larger transit gateway design in which the VPC for ROSA does not have internet access. Traffic will flow from the ROSA VPC to either an on-premises location or another VPC or AWS account that provides a single controlled point of egress.
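
If you want to confirm what was created for the SRE access path, a hedged sketch of listing the interface endpoints in the cluster VPC (the VPC ID is a placeholder, and this assumes the PrivateLink endpoints land in the cluster VPC in your account):

# List the VPC endpoints in the cluster VPC
aws ec2 describe-vpc-endpoints \
  --filters "Name=vpc-id,Values=<vpc-id>" \
  --query "VpcEndpoints[].[VpcEndpointId,VpcEndpointType,ServiceName,State]" \
  --output table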

Connection flow:

Customer/application consumer connection flow:
Customers connecting to application workloads running on OpenShift do so over port 80 or 443. Looking at a public ROSA cluster, there is both an internal and an internet-facing Classic Load Balancer exposing these applications. Client connections from the internet will resolve to the public-facing Classic Load Balancer, which forwards connections to the OpenShift routing layer running on the infrastructure nodes. Connections coming from within the same VPC, or via VPN, AWS Direct Connect, or transit gateway, will come via the internal Classic Load Balancer, which forwards connections to the OpenShift routing layer running on the infrastructure nodes.
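
From the consumer's side this is plain HTTP(S) against the route hostname the router publishes; a minimal sketch using the oc CLI covered below (the route and project names are hypothetical):

# Look up the hostname the OpenShift router publishes for an application route, then call it over 443
APP_HOST=$(oc get route my-app -n my-project -o jsonpath='{.spec.host}')
curl -s https://$APP_HOST/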

Administrative or SRE connection flow:
Developers, administrators, and SRE teams follow a different path. These connections make use of port 6443 and connect to a Network Load Balancer, which forwards to the OpenShift API or OpenShift web console. This could be users and SRE members accessing the OpenShift web console as a graphical means of operational administration, or DevOps solutions such as automated pipeline, build, and deploy processes deploying application workloads onto the OpenShift cluster. If these connections come from within the VPC, or via AWS Direct Connect, peering, VPN, or transit gateway, they hit the internal Network Load Balancer and are forwarded to the API endpoint on one of the OpenShift Master nodes. Connections coming from the internet hit the internet-facing Network Load Balancer and are then forwarded to one of the Master nodes.

ROSA is still OpenShift, so the OpenShift CLI "oc" is still used for much of the administration and automation described above. The OpenShift CLI is an extension of the Kubernetes kubectl and includes OpenShift-specific abstractions such as Routes and Projects.
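
A hedged sketch of that administrative path end to end with the CLI (the cluster name, API URL, and application names are placeholders; rosa create admin prints the exact login command for your cluster):

# Create a cluster-admin user; the output includes an oc login command for the API endpoint on port 6443
rosa create admin --cluster <cluster name>
oc login https://api.<cluster-domain>:6443 --username cluster-admin --password <password from previous step>
# Projects wrap Kubernetes namespaces; Routes publish services through the OpenShift router layer
oc new-project demo-project
oc new-app quay.io/example/my-app
oc expose service my-app
oc get route my-app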

Load balancers for AWS services:

Customers wanting to expose the workloads running on OpenShift to other AWS accounts and VPCs within their organization via AWS PrivateLink will need to replace the Classic Load Balancer with a Network Load Balancer. This is not supported via the ROSA CLI at this stage and will require the manual creation of the NLB and changes to the OpenShift cluster ingress.
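
A heavily hedged sketch of the AWS side of that manual work (all names, subnets, IPs, and ARNs are placeholders, and the corresponding OpenShift ingress changes are not shown):

# Create an internal Network Load Balancer in the cluster's private subnets
aws elbv2 create-load-balancer --name rosa-apps-nlb --type network --scheme internal \
  --subnets <private-subnet-1> <private-subnet-2> <private-subnet-3>
# Create a TCP target group for the OpenShift router and register the infrastructure node IPs
aws elbv2 create-target-group --name rosa-router-443 --protocol TCP --port 443 \
  --vpc-id <vpc-id> --target-type ip
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=<infra-node-ip>
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 443 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
# Publish the NLB as a PrivateLink endpoint service for other accounts and VPCs to consume
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns <nlb-arn> --acceptance-required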

This does not hinder admin or SRE access to the cluster for administration, nor does it hinder deletion of ROSA clusters via the ROSA CLI.

Similarly, customers looking to make use of AWS Web Application Firewall as a security solution in combination with OpenShift application workloads will need to implement an additional Application Load Balancer in front of the Classic Load Balancer as a target for AWS WAF.
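
A hedged sketch of that pattern (subnets, security groups, and ARNs are placeholders; wiring the ALB's target group back to the existing application entry point is omitted):

# Create an internet-facing Application Load Balancer to sit in front of the applications
aws elbv2 create-load-balancer --name rosa-waf-alb --type application --scheme internet-facing \
  --subnets <public-subnet-1> <public-subnet-2> <public-subnet-3> --security-groups <sg-id>
# Associate an existing AWS WAF web ACL with that ALB
aws wafv2 associate-web-acl --web-acl-arn <web-acl-arn> --resource-arn <alb-arn>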

Shared VPC

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html
At this stage, ROSA does not support deployment into a shared VPC. ROSA and the OpenShift OCP installer will provide support for shared VPCs in the future.

Multi Region

The ROSA provisioning process, like most AWS products and services, caters for multi-AZ support within a single AWS Region. Customers seeking multi-Region availability will need to deploy separate clusters in each Region. CI/CD pipelines and automation will need to be updated to deploy to the respective clusters, and DNS name resolution will be used to resolve application URLs to the respective clusters and control failover. It is recommended that Amazon Route 53 form part of this design.
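
A hedged sketch of what the Route 53 piece could look like, using failover routing with a health check (the hosted zone, record name, health check, and cluster application hostnames are placeholders; a matching SECONDARY record would point at the second Region's cluster):

# failover.json: PRIMARY record resolving the application URL to the first Region's cluster
cat > failover.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "CNAME",
      "TTL": 60,
      "SetIdentifier": "primary-us-west-2",
      "Failover": "PRIMARY",
      "HealthCheckId": "<health-check-id>",
      "ResourceRecords": [{ "Value": "my-app-my-project.apps.<cluster1-domain>" }]
    }
  }]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id <hosted-zone-id> --change-batch file://failover.json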

Conclusion:

This should provide you with better insight into the OpenShift and AWS resources created and how they relate to each other, as well as to the environments being deployed into, allowing infrastructure and security teams to accelerate assessment and deployment of Red Hat OpenShift Service on AWS.

Ryan Niksch

Ryan Niksch is a Partner Solutions Architect focusing on application platforms, hybrid application solutions, and modernization. Ryan has worn many hats in his life and has a passion for tinkering and a desire to leave everything he touches a little better than when he found it.