
Getting started with Consul service mesh on Amazon ECS

We recently announced the general availability of the Amazon Elastic Container Service (Amazon ECS) service extension for Consul service mesh in the AWS Cloud Development Kit (AWS CDK). This new integration makes it easier for customers to use Consul as a service mesh on Amazon ECS. In this blog post, we show you how to integrate Consul service mesh on Amazon ECS using the AWS CDK.

Introduction to Consul service mesh

Consul by HashiCorp is a multi-platform service mesh that provides service discovery, secure communication, layer 7 traffic management, and real-time health monitoring. Customers have been running Consul as a service mesh on Amazon ECS for years, but it was the customer’s responsibility to add the Consul client and Envoy proxy as sidecars in the ECS task definition. To secure Consul traffic with encryption, you need to configure and distribute the certificate and token to all Consul clients and servers. To ensure traffic is routed only to healthy services, you configure Consul checks. To configure Consul Access Control Lists (ACLs), you provision and distribute a set of tokens. We hear that customers love the benefits of running Consul on Amazon ECS and wish to simplify the integration. HashiCorp and AWS collaborated to launch Consul service mesh on Amazon ECS.

Consul service mesh on Amazon ECS provides integrated features such as secure deployment and health checks, along with Consul’s existing features. It supports both the Amazon ECS on AWS Fargate and Amazon ECS on Amazon Elastic Compute Cloud (Amazon EC2) launch types.

To simplify the process of joining an Amazon ECS service to the Consul service mesh, you can choose from two available deployment mechanisms. If you are familiar with HashiCorp Terraform, check the consul-ecs module in the Terraform registry. The mesh-task module can be adapted to match your existing Terraform template for ecs_task_definition. If you are developing with the CDK, the Amazon ECS service extension for Consul can help you integrate Consul on Amazon ECS.

Amazon ECS service extension for Consul

The Amazon ECS service extension for Consul simplifies the steps in the CDK to add the Consul service mesh sidecars to your existing task definition. The extension natively supports ECS task health checks, so you don’t have to build additional health checks. The Consul client gracefully starts accepting traffic once the ECS task is healthy; conversely, it handles graceful shutdown and stops receiving traffic during ECS task shutdown. The Envoy proxy is ready before the application container starts so that outgoing traffic can reach the upstream. To encrypt network communication, you provide the TLS certificate public key and gossip encryption key, which are passed via the CDK construct to the Consul client.

To demonstrate the extension functionality, we will deploy a sample microservices application and join it to the Consul service mesh. Before we start, make sure you have completed the following prerequisites. First, configure the AWS CLI with the credentials of your target AWS account and Region. Second, install Node.js version 10.13 or higher. Third, install the AWS CDK Toolkit from your terminal using the command npm install -g aws-cdk.

After you have completed the prerequisites, let’s create a working directory, initialize the CDK project, and install the npm packages. The ECS Consul mesh extension is available from npm under the package name @aws-quickstart/ecs-consul-mesh-extension. Run the commands below from your terminal, replacing the placeholders $ACCOUNT and $REGION with your target AWS account ID and Region.

mkdir app && cd app
cdk init app --language=typescript
export ACCOUNT=<enter your AWS account id>
export REGION=<enter your AWS region>
cdk bootstrap aws://$ACCOUNT/$REGION
npm install @aws-cdk/core @aws-cdk/aws-ec2 @aws-cdk/aws-ecs @aws-cdk/aws-iam @aws-cdk/aws-secretsmanager @aws-cdk-containers/ecs-service-extensions @aws-quickstart/ecs-consul-mesh-extension
npm update

We are going to create stacks for the ECS environment, the Consul server, and the sample microservices application. We need to declare shared properties that are passed between the stacks, such as the VPC, security groups, ECS cluster, and AWS Secrets Manager secrets. The secrets are used to store the Consul certificate authority (CA) public key and the gossip encryption key. Create a new file called lib/shared-props.ts with the following content.

import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as extensions from '@aws-cdk-containers/ecs-service-extensions';
import * as secretsmanager from '@aws-cdk/aws-secretsmanager';

export interface EnvironmentInputProps extends cdk.StackProps {
  envName: string;
  allowedIpCidr: string;
}

export interface EnvironmentOutputProps extends cdk.StackProps {
  envName: string;
  vpc: ec2.Vpc;
  serverSecurityGroup: ec2.SecurityGroup;
  clientSecurityGroup: ec2.SecurityGroup;
  ecsEnvironment: extensions.Environment;
}

export interface ServerInputProps extends cdk.StackProps {
  envProps: EnvironmentOutputProps,
  keyName: string,
}

export interface ServerOutputProps extends cdk.StackProps {
  serverTag: {[key: string]: string};
  serverDataCenter: string;
  agentCASecret: secretsmanager.ISecret;
  gossipKeySecret: secretsmanager.ISecret;
}

Now, we are going to build a new VPC, ECS cluster, and security groups for the Consul server and client. Consul clients need to communicate with each other via the gossip protocol. To accommodate that, we add security group rules for TCP and UDP port 8301. Copy the following code into a new file called lib/environment.ts.

import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as ecs from '@aws-cdk/aws-ecs';
import * as extensions from '@aws-cdk-containers/ecs-service-extensions';
import { EnvironmentInputProps, EnvironmentOutputProps } from './shared-props';

export class Environment extends cdk.Stack {
  public readonly props: EnvironmentOutputProps;

  constructor(scope: cdk.Construct, id: string, inputProps: EnvironmentInputProps) {
    super(scope, id, inputProps);

    const vpc = new ec2.Vpc(this, 'ConsulVPC', {
      subnetConfiguration: [
        {
          cidrMask: 24,
          name: 'PublicSubnet',
          subnetType: ec2.SubnetType.PUBLIC,
        },
        {
          cidrMask: 24,
          name: 'PrivateSubnet',
          subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
        }]
    });

    const serverSecurityGroup = new ec2.SecurityGroup(this, 'ConsulServerSecurityGroup', {
      vpc,
      description: 'Access to the ECS hosts that run containers',
    });

    serverSecurityGroup.addIngressRule(
      ec2.Peer.ipv4(inputProps.allowedIpCidr),
      ec2.Port.tcp(22),
      'Allow incoming connections for SSH over IPv4');

    const clientSecurityGroup = new ec2.SecurityGroup(this, 'ConsulClientSecurityGroup', {
      vpc,
    });

    clientSecurityGroup.addIngressRule(
      clientSecurityGroup,
      ec2.Port.tcp(8301),
      'allow all the clients in the mesh talk to each other'
    );
    
    clientSecurityGroup.addIngressRule(
      clientSecurityGroup,
      ec2.Port.udp(8301),
      'allow all the clients in the mesh talk to each other'
    );

    const ecsCluster = new ecs.Cluster(this, "ConsulMicroservicesCluster", {
      vpc: vpc,
      containerInsights: true,
    });

    const ecsEnvironment = new extensions.Environment(scope, 'ConsulECSEnvironment', {
      vpc,
      cluster: ecsCluster,
    });

    this.props = {
      envName: inputProps.envName,
      vpc,
      serverSecurityGroup,
      clientSecurityGroup,
      ecsEnvironment,
    };
  }
}

Next, we are going to create a single Consul server in the same VPC as the ECS cluster. The Consul server is configured with TLS and gossip encryption enabled. Both the TLS CA public key and the gossip encryption key are stored in Secrets Manager secrets. To configure this automatically, we bootstrap the process during the server launch. Create a new file called lib/user-data.txt with the following content.

#Utility
sudo yum install jq unzip wget docker -y
usermod -a -G docker ec2-user
sudo service docker start
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
wget https://releases.hashicorp.com/consul/1.10.4/consul_1.10.4_linux_amd64.zip
unzip consul_1.10.4_linux_amd64.zip

EC2_INSTANCE_IP_ADDRESS=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)
EC2_INSTANCE_ID=$(curl -s 169.254.169.254/latest/meta-data/instance-id)
AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')

mkdir -p /opt/consul/data
mkdir -p /opt/consul/config

#Consul initial setup
cat << EOF > /opt/consul/config/consul-server.json
{
  "advertise_addr": "${EC2_INSTANCE_IP_ADDRESS}",
  "client_addr": "0.0.0.0",
  "connect": {
    "enabled": true
  }
}
EOF

docker run -d --net=host -p 8300:8300 -p 8301:8301 -p 8301:8301/udp -p 8302:8302 \
-p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 53:53/udp \
-v /opt/consul/data:/consul/data -v /opt/consul/config:/consul/config \
-v /var/run/docker.sock:/var/run/docker.sock \
-h $EC2_INSTANCE_ID --name consul-server -e CONSUL_ALLOW_PRIVILEGED_PORTS=1 \
-l service_name=consul-server public.ecr.aws/hashicorp/consul:1.10.4 agent -server \
-bootstrap-expect 1 -ui -config-file /consul/config/consul-server.json

#Generate Consul CA
./consul tls ca create
aws secretsmanager update-secret --secret-id $CONSUL_CA_SECRET_ARN \
--secret-string file://consul-agent-ca.pem \
--region $AWS_REGION

#Generate Server certs
./consul tls cert create -server -dc dc1
sudo mkdir /opt/consul/certs
sudo cp consul-agent-ca.pem /opt/consul/certs
sudo cp dc1-server-consul-0-key.pem /opt/consul/certs
sudo cp dc1-server-consul-0.pem /opt/consul/certs
sudo tee /opt/consul/config/tls.json > /dev/null << EOF
{
  "ports": { "https": 8501 },
  "verify_incoming_rpc": true,
  "verify_outgoing": true,
  "verify_server_hostname": true,
  "ca_file": "/consul/certs/consul-agent-ca.pem",
  "cert_file": "/consul/certs/dc1-server-consul-0.pem",
  "key_file": "/consul/certs/dc1-server-consul-0-key.pem",
  "auto_encrypt": { "allow_tls": true }
}
EOF

#Generate gossip
./consul keygen > consul-agent-gossip.txt
aws secretsmanager update-secret --secret-id $CONSUL_GOSSIP_SECRET_ARN \
--secret-string file://consul-agent-gossip.txt \
--region $AWS_REGION

GOSSIP_SECRET=$(cat consul-agent-gossip.txt)
sudo tee /opt/consul/config/consul-server.json > /dev/null << EOF
{
  "advertise_addr": "$EC2_INSTANCE_IP_ADDRESS",
  "client_addr": "0.0.0.0",
  "connect": {
    "enabled": true
  },
  "encrypt": "$GOSSIP_SECRET"
}
EOF

#Restart Consul
docker stop consul-server
docker rm consul-server
EC2_INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
docker run -d --net=host -p 8300:8300 -p 8301:8301 -p 8301:8301/udp -p 8302:8302 \
-p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 53:53/udp \
-v /opt/consul/data:/consul/data \
-v /opt/consul/config:/consul/config \
-v /opt/consul/certs:/consul/certs \
-v /var/run/docker.sock:/var/run/docker.sock \
-h $EC2_INSTANCE_ID --name consul-server -e CONSUL_ALLOW_PRIVILEGED_PORTS=1 \
-l service_name=consul-server public.ecr.aws/hashicorp/consul:1.10.4 agent -server \
-bootstrap-expect 1 -ui -config-file /consul/config/consul-server.json

Create a new file called lib/consul-server.ts to define the Consul server configuration. In this example, the Consul server uses the default data center value of dc1. The retry join configuration is set with the Consul server Name tag value, declared by tagValue. The Consul agent will use these references to join the Consul service mesh. Copy the sample code below into lib/consul-server.ts.

import * as fs from 'fs';
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as iam from '@aws-cdk/aws-iam';
import * as secretsmanager from '@aws-cdk/aws-secretsmanager';
import { ServerInputProps, ServerOutputProps } from './shared-props';

export class ConsulServer extends cdk.Stack {
  public readonly props: ServerOutputProps;

  constructor(scope: cdk.Construct, id: string, inputProps: ServerInputProps) {
    super(scope, id, inputProps);

    const ami = new ec2.AmazonLinuxImage({
      generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
    });

    const agentCASecret = new secretsmanager.Secret(this, 'agentCASecret', {
      description: 'Consul TLS encryption CA public key'
    });

    const gossipKeySecret = new secretsmanager.Secret(this, 'gossipKeySecret', {
      description: 'Consul gossip encryption key'
    });

    // Role to allow Consul server to write to secrets manager
    const role = new iam.Role(this, 'ConsulSecretManagerRole', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
    });
    role.addToPolicy(new iam.PolicyStatement({
      actions: ['secretsmanager:UpdateSecret'],
      resources: [agentCASecret.secretArn, gossipKeySecret.secretArn],
    }));

    const userData = ec2.UserData.forLinux();
    const userDataScript = fs.readFileSync('./lib/user-data.txt', 'utf8');
    const consulInstanceName = 'ConsulInstance';

    userData.addCommands('export CONSUL_CA_SECRET_ARN='+ agentCASecret.secretArn)
    userData.addCommands('export CONSUL_GOSSIP_SECRET_ARN='+ gossipKeySecret.secretArn)
    userData.addCommands(userDataScript);
    userData.addCommands(
    `# Notify CloudFormation that the instance is up and ready`,
    `yum install -y aws-cfn-bootstrap`,
    `/opt/aws/bin/cfn-signal -e $? --stack ${cdk.Stack.of(this).stackName} --resource ${consulInstanceName} --region ${cdk.Stack.of(this).region}`);

    const vpc = inputProps.envProps.vpc;

    // This setup is just for a test environment
    const consulServer = new ec2.Instance(this, consulInstanceName, {
      vpc: vpc,
      vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC},
      securityGroup: inputProps.envProps.serverSecurityGroup,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3,ec2.InstanceSize.LARGE),
      machineImage: ami,
      keyName: inputProps.keyName,
      role: role,
      userData: userData,
      resourceSignalTimeout: cdk.Duration.minutes(5),
      blockDevices: [{
        deviceName: '/dev/xvda',
        volume: ec2.BlockDeviceVolume.ebs(10, {
          encrypted: true,
        }),
      }],
    });
    var cfnInstance = consulServer.node.defaultChild as ec2.CfnInstance
    cfnInstance.overrideLogicalId(consulInstanceName);

    const serverDataCenter = 'dc1';
    const tagName = 'Name'
    const tagValue = inputProps.envProps.envName + '-consul-server';
    cdk.Tags.of(consulServer).add(tagName, tagValue);
    const serverTag = { [tagName]: tagValue };

    new cdk.CfnOutput(this, 'ConsulSshTunnel', {
      value: `ssh -i "~/.ssh/`+ inputProps.keyName + `.pem" ` +
       `-L 127.0.0.1:8500:` + consulServer.instancePublicDnsName + `:8500 ` +
       `ec2-user@` + consulServer.instancePublicDnsName,
      description: 'Command to run to open a local SSH tunnel to view the Consul dashboard',
    });

    this.props = {
      serverTag,
      serverDataCenter,
      agentCASecret,
      gossipKeySecret
    };
  }
}

In this example, we set up three services that join the Consul service mesh. The greeter service acts as the front end and sends requests to the name and greeting services. Create a new file called lib/microservices.ts and copy the following code.

import * as path from 'path';
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as ecs from '@aws-cdk/aws-ecs';
import * as consul_ecs from '@aws-quickstart/ecs-consul-mesh-extension';
import * as ecs_extensions from '@aws-cdk-containers/ecs-service-extensions';
import { EnvironmentOutputProps, ServerOutputProps } from './shared-props';

export class Microservices extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string,
    envProps:EnvironmentOutputProps, serverProps: ServerOutputProps, props?: cdk.StackProps) {
      super(scope, id, props);

      const consulServerSecurityGroup = ec2.SecurityGroup.fromSecurityGroupId(this, 'ImportedServerSG', envProps.serverSecurityGroup.securityGroupId);
      const consulClientSecurityGroup = ec2.SecurityGroup.fromSecurityGroupId(this, 'ImportedClientSG', envProps.clientSecurityGroup.securityGroupId);

      // Consul Client Base Configuration
      const retryJoin = new consul_ecs.RetryJoin({
        region: cdk.Stack.of(this).region,
        tagName: Object.keys(serverProps.serverTag)[0],
        tagValue: Object.values(serverProps.serverTag)[0]});
      const baseProps = {
        retryJoin,
        consulClientSecurityGroup: consulClientSecurityGroup,
        consulServerSecurityGroup: consulServerSecurityGroup,
        consulCACert: serverProps.agentCASecret,
        gossipEncryptKey: serverProps.gossipKeySecret,
        tls: true,
        consulDatacenter: serverProps.serverDataCenter,
      };

      // NAME service
      const nameDescription = new ecs_extensions.ServiceDescription();
      nameDescription.add(new ecs_extensions.Container({
        cpu: 512,
        memoryMiB: 1024,
        trafficPort: 3000,
        image: ecs.ContainerImage.fromAsset(path.resolve(__dirname, '../../../services/name/src/'), {file: 'Dockerfile'}),
      }));
      nameDescription.add(new consul_ecs.ECSConsulMeshExtension({
        ...baseProps,
        serviceDiscoveryName: 'name',
      }));      
      const name = new ecs_extensions.Service(this, 'name', {
        environment: envProps.ecsEnvironment,
        serviceDescription: nameDescription
      });

      // GREETING service
      const greetingDescription = new ecs_extensions.ServiceDescription();
      greetingDescription.add(new ecs_extensions.Container({
        cpu: 512,
        memoryMiB: 1024,
        trafficPort: 3000,
        image: ecs.ContainerImage.fromAsset(path.resolve(__dirname, '../../../services/greeting/src/'), {file: 'Dockerfile'}),
      }));
      greetingDescription.add(new consul_ecs.ECSConsulMeshExtension({
        ...baseProps,
        serviceDiscoveryName: 'greeting',
      }));      
      const greeting = new ecs_extensions.Service(this, 'greeting', {
        environment: envProps.ecsEnvironment,
        serviceDescription: greetingDescription,
      });

      // GREETER service
      const greeterDescription = new ecs_extensions.ServiceDescription();
      greeterDescription.add(new ecs_extensions.Container({
        cpu: 512,
        memoryMiB: 1024,
        trafficPort: 3000,
        image: ecs.ContainerImage.fromAsset(path.resolve(__dirname, '../../../services/greeter/src/'), {file: 'Dockerfile'}),
      }));
      greeterDescription.add(new consul_ecs.ECSConsulMeshExtension({
        ...baseProps,
        serviceDiscoveryName: 'greeter',
      }));      
      greeterDescription.add(new ecs_extensions.HttpLoadBalancerExtension());
      const greeter = new ecs_extensions.Service(this, 'greeter', {
        environment: envProps.ecsEnvironment,
        serviceDescription: greeterDescription,
      });

      // CONSUL CONNECT
      greeter.connectTo(name, { local_bind_port: 3001 });
      greeter.connectTo(greeting, { local_bind_port: 3002 });

      new cdk.CfnOutput(this, 'ConsulClientSG', {
        value: envProps.clientSecurityGroup.securityGroupId,
        description: 'Consul Client SG',
      });
  }
}

Let’s review the sample application code above. We create a new ECSConsulMeshExtension and pass several base parameters, such as the Consul TLS certificate, the Consul gossip encryption key, the Consul data center, and the retry join settings. For each service, we also pass the serviceDiscoveryName parameter. The extension creates an environment variable for the application container to reference the upstream. The format of this environment variable is <SERVICENAME>_URL. For example, the greeter service connects to the name service using the environment variable NAME_URL. As the application owner, you reference the <SERVICENAME>_URL environment variable in your application code and let the extension populate the value automatically. You can find a sample of this reference in the greeter repository.
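
For illustration, the following is a minimal sketch (not the actual greeter source code) of how an application container might consume these variables. It assumes each <SERVICENAME>_URL holds the full HTTP URL of the local Envoy upstream listener (for example, http://localhost:3001); refer to the greeter repository for the real implementation.

// Hypothetical example only: NAME_URL and GREETING_URL follow the
// <SERVICENAME>_URL convention described above and are populated by
// the extension at deploy time.
import * as http from 'http';

function getUpstream(url: string): Promise<string> {
  return new Promise((resolve, reject) => {
    http.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

export async function greet(): Promise<string> {
  // Fall back to local defaults when testing outside the mesh.
  const nameUrl = process.env.NAME_URL || 'http://localhost:3001';
  const greetingUrl = process.env.GREETING_URL || 'http://localhost:3002';
  const [name, greeting] = await Promise.all([
    getUpstream(nameUrl),
    getUpstream(greetingUrl),
  ]);
  return `${greeting} ${name}`;
}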

To allow the greeter service to connect to the name and greeting services, we use the helper method connectTo. This helper automatically configures the Consul service mesh and the security group rules so that the services can communicate. We also explicitly set the Envoy proxy upstream listener port for the name and greeting services by using the local_bind_port parameter. Alternatively, you can skip this parameter and the extension will assign the port numbers incrementally, starting from port 3001, as shown in the sketch below.
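
As a minimal sketch of that alternative, the two connectTo calls above could be written without local_bind_port; based on the behavior described above, the extension would then assign the listener ports automatically:

// Equivalent wiring that relies on automatic port assignment
// (ports are assigned incrementally, starting from 3001).
greeter.connectTo(name);      // name upstream on local port 3001
greeter.connectTo(greeting);  // greeting upstream on local port 3002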

We are almost ready to deploy the sample application. Let’s modify the application entry point. We will launch the environment stack, followed by the Consul server stack and the sample microservices stack. Open the file bin/app.ts and replace its content with the following code.

#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from '@aws-cdk/core';

const app = new cdk.App();

import { Environment } from '../lib/environment';

// Environment
var allowedIPCidr = process.env.ALLOWED_IP_CIDR || `$ALLOWED_IP_CIDR`;
const environment = new Environment(app, 'ConsulEnvironment', {
    envName: 'test',
    allowedIpCidr: allowedIPCidr,
});

import { ConsulServer } from '../lib/consul-server';

// Consul Server
var keyName = process.env.MY_KEY_NAME || `$MY_KEY_NAME`;
const server = new ConsulServer(app, 'ConsulServer', {
    envProps: environment.props,
    keyName,
});

import { Microservices } from '../lib/microservices';

// Microservices with Consul Client
const microservices = new Microservices(app, 'ConsulMicroservices', environment.props, server.props);

To deploy the sample application, you need to supply two parameters to the stack: your public IP and your EC2 key pair. Run the commands below in your terminal, changing the value of $MY_KEY_NAME to your existing EC2 key pair name.

export ALLOWED_IP_CIDR=$(curl -s ifconfig.me)/32
export MY_KEY_NAME=<CHANGE WITH YOUR EC2 KEY PAIR>
cdk deploy --all

After the stacks are deployed successfully, check your terminal for the CDK output called ConsulServer.ConsulSshTunnel. Copy the output value and run the command in a separate terminal to establish an SSH tunnel to the Consul server. You can now access the Consul UI at http://localhost:8500/ui. From your CDK output terminal, locate the ConsulMicroservices.greeterloadbalancerdnsoutput URL and open it in your browser to test the application. You should see results similar to the following.

Output of microservices app showing random greetings and random name

From the Amazon ECS console, you can see that the new ECS cluster is running with three active services. Selecting any of these services, you can find the additional sidecar containers added to the ECS task. Notice that consul-ecs-mesh-init runs only during startup to set up the initial configuration for the Consul client and the Envoy proxy.

ECS Task console output showing four containers running as part of the service

Navigating to the Consul UI, we find that the greeter service is now connected to the name and greeting services.

Consul service mesh GUI, showing greeter app connected to name and greeting app

Clean up

This concludes the sample walkthrough. We showed you how to use the Amazon ECS service extension for Consul to connect one service to another. To clean up the sample application that you deployed as part of this example, don’t forget to run

cdk destroy --all

Get started today

Consul service mesh on Amazon ECS is generally available in all AWS Regions where Amazon ECS is supported. Find out more details about the Amazon ECS service extension for Consul in the GitHub repository. If you are new to the CDK service extensions, check out the Amazon ECS service extension blog post to learn more. Also check out the Terraform module for Consul on Amazon ECS in the blog post by HashiCorp. Feel free to open an issue or even a pull request on the repository if you have ideas that you’d like to see added to the ECS Consul CDK extension.

Welly Siauw

Welly Siauw is a Principal Partner Solution Architect at Amazon Web Services (AWS). He spends his days working with customers and partners, solving architectural challenges. He is passionate about service integration and orchestration, serverless, and AI/ML. He has authored several AWS blogs and actively speaks at AWS and industry events. In his free time, Welly enjoys tinkering with his espresso machine and hiking outdoors.

Parag Bhingre

Parag Bhingre is a Software Development Engineer on the ECS DevX team at Amazon Web Services (AWS). He is passionate about investigating customer pain points and ensuring that they are resolved, from writing design and implementation documents to delivering the end product. He has addressed multiple core issues in the open source community for Spinnaker and GitHub Actions for AWS. Currently, he is focused on making CDK extensions easy to use for popular implementations such as Consul service mesh on ECS. Parag loves to travel and explore new cultures and places.