Announcing general availability of cdk8s+ and support for manifest validation

This post was co-written by Shimon Tolts, CEO and Co-Founder, Datree.

Introduction

In July 2020, we introduced the beta version of cdk8s+, a high-level, intent-driven application programming interface (API) designed to simplify Kubernetes resource configuration. Since its release, we’ve been adding capabilities and collecting feedback from our customers.

Today, we’re happy to announce that cdk8s+ is generally available, with its first major version. This means it’s stable for use, with no breaking changes forthcoming until at least the next major version. This, along with the release we made late last year, marks the entire cdk8s toolchain (a Cloud Native Computing Foundation [CNCF] project) as generally available and stable.

  • To get started with cdk8s, visit our website for more details and getting started guides.

Solution Overview

These days, we see more and more companies empower their application developers to own the entire surface area of their applications in the cloud. Among other responsibilities, this includes authoring Kubernetes manifests. This practice helps them remove bottlenecks and move faster at higher scale. However, authoring production-grade Kubernetes manifests can be tricky, and requires experience that application developers don’t usually have. As with other technologies, when the target audience grows, the complexity needs to decrease. In software, we accomplish this by creating different forms of abstraction. At its core, cdk8s+ is an API abstraction designed to reduce the complexity and improve the maintainability of Kubernetes manifests. It offers constructs that expose a simplified API over the core Kubernetes resources.

Walkthrough

To give you a better sense, let’s have a look at some key capabilities that were added to cdk8s+ recently, and see what it takes to configure complex Kubernetes applications with them.

Isolate pod network to allow only specific communication

By default, Kubernetes pods are open to all ingress and egress communication. While this may be convenient during development, production deployments need a more secure setup that restricts network communication to the necessary minimum. In Kubernetes, this is done by configuring multiple network policies. In cdk8s+, here is how it looks:

const web = new kplus.Deployment(this, 'Web', {
  containers: [{ image: 'web:latest' }],
  isolate: true,
});

const cache = new kplus.Deployment(this, 'Cache', {
  containers: [{ image: 'cache:latest' }],
  isolate: true,
});

web.connections.allowTo(cache);

We create two isolated deployments (i.e., their pods have no network access at all) and then explicitly allow communication between them. This prevents the web pod from accessing any pod other than the cache, and prevents any pod other than web from connecting to the cache.

  • In contrast, here is the pure YAML manifest required to achieve this.
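To give a feel for the underlying mechanics, here is a rough sketch of the kind of NetworkPolicy resources this intent expands to. The resource names and label selectors below are illustrative assumptions, not the exact output of cdk8s+:

```yaml
# Isolate the web pods: selecting them with empty ingress/egress
# rules denies all traffic in both directions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-isolation
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes: [Ingress, Egress]
---
# Allow egress from web to cache; a matching ingress policy
# on the cache pods is needed as well.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-cache
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes: [Egress]
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: cache
```

A single `allowTo` call stands in for several such policies, which is exactly the complexity the abstraction hides.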

If an additional pod requires access to the cache, we can simply allow that connection as well.

For example:

// a batch job running computations that should
// populate the cache once an hour.
const batch = new kplus.CronJob(this, 'Batch', {
  containers: [{ image: 'batch:latest' }],
  schedule: Cron.hourly(),
});

cache.connections.allowFrom(batch);

This demonstrates why, at the beginning of this post, we described cdk8s+ as intent-driven. Its API is designed to capture the author’s intent, and to implement that intent using whatever underlying mechanics are necessary. The intent here is restricting network communication; the mechanics are network policies.

Co-locate pods to run on the same node

In some cases, we want different parts of our application to be deployed on the same host. For example, to reduce latency when reading from a caching service, we’d like the reader to be deployed as close as possible to the cache.

In Kubernetes, we do this by configuring affinity rules on the relevant pods. In cdk8s+, here is how it looks:

const web = new kplus.Deployment(this, 'Web', {
  containers: [{ image: 'web:latest' }],
  isolate: true,
});

const cache = new kplus.Deployment(this, 'Cache', {
  containers: [{ image: 'cache:latest' }],
  isolate: true,
});

web.scheduling.colocate(cache);

Again, we see that cdk8s+ speaks in terms of intent. In this case, the intent is to co-locate pods, while the mechanics are affinity rules.

  • In contrast, here is the pure YAML manifest required to achieve this.
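As a sketch of those mechanics, the colocate intent corresponds to a pod affinity rule of roughly this shape on the web deployment. The labels and abbreviated surrounding structure are assumptions for illustration, not the exact cdk8s+ output:

```yaml
# Pod affinity requiring web pods to be scheduled on the same
# node (topologyKey: hostname) as pods labeled app: cache.
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: cache
```

Getting the selector, topology key, and rule type right by hand is error-prone; the `colocate` call captures the intent instead.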

In addition to a simplified API, cdk8s+ also offers Kubernetes defaults that help secure your manifests and protect them from misconfigurations. For example, cdk8s+ will, by default, set the readOnlyRootFilesystem property on containers to true. This is considered a best practice, especially for production workloads. However, human error can happen, and there’s no way to fully avoid it at authoring time. This is where policy enforcement tools come in.
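For reference, this default surfaces in the synthesized manifest as a container-level security context along these lines (standard Kubernetes fields; the surrounding structure is abbreviated):

```yaml
spec:
  template:
    spec:
      containers:
        - name: web
          image: web:latest
          securityContext:
            # set by cdk8s+ by default; prevents the container
            # from writing to its root filesystem at runtime
            readOnlyRootFilesystem: true
```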

Combating misconfigurations

Kubernetes is an amazing technology because it gives developers the means to control the infrastructure that runs their code. cdk8s takes this one step further. It gives developers the means to control the infrastructure in a manner most familiar to them, with code.

Yet, giving developers this much responsibility comes with the risk of misconfigurations. Unlike professional site reliability engineers (SREs), developers are often unfamiliar with the bits and bytes of Kubernetes, and so they are prone to mistakes. These mistakes often include deploying workloads without liveness or readiness probes, or giving containers root access capabilities, which threatens the stability and security of their Kubernetes clusters. These kinds of misconfigurations are very common in Kubernetes, specifically because of how open-ended the technology is. You can configure Kubernetes however you want, which means you can also misconfigure it. It’s no surprise that the 2022 State of Kubernetes Security Report, released by Red Hat, shows that 53% of Kubernetes administrators experienced an incident due to misconfigurations.

This is why we’ve partnered with Datree to design and implement an extensible plugin mechanism that allows third-party policy enforcement tools to validate the manifests produced by cdk8s+. You can implement your own plugins for external third-party tools, as well as for internal tools you may have in your organization.

  • You can find instructions on how to implement these plugins here.

Today we’re excited to announce one such implementation: the Datree plugin for cdk8s. Datree prevents misconfigurations in Kubernetes by enforcing a policy on your manifests. It integrates directly into the cluster or into your continuous integration processes, scanning every configuration change and blocking those that don’t comply with your policy. Once you add Datree to your cdk8s configuration, every time you synthesize your application using cdk8s synth, the generated manifests are automatically validated against Datree’s dedicated policy, which identifies misconfigurations that you may have missed.

Let’s see it in action!

First, to get started with the integration, you need to edit your cdk8s.yaml configuration like so:

language: typescript
app: node main.js
imports:
  - k8s
validations:
  - package: '@datreeio/datree-cdk8s'
    class: DatreeValidation
    version: 1.3.4

We’ll use the cdk8s+ code we’ve shown throughout this post (in full application form):

import { Construct } from 'constructs';
import { App, Chart, ChartProps } from 'cdk8s';
import * as kplus from 'cdk8s-plus-24';

export class MyChart extends Chart {
  constructor(scope: Construct, id: string, props: ChartProps = { }) {
    super(scope, id, props);

    const web = new kplus.Deployment(this, 'Web', {
      containers: [{ image: 'web:latest' }],
      isolate: true,
    });

    const cache = new kplus.Deployment(this, 'Cache', {
      containers: [{ image: 'cache:latest' }],
      isolate: true,
    });

    web.connections.allowTo(cache);
    web.scheduling.colocate(cache);

  }
}

const app = new App();
new MyChart(app, 'cdk8s-app');
app.synth();

The next step is to run cdk8s synth to see what happens.

❯ cdk8s synth
Synthesizing application
  - dist/cdk8s-app.k8s.yaml
Performing validations
🌳 Datree validating dist/cdk8s-app.k8s.yaml with policy cdk8s
Validations finished

Validation Report (@datreeio/datree-cdk8s@1.3.4)
------------------------------------------------

(Summary)

╔═══════════╤════════════════════════╗
║ Status    │ failure                ║
╟───────────┼────────────────────────╢
║ Plugin    │ @datreeio/datree-cdk8s ║
╟───────────┼────────────────────────╢
║ Version   │ 1.3.4                  ║
╟───────────┼────────────────────────╢
║ Customize │ https://app.datree.io  ║
║ policy    │                        ║
╚═══════════╧════════════════════════╝


(Violations)

Ensure each container image has a pinned (tag) version (2 occurrences)

  Occurrences:

    - Construct Path: cdk8s-app/Web/Resource
    - Manifest Path: ./dist/cdk8s-app.k8s.yaml
    - Resource Name: cdk8s-app-web-c825557e
    - Locations:
      > spec/template/spec/containers/0/image (line: 31:18)

    - Construct Path: cdk8s-app/Cache/Resource
    - Manifest Path: ./dist/cdk8s-app.k8s.yaml
    - Resource Name: cdk8s-app-cache-c8fee821
    - Locations:
      > spec/template/spec/containers/0/image (line: 112:18)

  Recommendation: Incorrect value for key `image` - specify an image version to avoid unpleasant "version surprises" in the future
  How to fix: https://hub.datree.io/ensure-image-pinned-version

Validation failed. See above reports for details

We can see that the Datree plugin validated our manifest and detected a misconfiguration: we didn’t specify the versions of the web and cache images, but rather used the latest tag. This is considered bad practice because it creates non-deterministic deployments; every time those images are pulled, their contents may differ, which could break your code or have other undesired and untested implications. The report tells us exactly which construct exhibits the violation, as well as its exact location in the generated YAML manifest. There’s also a recommendation on how to fix it, with more detailed information on the violation itself.
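One way to address this violation is to pin the image tags in the Deployment constructs. The version numbers below are hypothetical placeholders; you would use the tags your registry actually publishes:

```typescript
const web = new kplus.Deployment(this, 'Web', {
  containers: [{ image: 'web:1.2.0' }], // pinned tag instead of 'latest'
  isolate: true,
});

const cache = new kplus.Deployment(this, 'Cache', {
  containers: [{ image: 'cache:3.1.4' }], // pinned tag instead of 'latest'
  isolate: true,
});
```

After this change, re-running cdk8s synth should produce a manifest that no longer triggers the pinned-version rule.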

Conclusion

We’ve seen a few examples of how cdk8s+ can simplify the authoring of Kubernetes manifests, making them more approachable to a wider audience. There are many more capabilities to choose from, with more to come. Today, all of them are stable and ready for production use.

We then saw how cdk8s+ integrates with third-party policy enforcement tools such as Datree to provide a guardrail that protects your manifests against misconfigurations and prevents them from reaching your cluster.

Finally, this content is fully open source and we welcome your feedback on our GitHub repo. We also invite you to join the discussion on our Slack channel and on Twitter (#cdk8s, #cdk8s+).

Happy authoring!

Shimon Tolts, CEO and Co-Founder, Datree

Shimon is the CEO and Co-Founder at Datree, as well as an AWS Community Hero and CNCF Chapter Organizer. Shimon installed his first Linux server when he was 12 and is passionate about everything DevOps.

Eli Polonsky

Eli is a Senior Software Development Engineer leading the CDK For Kubernetes (cdk8s) project. He is passionate about identifying areas of complexity and coming up with ways to simplify them, especially in the developer tooling domain. Eli lives in Tel-Aviv, Israel and in his spare time enjoys learning history and messing around with his dog, git.