🤘 Justin Garrison’s Post


Futurist historian

If you want to save money with your Kubernetes cluster, you need to enable consolidation!! #kubernetes #aws #eks #karpenter
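For context, consolidation is configured on the Karpenter NodePool (or the older Provisioner) rather than per workload. A minimal sketch, assuming the v1beta1 NodePool API; exact field names vary by Karpenter release (v1alpha5 Provisioners use spec.consolidation.enabled, and newer versions rename the policy values):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    # Let Karpenter remove or replace nodes whenever pods can be repacked
    # onto fewer/cheaper instances, not only when a node is completely empty.
    consolidationPolicy: WhenUnderutilized
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
```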

Mattias Ahnberg

CIO at Netnod (ex-Spotify)

1y

What is the cool CLI tool that shows the node utilization?

Sarav Thangaraj

Head of Engineering - Site Reliability and DevOps at Signeasy | Platform Engineering | System Design | Data Engineering | Solution Architect

1y

Thanks 🤘 Justin Garrison, I have been using Karpenter for a while and have not seen this property; let me read about it.

Diego Romero

Blockchain & Backend Engineer

1y

You had +300 pods in the first version and <200 in the second one 🤔

Sean Martin

Senior Director, Kubernetes Platform Lead at Travelers

1y

Any chance Todd is open sourcing his demo tool? It’s pretty powerful!

Mateo Kruk

Engineer at Aleph (YC S21)

1y

You still have +100 pods pending 👀

ILIASS BENDIDIA

Sr. AWS Architect | DevSecOps

1y

Indeed, Karpenter is a perfect tool for those use cases, but I think you missed something: there are almost 100 pods still pending!! 😅

Mitch Hulscher

DevOps Engineer/Architect (contract)

1y

In my opinion, inefficient bin packing on nodes is not what incurs the most cost. It's applications being over-provisioned by requesting too much CPU and memory. Both Karpenter and the CA can "consolidate" pods on nodes. Not trying to dunk on Karpenter here, but this is not an unsolved problem. Implementing multi-dimensional scaling that includes the VPA and the use of spot or reserved instances is what's going to save the most money.
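As an illustration of the "right-size the requests" point, a minimal VerticalPodAutoscaler sketch (the Deployment name my-app is hypothetical, and this assumes the VPA components are installed in the cluster):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical workload to right-size
  updatePolicy:
    # "Off" only publishes recommendations; "Auto" evicts and re-creates
    # pods with right-sized CPU/memory requests.
    updateMode: "Auto"
```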

Skander Belli

Optimising Kubernetes Costs using AI - TAM EMEA @CAST AI

1y

Neither VPA nor HPA can ensure performance and save money. They are both threshold-based and have no insight into node capacity and usage. You need a tool like Turbonomic to give each pod exactly the resources it needs and to move pods to the most optimal node. Feel free to reach out to me if you need more details. https://www.ibm.com/products/turbonomic/kubernetes

Honestly I don't get it: why is your cluster spinning up so many nodes with such a low load? If you use the Cluster Autoscaler for EKS it should handle this. Also, you collapsed 49 nodes to 3?! If there were a reason to keep many nodes alive for HA requirements, this might turn into hell pretty fast if you pack everything onto 3 nodes 😂
