Prometheus-based Kubernetes Resource Recommendations

Get recommendations based on your existing data in Prometheus/Coralogix/Thanos/Mimir and more!
About The Project
Robusta KRR (Kubernetes Resource Recommender) is a CLI tool for optimizing resource allocation in Kubernetes clusters. It gathers pod usage data from Prometheus and recommends requests and limits for CPU and memory. This reduces costs and improves performance.
Data Integrations
View instructions for: Prometheus, Thanos, Victoria Metrics, Google Managed Prometheus, Amazon Managed Prometheus, Azure Managed Prometheus, Coralogix, Grafana Cloud, and Grafana Mimir
Reporting Integrations
View instructions for: Seeing recommendations in a UI, Sending recommendations to Slack, Setting up KRR as a k9s plugin
Features
No Agent Required: Run a CLI tool on your local machine for immediate results. (Or run in-cluster for weekly Slack reports.)
Prometheus Integration: Get recommendations based on the data you already have
Explainability: Understand how recommendations were calculated with explanation graphs
Extensible Strategies: Easily create and use your own strategies for calculating resource recommendations.
Free SaaS Platform: See why KRR recommends what it does by using the free Robusta SaaS platform.
Future Support: Upcoming versions will support custom resources (e.g. GPUs) and custom metrics.
How Much Can I Expect to Save with KRR?
According to a recent Sysdig study, on average, Kubernetes clusters have:
69% unused CPU
18% unused memory
By right-sizing your containers with KRR, you can save an average of 69% on cloud costs.
Read more about how KRR works
Differences from Kubernetes VPA
| Feature | Robusta KRR | Kubernetes VPA |
|---|---|---|
| Resource Recommendations | ✅ CPU/Memory requests and limits | ✅ CPU/Memory requests and limits |
| Installation Location | ✅ Not required to be installed inside the cluster; can run on your own device, connected to a cluster | ❌ Must be installed inside the cluster |
| Workload Configuration | ✅ No need to configure a VPA object for each workload | ❌ Requires a VPA object configured for each workload |
| Immediate Results | ✅ Gets results immediately (given Prometheus is running) | ❌ Requires time to gather data before providing recommendations |
| Reporting | ✅ JSON, CSV, Markdown, Web UI, and more | ❌ Not supported |
| Extensibility | ✅ Add your own strategies with a few lines of Python | ⚠️ Limited extensibility |
| Explainability | ✅ Graphs explaining the recommendations | ❌ Not supported |
| Custom Metrics | 🔄 Support planned for future versions | ❌ Not supported |
| Custom Resources | 🔄 Support planned for future versions (e.g., GPU) | ❌ Not supported |
| Autoscaling | 🔄 Support planned for future versions | ✅ Automatic application of recommendations |
| Default History | 14 days | 8 days |
| Supports HPA | ✅ Enable using the --allow-hpa flag | ❌ Not supported |
Installation
Requirements
KRR requires Prometheus 2.26+, kube-state-metrics, and cAdvisor.
Installation Methods
Brew (Mac/Linux)
Additional Options
Environment-Specific Instructions
Set up KRR for...
Trusting custom Certificate Authority (CA) certificate:
If your Prometheus URL uses a certificate from a custom CA, base64-encode the certificate and store it in an environment variable named CERTIFICATE so KRR will trust it.
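As a minimal sketch of that setup in Python (the file path here is hypothetical, and a placeholder PEM body is written so the snippet is self-contained; in practice you would point at your real CA certificate):

```python
import base64
import os

# Hypothetical path: replace with your actual custom CA certificate (PEM format).
ca_path = "my-ca.crt"

# Placeholder certificate body, written only so this sketch runs end to end.
with open(ca_path, "w") as f:
    f.write("-----BEGIN CERTIFICATE-----\nMIIBplaceholder\n-----END CERTIFICATE-----\n")

# Base64-encode the certificate bytes and expose them as CERTIFICATE.
with open(ca_path, "rb") as f:
    os.environ["CERTIFICATE"] = base64.b64encode(f.read()).decode("ascii")
```

From a shell, the equivalent is exporting CERTIFICATE with the base64-encoded file contents before running krr.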
Free KRR UI on Robusta SaaS
We highly recommend using the free Robusta SaaS platform. You can:
Understand individual app recommendations with app usage history
Sort and filter recommendations by namespace, priority, and more
Give devs a YAML snippet to fix the problems KRR finds
Analyze impact using KRR scan history
Usage
Basic usage
Tweak the recommendation algorithm (strategy)
Giving an Explicit Prometheus URL
Override the kubectl context
Prometheus Authentication
Debug mode
How KRR works
Metrics Gathering
Robusta KRR uses the following Prometheus queries to gather usage data:
CPU Usage:
sum(irate(container_cpu_usage_seconds_total{namespace="{object.namespace}", pod="{pod}", container="{object.container}"}[{step}]))
Memory Usage:
sum(container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!="", namespace="{object.namespace}", pod="{pod}", container="{object.container}"})
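The placeholders (namespace, pod, container, step) are filled in per scanned container. A small Python sketch of how such query strings can be assembled (the function names are illustrative, not KRR's actual internals):

```python
def cpu_usage_query(namespace: str, pod: str, container: str, step: str) -> str:
    # Mirrors the CPU query above: per-second usage rate, summed for one container.
    selector = f'namespace="{namespace}", pod="{pod}", container="{container}"'
    return f"sum(irate(container_cpu_usage_seconds_total{{{selector}}}[{step}]))"

def memory_usage_query(namespace: str, pod: str, container: str) -> str:
    # Mirrors the memory query above: current working-set bytes from cAdvisor.
    selector = (f'job="kubelet", metrics_path="/metrics/cadvisor", image!="", '
                f'namespace="{namespace}", pod="{pod}", container="{container}"')
    return f"sum(container_memory_working_set_bytes{{{selector}}})"

cpu_q = cpu_usage_query("default", "my-app-7d9f", "my-app", "30s")
```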
Need to customize the metrics? Tell us and we'll add support.
Get a free breakdown of KRR recommendations in the Robusta SaaS.
Algorithm
By default, we use a simple strategy to calculate resource recommendations. It works as follows (the exact numbers can be customized via CLI arguments):
For CPU, we set the request at the 95th percentile of observed usage, with no limit. In other words, your CPU request will be sufficient in 95% of cases. Because no limit is set, the pod can burst and use any CPU available on the node, e.g. CPU that other pods requested but aren't using right now.
For memory, we take the maximum value over the past week and add a 15% buffer.
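A minimal sketch of that arithmetic in Python (the function name and sample numbers are made up for illustration; the real strategy pulls these series from Prometheus):

```python
import math

def recommend(cpu_samples, memory_samples, cpu_percentile=95, memory_buffer=0.15):
    """Sketch of the simple strategy: 95th-percentile CPU request, peak memory + 15%."""
    # CPU request: value at the chosen percentile of observed usage (no limit is set).
    ordered = sorted(cpu_samples)
    rank = math.ceil(cpu_percentile / 100 * len(ordered)) - 1
    cpu_request = ordered[rank]

    # Memory request and limit: peak observed usage plus a safety buffer.
    memory = max(memory_samples) * (1 + memory_buffer)
    return {"cpu_request": cpu_request, "cpu_limit": None,
            "memory_request": memory, "memory_limit": memory}

rec = recommend(cpu_samples=[0.1] * 95 + [0.5] * 5,    # cores
                memory_samples=[200e6, 350e6, 300e6])  # bytes
```

With these sample series, the CPU request lands on the value covering 95% of samples, while the rare 0.5-core spikes are absorbed by bursting rather than a higher request.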
Prometheus connection
Read about how KRR discovers the default Prometheus to connect to here.
Data Source Integrations
Scanning with a Centralized Prometheus
Integrations
Creating a Custom Strategy/Formatter
Look in the examples directory for examples of how to create a custom strategy or formatter.
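The real base classes and registration mechanics live in those examples; purely as an illustration of the shape of a strategy (all names here are hypothetical, not robusta_krr's actual API), a strategy is essentially settings plus a function from usage history to a recommendation:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the base classes provided by robusta_krr;
# only the overall shape is sketched here.
@dataclass
class MyStrategySettings:
    cpu_percentile: float = 99.0
    memory_buffer: float = 0.10

class MyStrategy:
    display_name = "my_strategy"

    def __init__(self, settings: MyStrategySettings):
        self.settings = settings

    def run(self, history: dict) -> dict:
        """Map usage history for one container to a recommendation."""
        cpu = sorted(history["cpu"])
        idx = max(0, round(self.settings.cpu_percentile / 100 * len(cpu)) - 1)
        memory = max(history["memory"]) * (1 + self.settings.memory_buffer)
        return {"cpu_request": cpu[idx], "memory_request": memory}

rec = MyStrategy(MyStrategySettings()).run(
    {"cpu": [0.2, 0.4, 0.3], "memory": [100e6, 120e6]}
)
```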
Testing
We use pytest to run tests.
Install the project manually (see above)
Navigate to the project root directory
Install poetry (https://python-poetry.org/docs/#installing-with-the-official-installer)
Install dev dependencies:
poetry install --group dev
Install robusta_krr as an editable dependency:
pip install -e .
Run the tests:
poetry run pytest
Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
Fork the Project
Create your Feature Branch (git checkout -b feature/AmazingFeature)
Commit your Changes (git commit -m 'Add some AmazingFeature')
Push to the Branch (git push origin feature/AmazingFeature)
Open a Pull Request
