Monitor Kubernetes infrastructure and applications. Scale Kubernetes workloads based on metrics in Wavefront.

Monitor your Kubernetes environment at the infrastructure level and at the application level with the Wavefront Kubernetes Collector.

  • Monitor Kubernetes infrastructure metrics (containers, pods, etc.) from Wavefront dashboards – and create alerts from those dashboards.
  • Automatically collect metrics from applications and workloads using built-in plug-ins such as Prometheus, Telegraf, etc.

Scale your Kubernetes environment based on any metrics that are available in Wavefront with the Wavefront Horizontal Pod Autoscaler Adapter.

In the following video, Pierre Tessier explains how this works.

[Video: Monitor and Scale Kubernetes]

Kubernetes and Wavefront: Overview

You can use our open-source collector or the in-product integration:

  • The Wavefront Kubernetes Collector is available on GitHub. The collector is highly customizable and includes docs on GitHub and examples for different use cases.

  • The Wavefront Kubernetes integration is available in your Wavefront instance. You can preview the setup steps here. The integration is a great way to get data flowing and includes predefined dashboards for commonly used metrics. For further customization, you can use the files in the GitHub repository.

The Wavefront Kubernetes Collector supports autodiscovery of pods and services based on annotations and configuration rules. The collector runs as a DaemonSet for high scalability and supports leader election for monitoring cluster-level resources.

  • Kubernetes infrastructure monitoring: Monitor the performance of the Kubernetes cluster and the state of the objects (pods, containers, etc.) within the cluster using the Wavefront Kubernetes Collector.
  • Host-level monitoring: Below the Kubernetes infrastructure is the host or VM layer. The Wavefront Kubernetes Collector monitors that layer as well.
  • Application monitoring: The collector integrates with Telegraf and automatically sets up monitoring for several popular applications. It also collects metrics from Prometheus endpoints. You can customize the Wavefront Kubernetes Collector with a configuration file, for example, to set the collection frequency.
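Autodiscovery is driven by pod and service annotations. As a sketch, a pod that exposes a Prometheus endpoint can be annotated so the collector picks it up automatically. The annotation keys below follow the common `prometheus.io` convention; check the collector docs on GitHub for the exact keys and behavior your collector version supports.

```yaml
# Sketch of a pod spec fragment: the prometheus.io annotations mark the pod
# for scraping, and the collector discovers it and scrapes the given port/path.
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical pod name
  annotations:
    prometheus.io/scrape: "true"    # opt this pod in to autodiscovery
    prometheus.io/port: "9102"      # port serving Prometheus metrics
    prometheus.io/path: "/metrics"  # metrics endpoint path
spec:
  containers:
    - name: my-app
      image: my-app:latest          # hypothetical image
      ports:
        - containerPort: 9102
```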

To set up scaling based on any metrics available in Wavefront, use the Wavefront Horizontal Pod Autoscaler Adapter. If your environment needs more (or fewer) resources, Wavefront can tell the Kubernetes Autoscaler to adjust the environment.

Kubernetes Monitoring with Wavefront

The Wavefront Kubernetes Collector monitors your Kubernetes infrastructure at all levels of the stack. You can set up the integration to have much of the monitoring happen automatically. After you’ve set up the integration you can fine-tune and customize the solution with configuration options available in the Wavefront Kubernetes Collector.

Kubernetes Infrastructure Monitoring

The Wavefront Kubernetes Collector collects metrics to give comprehensive insight into all layers of your Kubernetes environment, including nodes, pods, services, and config maps.

Depending on the selected setup, metrics are sent to a Wavefront proxy and from there to the Wavefront service. It’s possible to send metrics using direct ingestion, but the Wavefront proxy is preferred for most cases.
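The choice between the proxy and direct ingestion is typically expressed in the collector's sink configuration. The fragment below is a sketch based on the configuration layout documented in the collector's GitHub repository; field names and the exact schema may differ between collector versions, and the addresses and token are placeholders.

```yaml
# Proxy-based sink (preferred for most cases): the collector forwards
# metrics to a Wavefront proxy running inside the cluster.
sinks:
  - proxyAddress: wavefront-proxy.default.svc.cluster.local:2878

# Alternative sketch: direct ingestion to the Wavefront service.
# Requires an API token with direct ingestion permission.
# sinks:
#   - server: https://YOUR_INSTANCE.wavefront.com
#     token: YOUR_API_TOKEN
```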

[Diagram: Kubernetes core monitoring]


Host-Level Monitoring

The Wavefront Kubernetes Collector supports automatic monitoring of host-level metrics and host-level systemd metrics. When you set up the collector, it auto-discovers pods and services in your environment and starts collecting host-level metrics.

You can filter the metrics before they are reported to Wavefront.
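As a sketch of such filtering, the collector's sink configuration accepts allow- and deny-style metric filters, as described in the collector's GitHub docs. The field names below follow that documentation but may vary by version, and the metric name patterns are illustrative.

```yaml
sinks:
  - proxyAddress: wavefront-proxy:2878   # hypothetical proxy address
    filters:
      # Report only kubernetes.* metrics to Wavefront...
      metricWhitelist:
        - "kubernetes.*"
      # ...but drop these noisy counters (hypothetical metric names).
      metricBlacklist:
        - "kubernetes.pod.network.rx_errors*"
```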

Application Monitoring

The Wavefront Kubernetes Collector automatically starts collecting metrics from many commonly used applications. It also scrapes Prometheus metric endpoints, such as those exposed by the API server, etcd, and NGINX. The following diagram illustrates this.
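Application discovery can also be steered with explicit rules that match workloads by label and attach a collection plug-in. The sketch below illustrates the idea using a Telegraf Redis plug-in; the field names follow the discovery documentation in the collector's GitHub repository, but the exact schema varies by collector version and the application details are hypothetical.

```yaml
# Sketch of a discovery rule: pods whose labels match the selector are
# monitored with the named Telegraf plug-in on the given port.
discovery_configs:
  - name: redis                 # hypothetical rule name
    type: telegraf/redis        # built-in Telegraf plug-in to use
    selectors:
      labels:
        app:
          - redis               # match pods labeled app=redis
    port: 6379                  # port the application listens on
```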

[Diagram: Kubernetes application monitoring]

Kubernetes Scaling with Wavefront

The default Kubernetes infrastructure can include a Horizontal Pod Autoscaler, which automatically scales the number of pods. By default, the Horizontal Pod Autoscaler bases its scaling decisions on CPU and memory information from the Kubernetes Metrics Server.

The Wavefront Horizontal Pod Autoscaler Adapter allows you to scale based on any metric that is available in Wavefront.

For example, you could scale based on networking or disk metrics, or any application metrics that are available to Wavefront. The Autoscaler Adapter sends the recommendation to the Horizontal Pod Autoscaler, and the Kubernetes environment is kept healthy as a result.
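Putting this together, an autoscaling/v2 HorizontalPodAutoscaler can target an external metric served through the adapter's external metrics API. The sketch below assumes the adapter is installed and exposing the metric; the deployment name, metric name, and target value are hypothetical.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                       # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: External                   # metric served by the Wavefront adapter
      external:
        metric:
          name: my.app.requests.rate   # hypothetical Wavefront-backed metric
        target:
          type: AverageValue
          averageValue: "100"          # scale out above 100 per pod
```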

[Diagram: Kubernetes scaling]

Wavefront GitHub Repositories for Kubernetes

We support the following open-source GitHub repositories:

  • wavefront-kubernetes-collector Second-generation Kubernetes monitoring. Supports auto-discovery, scaling using DaemonSet, filtering, and more.
  • wavefront-kubernetes-adapter Wavefront Kubernetes HPA (Horizontal Pod Autoscaler) adapter that implements the custom metrics (custom.metrics.k8s.io/v1beta1) and external metrics (external.metrics.k8s.io/v1beta1) APIs. The adapter can be used with the autoscaling/v2 HPA in Kubernetes 1.9+ to perform scaling based on any metrics available in Wavefront.
  • wavefront-kubernetes First-generation Kubernetes monitoring. Contains definitions and templates for monitoring Kubernetes using Wavefront. Supports only sending data to the Wavefront proxy (no direct ingestion support).
  • prometheus-storage-adapter Usually used with our first-generation Kubernetes monitoring solution. The adapter forwards data it receives to a Wavefront proxy. It is useful when you want some control over how the data collected by Prometheus is made available in Wavefront. Our second-generation solution, the Wavefront Kubernetes Collector, automatically collects Prometheus metrics.