Talk to IT pros for more than a few minutes, and the conversation will likely turn to container management and orchestration tools like Kubernetes. One of the fastest-growing open source projects in history, Kubernetes eases deploying, monitoring, and managing containerized applications at scale, whether in multiple clouds or on bare metal.

Pods are the basic unit of scheduling and scaling in a Kubernetes cluster. A pod is a group of one or more containers that share compute resources, storage, and a common network namespace, so pods can be created or torn down quickly to meet load requirements. Replicated pods are also the basis for application fault tolerance, which helps ensure that pieces of an application remain up even when individual containers fail or slow down.
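
To make that concrete, here is a minimal sketch of a pod manifest with two containers sharing the pod's network namespace. The names and images (an nginx web server plus a placeholder sidecar) are illustrative assumptions, not taken from any particular deployment:

```yaml
# Hypothetical example: a pod whose two containers share one network
# namespace, so they can reach each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar     # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.27      # assumed image/tag for this sketch
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36    # assumed image/tag for this sketch
      # Placeholder process; in practice this might ship logs or
      # proxy traffic for the web container above.
      command: ["sh", "-c", "sleep infinity"]
```

Applying this with `kubectl apply -f pod.yaml` creates both containers together; they start, scale, and fail as a single unit.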

Kubernetes supports several workload abstractions at a higher level than individual pods, giving IT teams more flexibility to automate and manage applications declaratively. The simplest is the Deployment controller, which rolls out new versions of an application across a set of pods and updates existing ones. For applications that carry state, the StatefulSet controller gives each pod a stable network identity and its own persistent storage, and manages ordered rollouts and rollbacks.
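
As a sketch of this declarative style (names and images again illustrative), a Deployment describes the desired state, here three replicas updated one pod at a time, and the controller continuously converges the cluster toward it:

```yaml
# Hypothetical example: a Deployment that keeps three replicas of a
# web server running and swaps pods gradually during updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired state; the controller maintains it
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # replace at most one pod at a time
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27    # changing this tag triggers a rolling update
```

Rolling back is equally declarative: `kubectl rollout undo deployment/web` returns the Deployment to its previous revision.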

Kubernetes can be deployed anywhere, from public clouds to on-premises infrastructure, and its nodes run on most major Linux distributions, with Windows nodes supported for Windows workloads. It integrates readily into existing architecture, and it can move workloads between data centers or cloud platforms without redesigning applications. It also helps IT teams optimize hardware resource usage, including network bandwidth, memory, and storage I/O, by capping consumption with resource requests, limits, and quotas.
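
As one sketch of how those caps look in practice (the namespace name and values are illustrative assumptions), a ResourceQuota bounds the aggregate CPU, memory, and storage that all pods in a namespace can claim:

```yaml
# Hypothetical example: a quota capping the total resources that
# workloads in the "team-a" namespace may request or consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a            # illustrative namespace
spec:
  hard:
    requests.cpu: "4"          # total CPU requested across the namespace
    requests.memory: 8Gi       # total memory requested
    limits.cpu: "8"            # hard ceiling on CPU consumption
    limits.memory: 16Gi        # hard ceiling on memory consumption
    requests.storage: 100Gi    # total persistent storage claims
```

Once the quota is in place, the API server rejects new pods whose requests would push the namespace past these limits, keeping one team's workloads from starving another's.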