Kubewarden is a Kubernetes policy engine that runs policies compiled to WebAssembly.
The Kubewarden stack is made of the following components:

- Kubewarden Custom Resources: Kubernetes Custom Resources that simplify the process of managing policies.
- kubewarden-controller: a Kubernetes controller that reconciles Kubewarden's Custom Resources. This component creates parts of the Kubewarden stack and, most importantly, translates Kubewarden's concepts into native Kubernetes directives.
- Kubewarden policies: WebAssembly modules that hold the validation or mutation logic. These are covered in depth in this chapter.
- policy-server: the component that receives the requests to be validated. It does that by executing Kubewarden's policies.
At the bottom of the stack, Kubewarden integrates with Kubernetes using the
concept of Dynamic Admission Control.
In particular, Kubewarden operates as a Kubernetes Admission Webhook.
policy-server is the actual Webhook endpoint that is reached by the Kubernetes
API server to validate relevant requests.
Kubernetes is made aware of the existence of Kubewarden's Webhook endpoints by
kubewarden-controller. This is done by registering either a
MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object.
This diagram shows the full architecture overview of a cluster running the Kubewarden stack:
The architecture diagram above can be intimidating at first; this section explains it step by step.
On a fresh new cluster, the Kubewarden components defined are its Custom
Resource Definitions, the kubewarden-controller Deployment and a
PolicyServer Custom Resource named default.
kubewarden-controller notices the default
PolicyServer resource and, as a result,
it creates a Deployment of the policy-server component.
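As a rough sketch, that default PolicyServer resource might look like this (the apiVersion, image tag and replica count are illustrative and vary between Kubewarden releases):

```yaml
# Sketch of a PolicyServer resource; apiVersion, image tag and
# replica count are illustrative.
apiVersion: policies.kubewarden.io/v1
kind: PolicyServer
metadata:
  name: default
spec:
  image: ghcr.io/kubewarden/policy-server:latest
  replicas: 1
```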
As stated above, Kubewarden works as a Kubernetes Admission Webhook. Kubernetes
dictates that all Webhook endpoints must be secured with TLS.
kubewarden-controller takes care of setting up this secure communication
by performing these steps:
- Generate a self-signed Certificate Authority
- Use this CA to generate a TLS certificate and a TLS key for the policy-server
All these objects are stored in Kubernetes as Secret resources.
kubewarden-controller then creates the policy-server
Deployment and a Kubernetes ClusterIP Service to expose it inside
the cluster network.
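For illustration, the TLS material ends up in objects shaped roughly like the following (the name, namespace and contents are assumptions, not the exact objects created by the controller):

```yaml
# Hypothetical Secret holding the policy-server TLS material;
# name and namespace are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: policy-server-default-cert
  namespace: kubewarden
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```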
This chart shows what happens when the first policy bound to the default
policy-server is defined inside the cluster:
kubewarden-controller notices the new
ClusterAdmissionPolicy resource and,
as a result, it finds the PolicyServer the policy is bound to
and reconciles it.
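A minimal ClusterAdmissionPolicy bound to the default policy server could look like this (the policy name, module URL and rules are illustrative):

```yaml
# Sketch of a ClusterAdmissionPolicy; the module URL and rules
# are illustrative.
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  policyServer: default
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE"]
  mutating: false
```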
Whenever a ClusterAdmissionPolicy is created, modified or deleted, a reconciliation loop for the
PolicyServer that owns the policy is triggered inside kubewarden-controller.
In this reconciliation loop, a ConfigMap with all the policies bound to the
PolicyServer is created. Then a rollout of the
policy-server Deployment is started. As a result, the new
policy-server instance starts with the updated configuration.
At start time,
policy-server reads its configuration and downloads
all the Kubewarden policies. Policies can be downloaded from remote
endpoints like HTTP(s) servers and container registries.
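As a sketch, the configuration consumed by policy-server is a YAML map of policy entries; the names and module URLs below are illustrative, showing one policy fetched from a container registry and one from an HTTPS endpoint:

```yaml
# Illustrative policy-server configuration; policy names and URLs
# are assumptions.
psp-apparmor:
  url: registry://ghcr.io/kubewarden/policies/psp-apparmor:v0.1.3
  settings: {}
pod-privileged:
  url: https://example.com/policies/pod-privileged/policy.wasm
  settings: {}
```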
Policies' behaviour can be tuned by users via policy-specific configuration
parameters. Once all the policies are downloaded,
policy-server ensures
the policy settings provided by the user are valid.
policy-server validates the policies' settings by invoking the
validate_settings function exposed by each policy.
This topic is covered in more depth in
this section of the documentation.
policy-server will exit with an error if one or more policies received wrong
configuration parameters from the end user.
If all the policies are properly configured,
policy-server will spawn a
pool of worker threads to evaluate incoming requests using the Kubewarden
policies specified by the user.
policy-server will start an HTTPS server that listens to incoming
validation requests. The web server is secured using the TLS key and certificate
that were previously created by kubewarden-controller.
Each policy is exposed by the web server via a dedicated path that follows this
naming convention: /validate/<policy ID>.
This is what the cluster looks like once the initialization of
policy-server is completed. policy-server Pods have a
Readiness Probe; kubewarden-controller relies on that to know when the
policy-server is ready to evaluate admission reviews.
Once the policy-server Deployment is marked as ready, kubewarden-controller
will make the Kubernetes API server aware of the new policy by creating either a
MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object.
Each policy has its own dedicated webhook configuration,
which points to the Webhook endpoint served by
policy-server. The endpoint
is reachable at the
/validate/<policy ID> URL mentioned before.
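Sketching one of these objects for a validating policy (the webhook name, namespace, Service name and rules are illustrative):

```yaml
# Illustrative ValidatingWebhookConfiguration pointing at the
# policy-server Service; names and namespace are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: privileged-pods
webhooks:
  - name: privileged-pods.kubewarden.admission
    clientConfig:
      service:
        namespace: kubewarden
        name: policy-server-default
        path: /validate/privileged-pods
        port: 443
      caBundle: <base64-encoded CA certificate>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
```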
Now that all the plumbing has been done, Kubernetes will start sending the
relevant Admission Review requests to the right policy-server endpoints.
policy-server receives the Admission Request object and, based on the
endpoint that received the request, uses the right policy to evaluate it.
Each policy is evaluated inside of its own dedicated WebAssembly sandbox.
The communication between
policy-server (the "host") and the WebAssembly
policy (the "guest") is done using the waPC communication protocol. This is
covered in depth inside of this
section of the documentation.
A cluster can have multiple policy servers and Kubewarden policies defined.
Benefits of having multiple policy servers:
- Noisy Namespaces/Tenants generating lots of policy evaluations can be isolated from the rest of the cluster and do not affect other users.
- Mission critical policies can be run inside of a Policy Server "pool", making your whole infrastructure more resilient.
Each policy-server is defined via its own
PolicyServer resource, and each policy is defined via its own
ClusterAdmissionPolicy resource.
This leads back to the initial diagram:
Each ClusterAdmissionPolicy is bound to a PolicyServer.
ClusterAdmissionPolicies that don't specify any PolicyServer
will be bound to the PolicyServer named
default. If a
ClusterAdmissionPolicy references a
PolicyServer that doesn't
exist, it will be in an unschedulable state.
policy-server defines multiple validation endpoints, one per policy defined
inside of its configuration file. It's also possible to load the same policy
multiple times, just with different configuration parameters.
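For example, the same policy module could appear twice in the configuration with different settings (policy names, URL and settings are illustrative):

```yaml
# Illustrative: one policy module loaded twice with different settings.
trusted-repos-prod:
  url: registry://ghcr.io/kubewarden/policies/trusted-repos:v0.1.0
  settings:
    registries:
      allow:
        - registry.prod.example.com
trusted-repos-dev:
  url: registry://ghcr.io/kubewarden/policies/trusted-repos:v0.1.0
  settings:
    registries:
      allow:
        - registry.dev.example.com
```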
The Kubernetes API server is made aware of these policies via the
MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects
that are kept in sync by kubewarden-controller.
Finally, the incoming admission requests are dispatched by the Kubernetes
API server to the right validation endpoint exposed by policy-server.