The Kubewarden stack is made of the following components:
- An arbitrary number of `ClusterAdmissionPolicy` resources: this is how policies are defined inside of Kubernetes
- A Deployment of the Kubewarden `policy-server`: this component loads all the policies defined by the administrators and evaluates them
- A Deployment of `kubewarden-controller`: this is the controller that monitors the `ClusterAdmissionPolicy` resources and interacts with the Kubewarden `policy-server`
The Kubewarden stack can be deployed using a helm chart:
```shell
helm repo add kubewarden https://charts.kubewarden.io
helm install --namespace kubewarden --create-namespace kubewarden-controller kubewarden/kubewarden-controller
```
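To verify that the installation succeeded, you can inspect the Helm release and the Pods in the namespace used above. These are ordinary `helm` and `kubectl` commands; the exact Pod names you will see depend on your installation:

```shell
# Show the status of the Helm release installed above
helm status --namespace kubewarden kubewarden-controller

# List the Pods of the Kubewarden stack; all of them should
# eventually reach the Running state
kubectl get pods --namespace kubewarden
```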
This will install `kubewarden-controller` on the Kubernetes cluster with the default configuration and will register the `ClusterAdmissionPolicy` Custom Resource. The components of the Kubewarden stack will be deployed inside of a Kubernetes Namespace called `kubewarden`.
The default configuration values should be good enough for the majority of deployments. All the options are documented here.
The Kubewarden Policy Server is completely managed by the kubewarden-controller.
Enforcing policies is by far the most common operation a Kubernetes administrator will perform. You can declare as many policies as you want, targeting any kind of Kubernetes resource and type of operation that can be done against them.
The `ClusterAdmissionPolicy` resource is the core of the Kubewarden stack: this is how policies are defined.
```yaml
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: psp-capabilities
spec:
  module: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations:
        - CREATE
        - UPDATE
  mutating: true
  settings:
    allowed_capabilities:
      - CHOWN
    required_drop_capabilities:
      - NET_ADMIN
```
This is a quick overview of the attributes of the `ClusterAdmissionPolicy` resource:

- `module`: the location of the Kubewarden policy. Several schemes are supported:
  - `registry`: download from an OCI artifacts compliant container registry
  - `https`: download from a regular HTTP(s) server
  - `file`: load the module from the local filesystem
- `resources`: the types of resources evaluated by the policy
- `operations`: which operations on the previously given types should be forwarded to this admission policy by the API server for evaluation
- `mutating`: a boolean value that must be set to `true` for policies that can mutate incoming requests
- `settings` (optional): a free-form object that contains the policy configuration values
- `failurePolicy` (optional): how unrecognized errors and timeout errors from the policy are handled. The allowed values are `Ignore` and `Fail`:
  - `Ignore` means that an error calling the webhook is ignored and the API request is allowed to continue
  - `Fail` means that an error calling the webhook causes the admission to fail and the API request to be rejected. This is the default behaviour
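As an illustration, a policy that should never block API requests when the policy itself errors out could set `failurePolicy` to `Ignore`. This is a sketch that reuses the module and settings from the example above; the resource name is arbitrary:

```yaml
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: psp-capabilities-ignore-errors
spec:
  module: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
  mutating: true
  # Ignore: errors and timeouts from the policy do not block the request
  failurePolicy: Ignore
  settings:
    allowed_capabilities:
      - CHOWN
```

With `Fail` (the default), the same errors would cause matching API requests to be rejected outright.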
NOTE: `ClusterAdmissionPolicy` resources are registered with a `*` webhook scope, which means that the registered webhooks will be forwarded all requests matching the given `resources` and `operations`, whether they target namespaced resources (in any namespace) or cluster-wide resources.
The `ClusterAdmissionPolicy` resource is cluster-wide. There are plans to also provide a namespaced version that will only impact registered namespaced resources in its own namespace.
We will use the `pod-privileged` policy. This policy prevents the creation of privileged containers inside of a Kubernetes cluster.
Let's define a `ClusterAdmissionPolicy` for that:
```shell
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations:
        - CREATE
        - UPDATE
  mutating: false
EOF
```
This will produce the following output:

```
clusteradmissionpolicy.policies.kubewarden.io/privileged-pods created
```
Defining the `ClusterAdmissionPolicy` will lead to a rollout of the Kubewarden Policy Server Deployment. Once the new policy is ready to be served, the `kubewarden-controller` will register a ValidatingWebhookConfiguration object.
Once all the instances of `policy-server` are ready, the ValidatingWebhookConfiguration can be shown with:

```shell
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io -l kubewarden
```
Which will output something like:

```
NAME              WEBHOOKS   AGE
privileged-pods   1          9s
```
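If you prefer to wait for the Policy Server rollout to complete before testing the policy, something along these lines can help. Note that the Deployment name (`policy-server`) is an assumption here and may differ depending on your installation:

```shell
# Block until the policy-server Deployment rollout has finished
# (the Deployment name may differ in your installation)
kubectl --namespace kubewarden rollout status deployment/policy-server
```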
Let's try to create a Pod with no privileged containers:
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
EOF
```
This will produce the following output, which means the Pod was successfully created:

```
pod/unprivileged-pod created
```
Now, let's try to create a pod with at least one privileged container:
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        privileged: true
EOF
```
This time the creation of the Pod will be blocked, with the following message:
```
Error from server: error when creating "STDIN": admission webhook "privileged-pods.kubewarden.admission" denied the request: User 'minikube-user' cannot schedule privileged containers
```
As a first step, remove all the `ClusterAdmissionPolicy` resources you have created.
This can be done with the following command:
```shell
kubectl delete --all clusteradmissionpolicies.policies.kubewarden.io
```
Then wait for the `kubewarden-controller` to remove all the ValidatingWebhookConfiguration and MutatingWebhookConfiguration resources it created.
This can be monitored with the following command:
```shell
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io -l "kubewarden" && \
  kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io -l "kubewarden"
```
If these resources are not automatically removed, you can remove them manually with the following command:
```shell
kubectl delete -l "kubewarden" validatingwebhookconfigurations.admissionregistration.k8s.io && \
  kubectl delete -l "kubewarden" mutatingwebhookconfigurations.admissionregistration.k8s.io
```
Finally, you can uninstall the Helm chart:
```shell
helm uninstall --namespace kubewarden kubewarden-controller
```
Once this is done you can remove the Kubernetes namespace that was used to deploy the Kubewarden stack:
```shell
kubectl delete namespace kubewarden
```
This will delete all the resources that were created at runtime by the `kubewarden-controller`.
Note well: it's extremely important to remove the ValidatingWebhookConfiguration and MutatingWebhookConfiguration resources before the `policy-server` Deployment is deleted. Otherwise the Kubernetes API server will continuously face timeout errors while trying to evaluate incoming requests.
By default, the ValidatingWebhookConfiguration and MutatingWebhookConfiguration resources created by Kubewarden have `failurePolicy` set to `Fail`, which will cause all these incoming requests to be rejected. This could wreak havoc on your cluster.
As we have seen, the `ClusterAdmissionPolicy` resource is the core type that a cluster operator has to manage; the rest of the resources needed to run and configure the policies are taken care of automatically by the `kubewarden-controller`.