WARNING: Kubewarden is in an early development stage; it's not production ready.

Feedback is highly appreciated.

Introduction

Kubewarden is a Kubernetes Dynamic Admission Controller that validates incoming requests using policies written in WebAssembly.

What is WebAssembly?

As stated on WebAssembly's official website:

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.

WebAssembly was originally conceived as an "extension" of browsers. However, recent efforts by the WebAssembly community have made it possible to run WebAssembly code outside of browsers.

Why use WebAssembly?

By using WebAssembly, users can write Kubernetes policies using their favorite programming language, as long as the language can produce Wasm binaries.

Policy authors can reuse their skills, tools and best practices. Policies are "traditional" programs that can have reusable blocks (regular libraries), can be tested, can be linted, and can be plugged into existing CI and CD workflows.

Wasm modules are portable: once built, they can run on any processor architecture and Operating System. A policy built on an Apple Silicon machine can run on an x86_64 Linux server without any conversion.

Policy distribution

Kubewarden policies can be served by a regular web server or, even better, published inside of an OCI compliant registry as OCI artifacts.

Quick Start

The Kubewarden stack is made of the following components:

  • An arbitrary number of ClusterAdmissionPolicy resources: this is how policies are defined inside Kubernetes
  • An arbitrary number of PolicyServer resources: this component represents a Deployment of a Kubewarden PolicyServer. The policies defined by the administrators are loaded and evaluated by the Kubewarden PolicyServer
  • A Deployment of kubewarden-controller: this is the controller that monitors the ClusterAdmissionPolicy resources and interacts with the Kubewarden PolicyServer components

Installation

PREREQUISITES:

Currently, the chart depends on cert-manager. Make sure you have cert-manager installed before installing the kubewarden-controller chart.

You can install the latest version of cert-manager by running the following commands:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

kubectl wait --for=condition=Available deployment --timeout=2m -n cert-manager --all

The Kubewarden stack can be deployed using helm charts as follows:

helm repo add kubewarden https://charts.kubewarden.io

helm install --wait -n kubewarden --create-namespace kubewarden-crds kubewarden/kubewarden-crds

helm install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller

This installs the following charts inside the kubewarden namespace of your Kubernetes cluster:

  • kubewarden-crds, which will register the ClusterAdmissionPolicy and PolicyServer Custom Resource Definitions

  • kubewarden-controller, with a default configuration, which will create a PolicyServer resource named default.

QUICK NOTE:

The default configuration values should be good enough for the majority of deployments. All options are documented here.
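
If you want to inspect or override those defaults, the standard Helm commands work against the charts named above. For example, to list the available values of the controller chart:

helm show values kubewarden/kubewarden-controller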

Main components

Kubewarden has two main components which you will interact with:

  • The PolicyServer
  • The ClusterAdmissionPolicy

Policy Server

A Kubewarden Policy Server is completely managed by the kubewarden-controller and multiple Policy Servers can be deployed in the same Kubernetes cluster.

The Policy Server is the component that executes the Kubewarden policies to validate incoming requests.

An example PolicyServer configuration:

apiVersion: policies.kubewarden.io/v1alpha2
kind: PolicyServer
metadata:
  name: reserved-instance-for-tenant-a
spec:
  image: ghcr.io/kubewarden/policy-server:v1.0.0
  replicas: 2
  serviceAccountName: sa
  env:
  - name: KUBEWARDEN_LOG_LEVEL
    value: debug

Overview of the attributes of the PolicyServer resource:

  • image (required): The name of the container image
  • replicas (required): The number of desired instances
  • serviceAccountName (optional): The name of the ServiceAccount to use for the PolicyServer Deployment. If no value is provided, the default ServiceAccount of the namespace where the kubewarden-controller is installed will be used
  • env (optional): The list of environment variables
  • annotations (optional): The list of annotations

Changing any of these attributes will lead to a rollout of the PolicyServer Deployment with the new configuration.
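
Because these are ordinary fields of a Custom Resource, such a rollout can be triggered with a plain kubectl patch. A minimal sketch, assuming the PolicyServer from the example above (the replica count is illustrative):

kubectl patch policyserver reserved-instance-for-tenant-a \
  --type=merge -p '{"spec":{"replicas":3}}'

# watch the policy-server Pods being replaced
kubectl get pods -n kubewarden -w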

ClusterAdmissionPolicy

The ClusterAdmissionPolicy resource is the core of the Kubewarden stack. This resource defines how policies evaluate requests.

Enforcing policies is the most common operation which a Kubernetes administrator will perform. You can declare as many policies as you want, and each policy will target one or more specific Kubernetes resources (e.g., Pods, Custom Resources). You will also specify the type of operation(s) that will be applied to the targeted resource(s). The available operations are CREATE, UPDATE, DELETE and CONNECT.

An example ClusterAdmissionPolicy configuration:

apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: psp-capabilities
spec:
  policyServer: reserved-instance-for-tenant-a
  module: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: true
  settings:
    allowed_capabilities:
    - CHOWN
    required_drop_capabilities:
    - NET_ADMIN

Overview of the attributes of the ClusterAdmissionPolicy resource:

  • policyServer (optional): Identifies an existing PolicyServer object. The policy will be served only by this PolicyServer instance. A ClusterAdmissionPolicy that doesn't have an explicit PolicyServer will be served by the one named default
  • module (required): The location of the Kubewarden policy. The following schemes are allowed:
    • registry: The policy is downloaded from an OCI artifacts compliant container registry. Example: registry://<OCI registry/policy URL>
    • http, https: The policy is downloaded from a regular HTTP(s) server. Example: https://<website/policy URL>
    • file: The policy is loaded from a file in the computer filesystem. Example: file:///<policy WASM binary full path>
  • resources (required): The Kubernetes resources evaluated by the policy
  • operations (required): What operations for the previously given types should be forwarded to this admission policy by the API server for evaluation
  • mutating (required): A boolean value that must be set to true for policies that can mutate incoming requests
  • settings (optional): A free-form object that contains the policy configuration values
  • failurePolicy (optional): The action to take if the request evaluated by a policy results in an error. The following options are allowed:
    • Ignore: an error calling the webhook is ignored and the API request is allowed to continue
    • Fail: an error calling the webhook causes the admission to fail and the API request to be rejected
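
For instance, a ClusterAdmissionPolicy that tolerates evaluation errors instead of blocking API requests could set failurePolicy explicitly. A minimal sketch, reusing the example above:

apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: psp-capabilities
spec:
  module: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3
  failurePolicy: Ignore
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: true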

The complete documentation of this Custom Resource can be found here or on docs.crds.dev.

NOTE: The ClusterAdmissionPolicy resources are registered with a * webhook scope, which means that the registered webhooks will forward all requests matching the given resources and operations, whether they target namespaced resources (in any namespace) or cluster-wide resources.

NOTE: The ClusterAdmissionPolicy resource is cluster-wide. There are plans to also provide a namespaced version that will only impact registered namespaced resources on its own namespace.

Example: Enforce your first policy

For this first example, we will use the pod-privileged policy. Our goal will be to prevent the creation of privileged containers inside our Kubernetes cluster by enforcing this policy.

Let's define a ClusterAdmissionPolicy for that:

kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false
EOF

This will produce the following output:

clusteradmissionpolicy.policies.kubewarden.io/privileged-pods created

When a ClusterAdmissionPolicy is defined, the status is set to pending, and it will force a rollout of the targeted PolicyServer. In our example, it's the PolicyServer named default. You can monitor the rollout by running the following command:

kubectl get clusteradmissionpolicy.policies.kubewarden.io/privileged-pods

You should see the following output:

NAME              POLICY SERVER   MUTATING   STATUS
privileged-pods   default         false      pending

Once the new policy is ready to be served, the kubewarden-controller will register a ValidatingWebhookConfiguration object.

The ClusterAdmissionPolicy status will be set to active once the Deployment is done for every PolicyServer instance. The ValidatingWebhookConfiguration can be shown with the following command:

kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io -l kubewarden

You should see the following output:

NAME              WEBHOOKS   AGE
privileged-pods   1          9s

Once the ClusterAdmissionPolicy is active and the ValidatingWebhookConfiguration is registered, you can test the policy.

First, let's create a Pod with a Container not in privileged mode:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
EOF

This will produce the following output:

pod/unprivileged-pod created

The Pod is successfully created.

Now, let's create a Pod with at least one container that has the privileged flag set:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
          privileged: true
EOF

The creation of the Pod has been denied by the policy and you should see the following message:

Error from server: error when creating "STDIN": admission webhook "privileged-pods.kubewarden.admission" denied the request: User 'minikube-user' cannot schedule privileged containers

NOTE: neither example defined a namespace, which means the Pods were targeted at the default namespace. However, as you saw in the second example, the policy is still applied. As stated above, this is because the webhook scope is cluster-wide rather than targeting a specific namespace.

Uninstall

You can remove the resources created by Kubewarden by uninstalling the helm charts as follows:

helm uninstall --namespace kubewarden kubewarden-controller

helm uninstall --namespace kubewarden kubewarden-crds

Once the helm charts have been uninstalled, you can remove the Kubernetes namespace that was used to deploy the Kubewarden stack:

kubectl delete namespace kubewarden

Note: Kubewarden ships a helm pre-delete hook that removes all PolicyServers and the kubewarden-controller. The kubewarden-controller then deletes all the resources it created, so it is important that the kubewarden-controller is running when helm uninstall is executed.

The ValidatingWebhookConfigurations and MutatingWebhookConfigurations created by Kubewarden should then be deleted. This can be verified with:

kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io -l "kubewarden"

kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io -l "kubewarden"

If these resources are not automatically removed, you can remove them manually by using the following command:

kubectl delete -l "kubewarden" validatingwebhookconfigurations.admissionregistration.k8s.io

kubectl delete -l "kubewarden" mutatingwebhookconfigurations.admissionregistration.k8s.io

Wrapping up

As we have seen, the ClusterAdmissionPolicy resource is the core type that a cluster operator has to manage; the rest of the resources needed to run and configure the policies are taken care of automatically by the kubewarden-controller.

Now you are ready to deploy Kubewarden. You can have a look at the policies on hub.kubewarden.io, on GitHub, or reuse existing Rego policies as shown in the following chapters!

Common Tasks

This page lists a set of tasks that can be performed after you install Kubewarden in your Kubernetes cluster.

Each task can be done separately; however, if you're not familiar with Kubewarden, or Kubernetes policies in general, we recommend that you follow the tasks below in sequential order.

Test Policies

Kubewarden has two main tools to help you find policies and test them locally:

Kubewarden Policy Hub

The Kubewarden Policy Hub hosts policies contributed by the community. For example, you can find substitutes for the deprecated Kubernetes Pod Security Policies, created by the Kubewarden developers.

As shown in the picture below, once you find the policy to be tested, you can copy the registry path or download the Wasm binary containing the policy and additional metadata:

Kubewarden Policy Hub

Once you have the policy Wasm binary or the registry path, you can test it with kwctl.

kwctl CLI tool

kwctl is a Command Line Interface (CLI) tool that will allow both the policy authors and the cluster administrators to test policies before they are applied to the Kubernetes cluster.

The user experience (UX) of this tool is intended to be as easy and intuitive as the docker CLI tool.

Use cases

Depending on your role, kwctl will help you in the following non-exhaustive scenarios:

As a policy author

  • End-to-end testing of your policy: Test your policy against crafted Kubernetes requests and ensure your policy behaves as you expect. You can even test context-aware policies that require access to a running cluster.
  • Embed metadata in your Wasm module: The binary contains annotations describing the permissions it needs to be executed
  • Publish policies to OCI registries: The binary is a fully compliant OCI artifact and can be stored in OCI registries (see the sketch after this list)
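
The annotate and push subcommands shown in the kwctl help further below cover those last two scenarios. A rough sketch of that flow (the metadata file, policy name and registry are placeholders, and the exact flags may differ between kwctl versions; check kwctl annotate --help):

kwctl annotate my-policy.wasm \
  --metadata-path metadata.yml \
  --output-path annotated-policy.wasm

kwctl push annotated-policy.wasm \
  registry://ghcr.io/<your org>/policies/my-policy:v0.1.0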

As a cluster administrator

  • Inspect remote policies: Given a policy in an OCI registry or in an HTTP server, show all static information about the policy.
  • Dry-run of a policy in your cluster: Test the policy against crafted Kubernetes requests and ensure the policy behaves as you expect given the input data you provide. You can even test context-aware policies that require access to a running cluster, also in a dry-run mode.
  • Generate initial ClusterAdmissionPolicy scaffolding for your policy: Generate a YAML file with all the required settings, which can be applied to your Kubernetes cluster using kubectl.

Installation

kwctl binaries for the stable releases are directly available from the GitHub repository.

NOTE: If you want to build kwctl from the development branch, you need to install Rust. To build kwctl, refer to the Build kwctl from source section in the GitHub repo.

Usage

As stated above, kwctl will allow you to perform an end-to-end testing of the policies.

You can list all the kwctl options and subcommands by running the following command:

$ kwctl --help
kwctl 0.2.4
Flavio Castelli <fcastelli@suse.com>:Rafael Fernández López <rfernandezlopez@suse.com>
Tool to manage Kubewarden policies

USAGE:
    kwctl [FLAGS] <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information
    -v               Increase verbosity

SUBCOMMANDS:
    annotate       Adds Kubewarden metadata to a WebAssembly module
    completions    Generates shell completions
    help           Prints this message or the help of the given subcommand(s)
    inspect        Inspects Kubewarden policy
    manifest       Scaffolds a Kubernetes resource
    policies       Lists all downloaded policies
    pull           Pulls a Kubewarden policy from a given URI
    push           Pushes a Kubewarden policy to an OCI registry
    rm             Removes a Kubewarden policy from the store
    run            Runs a Kubewarden policy from a given URI
    verify         Verifies a Kubewarden policy from a given URI using Sigstore

Here are a few examples of the commands you should run, depending on the task you want to perform:

  • List the policies: lists all the policies stored in the local kwctl registry

    • Command: kwctl policies
  • Obtain the policy: download and store the policy inside the local kwctl store

    • Command: kwctl pull <policy URI>
    • Example:
    $ kwctl pull registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
    
    $ kwctl policies
    +--------------------------------------------------------------+----------+---------------+--------------+----------+
    | Policy                                                       | Mutating | Context aware | SHA-256      | Size     |
    +--------------------------------------------------------------+----------+---------------+--------------+----------+
    | registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9 | no       | no            | 59e34f482b40 | 21.86 kB |
    +--------------------------------------------------------------+----------+---------------+--------------+----------+
    
  • Understand how the policy works: inspect the policy metadata

    • Command: kwctl inspect <policy URI>
    • Example:
      $ kwctl inspect registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
      Details
      title:              pod-privileged
      description:        Limit the ability to create privileged containers
      author:             Flavio Castelli
      url:                https://github.com/kubewarden/pod-privileged-policy
      source:             https://github.com/kubewarden/pod-privileged-policy
      license:            Apache-2.0
      mutating:           false
      context aware:      false
      execution mode:     kubewarden-wapc
      protocol version:   1
      
      Annotations
      io.kubewarden.kwctl 0.1.9
      
      Rules
      ────────────────────
      ---
      - apiGroups:
          - ""
        apiVersions:
          - v1
        resources:
          - pods
        operations:
          - CREATE
      ────────────────────
      
      Usage
      This policy doesn't have a configuration. Once enforced, it will reject
      the creation of Pods that have at least a privileged container defined.
    
  • Evaluate the policy: Assess the policy and, if available, find the right configuration values to match your requirements.

    NOTE: Familiarity with Kubernetes REST APIs is a prerequisite.

    • Command: kwctl run -r <"Kubernetes Admission request" file path> -s <"JSON settings" file path> <policy URI> (a settings example is shown after the scenarios below)

    • Scenario 1:

      • Request to be evaluated: Create a pod with no 'privileged' container

      • Example:

        $ kwctl run registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9 -r unprivileged-pod-request.json
        {"uid":"C6E115F4-A789-49F8-B0C9-7F84C5961FDE","allowed":true,"status":{"message":""}}
        
        • Equivalent command with the policy binary downloaded:

          $ kwctl run file://$PWD/pod-privileged-policy.wasm -r unprivileged-pod-request.json
          {"uid":"C6E115F4-A789-49F8-B0C9-7F84C5961FDE","allowed":true,"status":{"message":""}}
          
      • Result: The policy allows the request

    • Scenario 2:

      • Request to be evaluated: Create a pod with at least one 'privileged' container

      • Command:

        kwctl run registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9 -r privileged-pod-request.json
        
        • Equivalent command with the policy binary downloaded: kwctl run file://$PWD/pod-privileged-policy.wasm -r privileged-pod-request.json
      • Output:

        {"uid":"8EE6AF8C-C8C8-45B0-9A86-CB52A70EC50D","allowed":false,"status":{"message":"User 'kubernetes-admin' cannot schedule privileged containers"}}
        
      • Result: The policy denies the request

      NOTE: If you want to see a more complex example, you can read the Kubewarden blog post Introducing kwctl to Kubernetes Administrators.
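
To pass configuration values during such a dry-run, use the -s flag shown in the command syntax above. A minimal sketch, assuming the privileged-pod-request.json used in Scenario 2 and a hypothetical settings.json holding the psp-capabilities configuration seen earlier:

$ cat settings.json
{
  "allowed_capabilities": ["CHOWN"],
  "required_drop_capabilities": ["NET_ADMIN"]
}

$ kwctl run registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3 \
    -r privileged-pod-request.json \
    -s settings.json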

Enforce Policies

As described in the Quick Start, you can enforce a policy by defining a ClusterAdmissionPolicy and then deploying it to your cluster using kubectl.

kwctl helps you generate a ClusterAdmissionPolicy from the policy you want to enforce. Once it is generated and applied to your Kubernetes cluster, the policy is enforced just like in the Quick Start. The steps are:

  • Generate the ClusterAdmissionPolicy from the policy manifest and save it to a file

    • Command: kwctl manifest -t ClusterAdmissionPolicy <policy URI> > <"policy name".yaml>
    • Example:
    $ kwctl manifest -t ClusterAdmissionPolicy registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
    ---
    apiVersion: policies.kubewarden.io/v1alpha2
    kind: ClusterAdmissionPolicy
    metadata:
      name: privileged-pods
    spec:
      module: "registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9"
      settings: {}
      rules:
        - apiGroups:
            - ""
          apiVersions:
            - v1
          resources:
            - pods
          operations:
            - CREATE
      mutating: false
    

    TIP: By default, the name value is set to generated-policy. You might want to edit it before you deploy the ClusterAdmissionPolicy.

    NOTE: To avoid confusion, the value above has been set to privileged-pods.

  • Deploy the ClusterAdmissionPolicy to your Kubernetes cluster

    • Command: kubectl apply -f <"policy name".yaml>
    • Example:
    $ kubectl apply -f pod-privileged-policy.yaml
    clusteradmissionpolicy.policies.kubewarden.io/privileged-pods created
    

Once the ClusterAdmissionPolicy is deployed, the requests sent to your Kubernetes cluster will be evaluated by the policy if they're within the policy scope.

Next steps

Write Policies

The Writing Policies section explains how to write policies in different languages and how to compile them to WebAssembly so that they can be executed by Kubewarden.

Distribute Policies

The Distributing Policies section explains how to publish policies to OCI registries.

Architecture

Kubewarden is a Kubernetes policy engine that uses policies written in WebAssembly.

The Kubewarden stack is made of the following components:

  • Kubewarden Custom Resources: these are Kubernetes Custom Resources that simplify the process of managing policies.

  • kubewarden-controller: this is a Kubernetes controller that reconciles Kubewarden's Custom Resources. This component creates parts of the Kubewarden stack and, most important of all, translates Kubewarden's concepts into native Kubernetes directives.

  • Kubewarden policies: these are WebAssembly modules that hold the validation or mutation logic. These are covered in depth inside of this chapter.

  • policy-server: this component receives the requests to be validated. It does that by executing Kubewarden's policies.

At the bottom of the stack, Kubewarden integrates with Kubernetes using the concept of Dynamic Admission Control. In particular, Kubewarden operates as a Kubernetes Admission Webhook. policy-server is the actual Webhook endpoint that is reached by the Kubernetes API server to validate relevant requests.

Kubernetes is made aware of the existence of Kubewarden's Webhook endpoints by kubewarden-controller. This is done by registering either a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object.

This diagram shows the full architecture overview of a cluster running the Kubewarden stack:

Full architecture

Journey of a Kubewarden policy

The architecture diagram above can be intimidating at first; this section explains it step by step.

Default Policy Server

On a fresh cluster, the Kubewarden components present are its Custom Resource Definitions, the kubewarden-controller Deployment and a PolicyServer Custom Resource named default.

Defining the first ClusterAdmissionPolicy resource

kubewarden-controller notices the default PolicyServer resource and, as a result of that, it creates a Deployment of the policy-server component.

As stated above, Kubewarden works as a Kubernetes Admission Webhook. Kubernetes dictates that all the Webhook endpoints must be secured with TLS. kubewarden-controller takes care of setting up this secure communication by doing these steps:

  1. Generate a self-signed Certificate Authority
  2. Use this CA to generate a TLS certificate and a TLS key for the policy-server Service.

All these objects are stored in Kubernetes as Secret resources.
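
You can list these generated Secrets in the namespace where the Kubewarden stack runs (the Secret names vary depending on the PolicyServer):

kubectl get secrets -n kubewarden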

Finally, kubewarden-controller will create the policy-server Deployment and a Kubernetes ClusterIP Service to expose it inside of the cluster network.

Defining the first policy

This chart shows what happens when the first policy bound to the default policy-server is defined inside of the cluster:

Defining the first ClusterAdmissionPolicy resource

kubewarden-controller notices the new ClusterAdmissionPolicy resource and, as a result, finds the PolicyServer it is bound to and reconciles it.

Reconciliation of policy-server

When a ClusterAdmissionPolicy is created, modified or deleted, a reconciliation loop for the PolicyServer that owns the policy is triggered inside the kubewarden-controller. In this reconciliation loop, a ConfigMap with all the policies bound to the PolicyServer is created. Then a Deployment rollout of the affected policy-server is started. As a result, the new policy-server instance will be started with the updated configuration.

At start time, policy-server reads its configuration and downloads all the Kubewarden policies. Policies can be downloaded from remote endpoints like HTTP(s) servers and container registries.

Policies' behaviour can be tuned by users via policy-specific configuration parameters. Once all the policies are downloaded, policy-server will ensure the policy settings provided by the user are valid.

policy-server performs the validation of each policy's settings by invoking the validate_settings function exposed by the policy. This topic is covered more in depth inside of this section of the documentation.

policy-server will exit with an error if one or more policies received wrong configuration parameters from the end user.

If all the policies are properly configured, policy-server will spawn a pool of worker threads to evaluate incoming requests using the Kubewarden policies specified by the user.

Finally, policy-server will start an HTTPS server that listens to incoming validation requests. The web server is secured using the TLS key and certificate that have been previously created by kubewarden-controller.

Each policy is exposed by the web server via a dedicated path that follows this naming convention: /validate/<policy ID>.

This is how the cluster looks once the initialization of policy-server is completed:

policy-server initialized

Making Kubernetes aware of the policy

The policy-server Pods have a Readiness Probe; kubewarden-controller relies on it to know when the policy-server Deployment is ready to evaluate admission reviews.

Once the policy-server Deployment is marked as Ready, kubewarden-controller will make the Kubernetes API server aware of the new policy by creating either a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object.

Each policy has its dedicated MutatingWebhookConfiguration/ValidatingWebhookConfiguration which points to the Webhook endpoint served by policy-server. The endpoint is reachable by the /validate/<policy ID> URL mentioned before.
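
You can verify this wiring from the Kubernetes side: the webhook configuration created for a policy embeds the policy-server Service and the per-policy path in its clientConfig. For example, for the privileged-pods policy from the Quick Start (the exact output depends on your cluster):

kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io privileged-pods \
  -o jsonpath='{.webhooks[0].clientConfig.service}'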

Kubernetes Webhook endpoint configuration

Policy in action

Now that all the plumbing has been done, Kubernetes will start sending the relevant Admission Review requests to the right policy-server endpoint.

Policy in action

policy-server receives the Admission Request object and, based on the endpoint that received the request, uses the right policy to evaluate it.

Each policy is evaluated inside of its own dedicated WebAssembly sandbox. The communication between policy-server (the "host") and the WebAssembly policy (the "guest") is done using the waPC communication protocol. This is covered in depth inside of this section of the documentation.

How multiple policy servers and policies are handled

A cluster can have multiple policy servers and Kubewarden policies defined.

Benefits of having multiple policy servers:

  • Noisy Namespaces/Tenants generating lots of policy evaluations can be isolated from the rest of the cluster, so they do not affect other users.
  • Mission-critical policies can be run inside of a Policy Server "pool", making your whole infrastructure more resilient.

Each policy-server is defined via its own PolicyServer resource and each policy is defined via its own ClusterAdmissionPolicy resource.

This leads back to the initial diagram:

Full architecture

A ClusterAdmissionPolicy is bound to a PolicyServer. ClusterAdmissionPolicies that don't specify any PolicyServer will be bound to the PolicyServer named default. If a ClusterAdmissionPolicy references a PolicyServer that doesn't exist, it will be in an unschedulable state.

Each policy-server defines multiple validation endpoints, one per policy defined inside of its configuration file. It's also possible to load the same policy multiple times, just with different configuration parameters.
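
A minimal sketch of that scenario: the same policy module enforced twice, bound to different Policy Servers and tuned with different settings (resource names and settings values are illustrative):

apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: psp-capabilities-tenant-a
spec:
  policyServer: reserved-instance-for-tenant-a
  module: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE", "UPDATE"]
  mutating: true
  settings:
    allowed_capabilities: ["CHOWN"]
---
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: psp-capabilities-default
spec:
  # no policyServer field: served by the PolicyServer named "default"
  module: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE", "UPDATE"]
  mutating: true
  settings:
    required_drop_capabilities: ["NET_ADMIN"]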

The Kubernetes API server is made aware of these policies via the ValidatingWebhookConfiguration and MutatingWebhookConfiguration resources that are kept in sync by kubewarden-controller.

Finally, the incoming admission requests are then dispatched by the Kubernetes API server to the right validation endpoint exposed by policy-server.

What is a Kubewarden policy

In this section we will explain what Kubewarden policies are by using some traditional computing analogies.

A Kubewarden policy can be seen as a regular program that does one job: it receives input data, performs some computation against it and finally returns a response.

The input data are Kubernetes admission requests, and the result of the computation is a validation response: something that tells Kubernetes whether to accept, reject or mutate the original input data.

All these operations are performed by a component of Kubewarden that is called policy-server.

The policy server doesn't bundle any data processing capability. All these capabilities are added at runtime via add-ons: the Kubewarden policies.

As a consequence, a Kubewarden policy can be seen as a traditional plug-in of the "policy server" program.

To recap:

  • Kubewarden policies are plug-ins that expose a set of well-defined functionalities (validate a Kubernetes request object, validate policy settings provided by the user,...) using a well-defined API
  • Policy server is the "main" program that loads the plug-ins (aka policies) and leverages their exposed functionalities to validate or mutate Kubernetes requests

Writing Kubewarden policies consists of writing the validation business logic and then exposing it through a well-defined API.

Programming language requirements

Kubewarden policies are delivered as WebAssembly binaries.

Policy authors can write policies using any programming language that supports WebAssembly as a compilation target. The list of supported languages is constantly evolving; this page provides a nice overview of the WebAssembly landscape.

Currently WebAssembly doesn't have an official way to share complex data types between the host and a WebAssembly guest. To overcome this limitation Kubewarden policies leverage the waPC project, which provides a bi-directional communication channel.

Because of that, your programming language of choice must provide a waPC guest SDK. If that's not the case, feel free to reach out; we can help you overcome this limitation.

Policy communication specification

The policy evaluator interacts with Kubewarden policies using a well defined API. The purpose of this section is to document the API used by the host (be it policy-server or kwctl) to communicate with Kubewarden's policies.

Note well: this section of the documentation is a bit low level; you can jump straight to one of the "language focused" chapters and come back to this chapter later.

Policy settings

Policy behaviour is not set in stone: it can be configured by providing configuration details to the policy at runtime. The policy author has full freedom to define the structure of the policy settings.

Kubewarden takes care of serializing the policy settings into JSON and providing them to the policy every time it is invoked.

Settings validation

Some policies might want to validate the settings a user provides to ensure they are correct.

Each policy must register a waPC function called validate_settings that takes care of validating the policy settings.

The validate_settings function receives as input a JSON representation of the settings provided by the user. The function validates them and returns as a response a SettingsValidationResponse object.

The structure of the SettingsValidationResponse object is the following one:

{
  // mandatory
  "valid": <boolean>,

  // optional, ignored if valid - recommended for rejections
  "message": <string>
}

If the user-provided settings are valid, the contents of message are ignored. Otherwise the contents of message are shown to the user.

Note well: Kubewarden's policy-server validates all the policy settings provided by users at start time. The policy-server exits immediately with an error if at least one of its policies received wrong configuration parameters.

Example

Let's take as an example the psp-capabilities policy which has the following configuration format:

allowed_capabilities:
- CHOWN

required_drop_capabilities:
- NET_ADMIN

default_add_capabilities:
- KILL

The validate_settings function will receive as input the following JSON document:

{
  "allowed_capabilities": [
    "CHOWN"
  ],
  "required_drop_capabilities": [
    "NET_ADMIN"
  ],
  "default_add_capabilities": [
    "KILL"
  ]
}
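
To make the contract concrete, here is a minimal sketch of what a validate_settings implementation could do with that JSON document, written in Rust with serde. The types and helper names are illustrative, not the Kubewarden SDK's; a real policy would register this function through its waPC guest SDK:

use serde::Deserialize;
use serde_json::json;

// Illustrative settings structure matching the psp-capabilities example above.
#[derive(Deserialize, Default)]
#[serde(default)]
struct Settings {
    allowed_capabilities: Vec<String>,
    required_drop_capabilities: Vec<String>,
    default_add_capabilities: Vec<String>,
}

// Receives the raw JSON settings and returns a SettingsValidationResponse-shaped
// JSON document, as described in this section.
fn validate_settings(payload: &[u8]) -> Vec<u8> {
    match serde_json::from_slice::<Settings>(payload) {
        Ok(_) => serde_json::to_vec(&json!({ "valid": true })).unwrap(),
        Err(e) => serde_json::to_vec(&json!({
            "valid": false,
            "message": format!("invalid settings: {}", e)
        }))
        .unwrap(),
    }
}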

Recap

Each policy must register a waPC function called validate_settings that has the following API:

waPC function name: validate_settings

Input payload:

{
  // your policy configuration
}

Output payload:

{
  // mandatory
  "valid": <boolean>,

  // optional, ignored if valid - recommended for rejections
  "message": <string>
}

Validating policies

The Kubewarden policy server receives AdmissionReview objects from the Kubernetes API server. It then forwards the value of its request attribute (an AdmissionRequest object) to the policy to be evaluated.

The policy has to evaluate the request and state whether it should be accepted or not. When the request is rejected, the policy might provide an explanation message and a specific error code that will be shown to the end user.

By convention of the policy-server project, the guest has to expose a function named validate, exposed through the waPC guest SDK, so that the policy-server (waPC host) can invoke it.

The validate function receives a ValidationRequest object serialized as JSON and returns a ValidationResponse object serialized as JSON.

The ValidationRequest object

The ValidationRequest is a simple JSON object that is received by the validate function. It looks like this:

{
  "request": <AdmissionReview.request data>,
  "settings": {
     // your policy configuration
  }
}

The settings key points to a free-form JSON document that can hold the policy specific settings. The previous chapter focused on policies and settings.

A concrete example

Given the following Kubernetes AdmissionReview:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    # Random uid uniquely identifying this admission call
    "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",

    # Fully-qualified group/version/kind of the incoming object
    "kind": {"group":"autoscaling","version":"v1","kind":"Scale"},
    # Fully-qualified group/version/kind of the resource being modified
    "resource": {"group":"apps","version":"v1","resource":"deployments"},
    # subresource, if the request is to a subresource
    "subResource": "scale",

    # Fully-qualified group/version/kind of the incoming object in the original request to the API server.
    # This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the
    # original request to the API server was converted to a version the webhook registered for.
    "requestKind": {"group":"autoscaling","version":"v1","kind":"Scale"},
    # Fully-qualified group/version/kind of the resource being modified in the original request to the API server.
    # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the
    # original request to the API server was converted to a version the webhook registered for.
    "requestResource": {"group":"apps","version":"v1","resource":"deployments"},
    # subresource, if the request is to a subresource
    # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the
    # original request to the API server was converted to a version the webhook registered for.
    "requestSubResource": "scale",

    # Name of the resource being modified
    "name": "my-deployment",
    # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)
    "namespace": "my-namespace",

    # operation can be CREATE, UPDATE, DELETE, or CONNECT
    "operation": "UPDATE",

    "userInfo": {
      # Username of the authenticated user making the request to the API server
      "username": "admin",
      # UID of the authenticated user making the request to the API server
      "uid": "014fbff9a07c",
      # Group memberships of the authenticated user making the request to the API server
      "groups": ["system:authenticated","my-admin-group"],
      # Arbitrary extra info associated with the user making the request to the API server.
      # This is populated by the API server authentication layer and should be included
      # if any SubjectAccessReview checks are performed by the webhook.
      "extra": {
        "some-key":["some-value1", "some-value2"]
      }
    },

    # object is the new object being admitted.
    # It is null for DELETE operations.
    "object": {"apiVersion":"autoscaling/v1","kind":"Scale",...},
    # oldObject is the existing object.
    # It is null for CREATE and CONNECT operations.
    "oldObject": {"apiVersion":"autoscaling/v1","kind":"Scale",...},
    # options contains the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions, UpdateOptions, or DeleteOptions.
    # It is null for CONNECT operations.
    "options": {"apiVersion":"meta.k8s.io/v1","kind":"UpdateOptions",...},

    # dryRun indicates the API request is running in dry run mode and will not be persisted.
    # Webhooks with side effects should avoid actuating those side effects when dryRun is true.
    # See http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request for more details.
    "dryRun": false
  }
}

The ValidationRequest object would look like this:

{
  "request": {
    # Random uid uniquely identifying this admission call
    "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",

    # Fully-qualified group/version/kind of the incoming object
    "kind": {"group":"autoscaling","version":"v1","kind":"Scale"},
    # Fully-qualified group/version/kind of the resource being modified
    "resource": {"group":"apps","version":"v1","resource":"deployments"},
    # subresource, if the request is to a subresource
    "subResource": "scale",

    # Fully-qualified group/version/kind of the incoming object in the original request to the API server.
    # This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the
    # original request to the API server was converted to a version the webhook registered for.
    "requestKind": {"group":"autoscaling","version":"v1","kind":"Scale"},
    # Fully-qualified group/version/kind of the resource being modified in the original request to the API server.
    # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the
    # original request to the API server was converted to a version the webhook registered for.
    "requestResource": {"group":"apps","version":"v1","resource":"deployments"},
    # subresource, if the request is to a subresource
    # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the
    # original request to the API server was converted to a version the webhook registered for.
    "requestSubResource": "scale",

    # Name of the resource being modified
    "name": "my-deployment",
    # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)
    "namespace": "my-namespace",

    # operation can be CREATE, UPDATE, DELETE, or CONNECT
    "operation": "UPDATE",

    "userInfo": {
      # Username of the authenticated user making the request to the API server
      "username": "admin",
      # UID of the authenticated user making the request to the API server
      "uid": "014fbff9a07c",
      # Group memberships of the authenticated user making the request to the API server
      "groups": ["system:authenticated","my-admin-group"],
      # Arbitrary extra info associated with the user making the request to the API server.
      # This is populated by the API server authentication layer and should be included
      # if any SubjectAccessReview checks are performed by the webhook.
      "extra": {
        "some-key":["some-value1", "some-value2"]
      }
    },

    # object is the new object being admitted.
    # It is null for DELETE operations.
    "object": {"apiVersion":"autoscaling/v1","kind":"Scale",...},
    # oldObject is the existing object.
    # It is null for CREATE and CONNECT operations.
    "oldObject": {"apiVersion":"autoscaling/v1","kind":"Scale",...},
    # options contains the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions, UpdateOptions, or DeleteOptions.
    # It is null for CONNECT operations.
    "options": {"apiVersion":"meta.k8s.io/v1","kind":"UpdateOptions",...},

    # dryRun indicates the API request is running in dry run mode and will not be persisted.
    # Webhooks with side effects should avoid actuating those side effects when dryRun is true.
    # See http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request for more details.
    "dryRun": false
  },
  "settings": {
    # policy settings
  }
}

The ValidationResponse object

The validate function returns the outcome of its validation using a ValidationResponse object.

The ValidationResponse is structured in the following way:

{
  // mandatory
  "accepted": <boolean>,

  // optional, ignored if accepted - recommended for rejections
  "message": <string>,

  // optional, ignored if accepted
  "code": <integer>,

  // optional, used by mutation policies
  "mutated_object": <string>
}

The message and code attributes can be specified when the request is not accepted. message is a free form textual error. code represents an HTTP error code.

If the request is accepted, message and code values will be ignored by the Kubernetes API server if they are present.

If message or code are provided and the request is not accepted, the Kubernetes API server will forward this information as part of the body of the error returned to the client that issued the rejected request.
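
For example, a policy rejecting a request might return a document like this (the message and code values are illustrative):

{
  "accepted": false,
  "message": "privileged containers are not allowed",
  "code": 400
}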

The mutated_object is an optional field used only by mutating policies. This is going to be covered inside of the next chapter.

Recap

These are the functions a validating policy must implement:

waPC function name: validate

Input payload:

{
  "request": {
    // AdmissionReview.request data
  },
  "settings": {
    // your policy configuration
  }
}

Output payload:

{
  // mandatory
  "accepted": <boolean>,

  // optional, ignored if accepted - recommended for rejections
  "message": <string>,

  // optional, ignored if accepted
  "code": <integer>
}

waPC function name: validate_settings

Input payload:

{
  // your policy configuration
}

Output payload:

{
  // mandatory
  "valid": <boolean>,

  // optional, ignored if valid - recommended for rejections
  "message": <string>
}

Mutating policies

Mutating policies are structured in the very same way as validating ones:

  • They have to register validate and validate_settings waPC functions
  • The communication API used between the host and the policy is the same as the one used by validating policies.

Mutating policies can accept a request and propose a mutation of the incoming object by returning a ValidationResponse object that looks like this:

{
  "accepted": true,
  "mutated_object": <object to be created>
}

The mutated_object field contains the object the policy wants to be created inside of the Kubernetes cluster serialized to JSON.

A concrete example

Let's assume the policy receives the following ValidationRequest:

{
  "settings": {},
  "request": {
    "operation": "CREATE",
    "object": {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "name": "security-context-demo-4"
      },
      "spec": {
        "containers": [
        {
          "name": "sec-ctx-4",
          "image": "gcr.io/google-samples/node-hello:1.0",
          "securityContext": {
            "capabilities": {
              "add": ["NET_ADMIN", "SYS_TIME"]
            }
          }
        }
        ]
      }
    }
  }
}

Note well: we left some irrelevant fields out of the request object.

This request is generated because someone tried to create a Pod that looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
  - name: sec-ctx-4
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - SYS_TIME

Let's assume our policy replies with the following ValidationResponse:

{
  "accepted": true,
  "mutated_object": {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
      "name": "security-context-demo-4"
    },
    "spec": {
      "containers": [
        {
          "name": "sec-ctx-4",
          "image": "gcr.io/google-samples/node-hello:1.0",
          "securityContext": {
            "capabilities": {
              "add": [
                "NET_ADMIN",
                "SYS_TIME"
              ],
              "drop": [
                "BPF"
              ]
            }
          }
        }
      ]
    }
  }
}

That would lead to the request being accepted, but the final Pod would look like this:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
  - name: sec-ctx-4
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - SYS_TIME
        drop:
        - BPF

As you can see the policy altered the securityContext.capabilities.drop section of the only container declared inside of the Pod.

The container is now dropping the BPF capability thanks to our policy.

Recap

These are the functions a mutating policy must implement:

waPC function name: validate

Input payload:

{
  "request": {
    // AdmissionReview.request data
  },
  "settings": {
    // your policy configuration
  }
}

Output payload:

{
  // mandatory
  "accepted": <boolean>,

  // optional, ignored if accepted - recommended for rejections
  "message": <string>,

  // optional, ignored if accepted
  "code": <integer>,

  // JSON object to be created; can be used only when the request is accepted
  "mutated_object": <object>
}

waPC function name: validate_settings

Input payload:

{
  // your policy configuration
}

Output payload:

{
  // mandatory
  "valid": <boolean>,

  // optional, ignored if valid - recommended for rejections
  "message": <string>
}

Context aware policies

NOTE: This feature is a work in progress, and not to be depended upon. Features described here are incomplete and subject to change at any time until the feature stabilizes.

Feedback is highly appreciated.

The policy-server is able to expose cluster information to policies, so that they can make decisions based on other resources that exist in the cluster, and not only on the resource being evaluated in isolation.

The policy-server, being a Deployment, runs in the Kubernetes cluster with a specific Service Account that is able to list and watch a subset of Kubernetes resources, namely:

  • Namespaces
  • Services
  • Ingresses

This information is exposed to policies (waPC guests) through a well-known set of procedure calls that allow them to retrieve this cached information.

The result of these procedure calls is the JSON-encoded list of resources, as provided by Kubernetes.

The policy-server is the component responsible for refreshing this information when it deems it necessary; the policy always retrieves the latest information exposed by the policy-server.

Accessing the cluster context

Language SDKs that support the cluster context at this time expose functions that allow policies to retrieve the current state of the cluster.

The workflow we have seen until now has been that the policy exposes well known waPC functions, namely: validate and validate_settings. At some point, the host will call these functions when it requires either to validate a request, or to validate the settings that were provided to it for the given policy.

In this case, after the host calls the validate waPC function on the guest, the guest is able to retrieve cluster information from the host in order to produce its response.

This guest-host intercommunication is performed using the regular waPC host-calling mechanism, so any guest implementing the waPC protocol is able to request this information from the host.

waPC has the following function arguments when performing a call from the guest to the host:

  • Binding
  • Namespace
  • Operation
  • Payload

By contract, or convention, policies can retrieve the Kubernetes cluster information by calling the host in the following ways:

Binding      Namespace    Operation   Input payload   Output payload (JSON format)
kubernetes   ingresses    list        N/A             Result of GET /apis/networking.k8s.io/v1/ingresses
kubernetes   namespaces   list        N/A             Result of GET /api/v1/namespaces
kubernetes   services     list        N/A             Result of GET /api/v1/services

The request the waPC guest performs to the host is local, and served from a cache -- this request does not get forwarded to the Kubernetes API server. The policy-server (host) decides when to refresh this information from the Kubernetes API server.
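
For a waPC guest written in Rust, such a host call could look like the following minimal sketch, using the wapc-guest crate (SDKs for other languages expose an equivalent mechanism):

use wapc_guest::{host_call, CallResult};

// Sketch: ask the host (policy-server) for its cached list of Namespaces,
// following the binding/namespace/operation convention from the table above.
// The returned bytes are the JSON-encoded list of resources.
fn list_namespaces() -> CallResult {
    host_call("kubernetes", "namespaces", "list", &[])
}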

NOTE: This is a proof-of-concept at this time, there are plans to generalize what resources can be fetched from the cluster, to include all built-in Kubernetes types, as well as custom resources.

Rust

Rust is the most mature programming language that can generate WebAssembly modules: WebAssembly is a first-class citizen in the Rust world. That means many of the tools and crates of the Rust ecosystem work out of the box.

Kubewarden provides a Rust SDK that simplifies the process of writing policies, plus a template project to quickly scaffold a policy project using the cargo-generate utility.

This document illustrates how to take advantage of these projects to write Kubewarden policies using the Rust programming language.

Note well, we won't cover the details of Kubewarden's Rust SDK inside of this page. These can be found inside of the official crate documentation.

Getting Rust dependencies

This section guides you through the process of installing the Rust compiler and its dependencies.

As a first step, install the Rust compiler and its tools; this can be easily done using rustup. Please follow rustup's install documentation.

Once rustup is installed add the Wasm target:

rustup target add wasm32-unknown-unknown

OSX specific dependencies

In order to use cargo-generate you will need to add the Xcode tool set. If it isn't installed through Xcode, the following command will install the needed dependencies:

xcode-select --install

Creating a new validation policy

We are going to create a simple validation policy that processes Pod creation requests.

The policy will look at the metadata.name attribute of the Pod and reject the pods that have an invalid name. We want the list of invalid names to be configurable by the end users of the policy.

To summarize, the policy settings will look like this:

invalid_names:
- bad_name1
- bad_name2

The policy will accept the creation of a Pod like the following one:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest

While it will reject the creation of a Pod like the following one:

apiVersion: v1
kind: Pod
metadata:
  name: bad_name1
spec:
  containers:
    - name: nginx
      image: nginx:latest

Scaffolding new policy project

The creation of a new policy project can be done by feeding this template project into cargo generate.

First, install cargo-generate. Note, this requires openssl-devel.

cargo install cargo-generate

Now scaffold the project as follows:

cargo generate --git https://github.com/kubewarden/rust-policy-template \
               --branch main \
               --name demo

The command will produce the following output:

🔧   Creating project called `demo`...
✨   Done! New project created /home/flavio/hacking/kubernetes/kubewarden/demo

The new policy project can now be found inside of the demo directory.

Note: if you plan to make use of the GitHub container registry functionality in the demo, you will need to enable improved container support.

Defining policy settings

As a first step we will define the structure that holds the policy settings.

Open the src/settings.rs file and change the definition of the Settings struct to look like this:

use serde::{Deserialize, Serialize};
use std::collections::HashSet;

#[derive(Deserialize, Default, Debug, Serialize)]
#[serde(default)]
pub(crate) struct Settings {
    pub invalid_names: HashSet<String>,
}

This will automatically put the list of invalid names inside of a Set collection.

Next we will write a settings validation function: we want to ensure the policy is always run with at least one invalid name.

This can be done by changing the implementation of the Validatable trait.

Change the scaffolded implementation defined inside of src/settings.rs to look like this:

impl kubewarden::settings::Validatable for Settings {
    fn validate(&self) -> Result<(), String> {
        if self.invalid_names.is_empty() {
            Err(String::from("No invalid name specified. Specify at least one invalid name to match"))
        } else {
            Ok(())
        }
    }
}

Add unit tests

Now we can write a unit test to ensure the settings validation is actually working. This can be done in the usual Rust way.

There are already some default tests at the bottom of the src/settings.rs file. Replace the automatically generated code so it looks like this:

#[cfg(test)]
mod tests {
    use super::*;

    use kubewarden_policy_sdk::settings::Validatable;

    #[test]
    fn accept_settings_with_a_list_of_invalid_names() -> Result<(), ()> {
        let mut invalid_names = HashSet::new();
        invalid_names.insert(String::from("bad_name1"));
        invalid_names.insert(String::from("bad_name2"));

        let settings = Settings { invalid_names };

        assert!(settings.validate().is_ok());
        Ok(())
    }

    #[test]
    fn reject_settings_without_a_list_of_invalid_names() -> Result<(), ()> {
        let invalid_names = HashSet::<String>::new();
        let settings = Settings { invalid_names };

        assert!(settings.validate().is_err());
        Ok(())
    }
}

We can now run the unit tests by doing:

cargo test

This will produce an output similar to the following one:

  Compiling demo v0.1.0 (/home/flavio/hacking/kubernetes/kubewarden/demo)
    Finished test [unoptimized + debuginfo] target(s) in 4.19s
     Running target/debug/deps/demo-24670dd6a538fd72

running 2 tests
test settings::tests::accept_settings_with_a_list_of_invalid_names ... ok
test settings::tests::reject_settings_without_a_list_of_invalid_names ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Writing the validation logic

It's time to write the actual validation code. This is defined inside of the src/lib.rs file. Inside of this file you will find a function called validate.

The scaffolded function is already doing something:

fn validate(payload: &[u8]) -> CallResult {
    // NOTE 1
    let validation_request: ValidationRequest<Settings> = ValidationRequest::new(payload)?;

    // NOTE 2
    match serde_json::from_value::<apicore::Pod>(validation_request.request.object) {
        Ok(pod) => {
            // NOTE 3
            if pod.metadata.name == Some("invalid-pod-name".to_string()) {
                kubewarden::reject_request(
                    Some(format!("pod name {:?} is not accepted", pod.metadata.name)),
                    None,
                )
            } else {
                kubewarden::accept_request()
            }
        }
        Err(_) => {
            // NOTE 4
            // We were forwarded a request we cannot unmarshal or
            // understand, just accept it
            kubewarden::accept_request()
        }
    }
}

This is a walk-through of the code shown above:

  1. Parse the incoming payload into a ValidationRequest<Settings> object. This automatically populates the Settings instance inside of ValidationRequest with the params provided by the user.
  2. Convert the raw JSON Kubernetes object embedded into the request into an instance of the Pod struct.
  3. The request contains a Pod object: the code approves only the requests that do not have metadata.name equal to the hard-coded value invalid-pod-name.
  4. The request doesn't contain a Pod object, hence the policy accepts the request.

As you can see the code is already doing a validation that resembles the one we want to implement. We just have to get rid of the hard-coded value and use the values provided by the user via the policy settings.

This can be done with the following code:

fn validate(payload: &[u8]) -> CallResult {
    let validation_request: ValidationRequest<Settings> = ValidationRequest::new(payload)?;

    match serde_json::from_value::<apicore::Pod>(validation_request.request.object) {
        Ok(pod) => {
            let pod_name = pod.metadata.name.unwrap_or_default();
            if validation_request
                .settings
                .invalid_names
                .contains(&pod_name)
            {
                kubewarden::reject_request(
                    Some(format!("pod name {:?} is not accepted", pod_name)),
                    None,
                )
            } else {
                kubewarden::accept_request()
            }
        }
        Err(_) => {
            // We were forwarded a request we cannot unmarshal or
            // understand, just accept it
            kubewarden::accept_request()
        }
    }
}

Unit tests

Finally, we will create some unit tests to ensure the validation code works as expected.

The lib.rs file already has some tests defined at the bottom; as you can see, Kubewarden's Rust SDK provides some test helpers too.

Moreover, the scaffolded project already ships with some default test fixtures inside of the test_data directory. We are going to take advantage of these recorded admission requests to write our unit tests.

Change the contents of the test section inside of src/lib.rs to look like this:

#[cfg(test)]
mod tests {
    use super::*;

    use kubewarden_policy_sdk::test::Testcase;
    use std::collections::HashSet;

    #[test]
    fn accept_pod_with_valid_name() -> Result<(), ()> {
        let mut invalid_names = HashSet::new();
        invalid_names.insert(String::from("bad_name1"));
        let settings = Settings { invalid_names };

        let request_file = "test_data/pod_creation.json";
        let tc = Testcase {
            name: String::from("Pod creation with valid name"),
            fixture_file: String::from(request_file),
            expected_validation_result: true,
            settings,
        };

        let res = tc.eval(validate).unwrap();
        assert!(
            res.mutated_object.is_none(),
            "Something mutated with test case: {}",
            tc.name,
        );

        Ok(())
    }

    #[test]
    fn reject_pod_with_invalid_name() -> Result<(), ()> {
        let mut invalid_names = HashSet::new();
        invalid_names.insert(String::from("nginx"));
        let settings = Settings { invalid_names };

        let request_file = "test_data/pod_creation.json";
        let tc = Testcase {
            name: String::from("Pod creation with invalid name"),
            fixture_file: String::from(request_file),
            expected_validation_result: false,
            settings,
        };

        let res = tc.eval(validate).unwrap();
        assert!(
            res.mutated_object.is_none(),
            "Something mutated with test case: {}",
            tc.name,
        );

        Ok(())
    }

    #[test]
    fn accept_request_with_non_pod_resource() -> Result<(), ()> {
        let mut invalid_names = HashSet::new();
        invalid_names.insert(String::from("prod"));
        let settings = Settings { invalid_names };

        let request_file = "test_data/ingress_creation.json";
        let tc = Testcase {
            name: String::from("Ingress creation"),
            fixture_file: String::from(request_file),
            expected_validation_result: true,
            settings,
        };

        let res = tc.eval(validate).unwrap();
        assert!(
            res.mutated_object.is_none(),
            "Something mutated with test case: {}",
            tc.name,
        );

        Ok(())
    }
}

We now have three unit tests defined inside of this file:

  • accept_pod_with_valid_name: ensures a Pod with a valid name is accepted
  • reject_pod_with_invalid_name: ensures a Pod with an invalid name is rejected
  • accept_request_with_non_pod_resource: ensures the policy accepts requests that do not have a Pod as their object

We can run the unit tests again:

$ cargo test
   Compiling demo v0.1.0 (/home/flavio/hacking/kubernetes/kubewarden/demo)
    Finished test [unoptimized + debuginfo] target(s) in 3.45s
     Running target/debug/deps/demo-24670dd6a538fd72

running 5 tests
test settings::tests::accept_settings_with_a_list_of_invalid_names ... ok
test settings::tests::reject_settings_without_a_list_of_invalid_names ... ok
test tests::accept_request_with_non_pod_resource ... ok
test tests::accept_pod_with_valid_name ... ok
test tests::reject_pod_with_invalid_name ... ok

test result: ok. 5 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

That's all if you want to write a simple validating policy.

Creating a new mutation policy

Mutating policies are similar to validating ones, but they also have the ability to mutate the incoming object.

They can:

  • Reject a request
  • Accept a request without doing any change to the incoming object
  • Mutate the incoming object as they like and accept the request

Writing a Kubewarden mutating policy is extremely simple. We will use the validating policy created in the previous steps and, with very few changes, turn it into a mutating one.

Our policy will use the same validation logic defined before, but it will also add an annotation to all the Pods that have a valid name.

Attempting to create a Pod like this one:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest

Will lead to the creation of this Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    kubewarden.policy.demo/inspected: "true"
spec:
  containers:
    - name: nginx
      image: nginx:latest

Write the mutation code

The mutation is performed inside of the validate function. The function has to be changed to approve the request via mutate_request instead of accept_request.

This is how the validate function should look:

fn validate(payload: &[u8]) -> CallResult {
    let validation_request: ValidationRequest<Settings> = ValidationRequest::new(payload)?;

    match serde_json::from_value::<apicore::Pod>(validation_request.request.object) {
        // NOTE 1
        Ok(mut pod) => {
            let pod_name = pod.metadata.name.clone().unwrap_or_default();
            if validation_request
                .settings
                .invalid_names
                .contains(&pod_name)
            {
                kubewarden::reject_request(
                    Some(format!("pod name {:?} is not accepted", pod_name)),
                    None,
                )
            } else {
                // NOTE 2
                let mut new_annotations = pod.metadata.annotations.clone().unwrap_or_default();
                new_annotations.insert(
                    String::from("kubewarden.policy.demo/inspected"),
                    String::from("true"),
                );
                pod.metadata.annotations = Some(new_annotations);

                // NOTE 3
                let mutated_object = serde_json::to_value(pod)?;
                kubewarden::mutate_request(mutated_object)
            }
        }
        Err(_) => {
            // We were forwarded a request we cannot unmarshal or
            // understand, just accept it
            kubewarden::accept_request()
        }
    }
}

Compared to the previous code, we made only three changes:

  1. We defined the pod object as mutable, see the mut keyword. This is needed because we will extend its metadata.annotations attribute
  2. This is the actual code that takes the existing annotations, adds the new one, and finally puts the updated annotations object back into the original pod instance
  3. Serialize the pod object into a generic serde_json::Value and then return a mutation response

Having done these changes, it's time to run the unit tests again:

$ cargo test
   Compiling demo v0.1.0 (/home/flavio/hacking/kubernetes/kubewarden/demo)
    Finished test [unoptimized + debuginfo] target(s) in 4.53s
     Running target/debug/deps/demo-24670dd6a538fd72

running 5 tests
test settings::tests::reject_settings_without_a_list_of_invalid_names ... ok
test settings::tests::accept_settings_with_a_list_of_invalid_names ... ok
test tests::reject_pod_with_invalid_name ... ok
test tests::accept_pod_with_valid_name ... FAILED
test tests::accept_request_with_non_pod_resource ... ok

failures:

---- tests::accept_pod_with_valid_name stdout ----
thread 'tests::accept_pod_with_valid_name' panicked at 'Something mutated with test case: Pod creation with valid name', src/lib.rs:74:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::accept_pod_with_valid_name

test result: FAILED. 4 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

As you can see, the accept_pod_with_valid_name test fails because the response now contains a mutated object. It looks like our code is actually working!

Update the unit tests

Let's update the accept_pod_with_valid_name test to look like this:

#[test]
fn accept_pod_with_valid_name() -> Result<(), ()> {
    let mut invalid_names = HashSet::new();
    invalid_names.insert(String::from("bad_name1"));
    let settings = Settings { invalid_names };

    let request_file = "test_data/pod_creation.json";
    let tc = Testcase {
        name: String::from("Pod creation with valid name"),
        fixture_file: String::from(request_file),
        expected_validation_result: true,
        settings,
    };

    let res = tc.eval(validate).unwrap();
    // NOTE 1
    assert!(
        res.mutated_object.is_some(),
        "Expected accepted object to be mutated",
    );

    // NOTE 2
    let final_pod =
        serde_json::from_str::<apicore::Pod>(res.mutated_object.unwrap().as_str()).unwrap();
    let final_annotations = final_pod.metadata.annotations.unwrap();
    assert_eq!(
        final_annotations.get_key_value("kubewarden.policy.demo/inspected"),
        Some((
            &String::from("kubewarden.policy.demo/inspected"),
            &String::from("true")
        )),
    );

    Ok(())
}

Compared to the initial test, we made only two changes:

  1. Changed the assert! statement to ensure the request is still accepted, but this time expecting it to include a mutated object
  2. Created a Pod instance starting from the mutated object that is part of the response, and asserted that the mutated Pod object contains the right metadata.annotations

We can run the tests again, this time all of them will pass:

$ cargo test
   Compiling demo v0.1.0 (/home/flavio/hacking/kubernetes/kubewarden/demo)
    Finished test [unoptimized + debuginfo] target(s) in 2.61s
     Running target/debug/deps/demo-24670dd6a538fd72

running 5 tests
test settings::tests::reject_settings_without_a_list_of_invalid_names ... ok
test settings::tests::accept_settings_with_a_list_of_invalid_names ... ok
test tests::accept_request_with_non_pod_resource ... ok
test tests::reject_pod_with_invalid_name ... ok
test tests::accept_pod_with_valid_name ... ok

test result: ok. 5 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

As you can see the creation of a mutation policy is pretty straightforward.

Logging

You can perform logging in your policy; the policy evaluator (kwctl or the policy-server) will collect and forward those log entries together with the appropriate information.

The logging library chosen for the Rust SDK is slog, as it is a well known crate and integrates in a very straightforward way with Kubewarden.

Initialize logger

We recommend creating a global logger that you can use from wherever you need within your policy. For this, we will use the lazy_static crate:


use lazy_static::lazy_static;
use slog::{o, Logger};

lazy_static! {
    // KubewardenDrain is provided by the logging module of Kubewarden's Rust SDK
    static ref LOG_DRAIN: Logger = Logger::root(
        logging::KubewardenDrain::new(),
        o!("policy" => "sample-policy")
    );
}

Consuming the logger

Now, from within our validate, or validate_settings functions, we are able to log using the macros exported by slog that match each supported logging level:


use slog::{info, o, warn, Logger};

fn validate(payload: &[u8]) -> CallResult {
    // ...
    info!(LOG_DRAIN, "starting validation");
    // ...
    warn!(
        LOG_DRAIN, "structured log";
        "some_resource_name" => &some_resource_name
    );
    // ...
}

The slog library will send all logs to the drain we initialized in the global variable; the drain forwards them to the policy evaluator executing the policy, kwctl or the policy-server. The policy evaluator then logs this information, adding more contextual information it knows about, such as the Kubernetes request uid.

More information about the logging macros offered by slog can be found inside of its documentation.

Building the policy

So far we have built the policy targeting the same operating system and architecture as our development machine.

It's now time to build the policy as a WebAssembly binary, also known as .wasm file.

This can be done with a simple command:

make build

This command will build the code in release mode, with WebAssembly as the compilation target.
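If you prefer to invoke cargo directly, the make target is roughly equivalent to the following commands (shown here as a sketch; check the project's Makefile for the exact invocation):

rustup target add wasm32-unknown-unknown
cargo build --target wasm32-unknown-unknown --release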

The build will produce the following file:

$ file target/wasm32-unknown-unknown/release/demo.wasm
target/wasm32-unknown-unknown/release/demo.wasm: WebAssembly (wasm) binary module version 0x1 (MVP)
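Before moving on, you can optionally exercise the freshly built policy locally with kwctl, assuming your kwctl version provides the --settings-json flag:

kwctl run --request-path test_data/pod_creation.json \
          --settings-json '{"invalid_names": ["nginx"]}' \
          target/wasm32-unknown-unknown/release/demo.wasm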

Distributing the policy

This topic is covered inside of the distributing policies section of Kubewarden's documentation.

More examples

You can find more Kubewarden policies written in Rust inside of Kubewarden's GitHub space. This query can help you find them.

Worth noting: these repositories have a series of GitHub Actions that automate the following tasks:

  • Run unit tests and code linting on pull requests and after code is merged into the main branch
  • Build the policy in release mode and push it to an OCI registry as an artifact

Rego

Note well: Rego support has been introduced starting from these releases:

  • kwctl: v0.2.0
  • policy-server: v0.2.0

Rego is a language tailor-made to embrace policies as code; it is inspired by Datalog.

As of today, there are two ways of writing Rego policies in order to implement policies as code in Kubernetes: Open Policy Agent and Gatekeeper.

One language. Two frameworks

Open Policy Agent

Open Policy Agent is a project that allows you to implement policies as code in any project. You can rely on Open Policy Agent for any policy-based check that you might require in your own application; Open Policy Agent will, in turn, execute the required Rego policies.

In this context, writing policies for Kubernetes is just another way of exercising Open Policy Agent. By using Kubernetes admission webhooks, it's possible to evaluate requests using Open Policy Agent, which will in turn execute the policies written in Rego.

Open Policy Agent has some optional integration with Kubernetes through its kube-mgmt sidecar. When deployed on top of Kubernetes and next to the Open Policy Agent server evaluating the Rego policies, it is able to replicate the configured Kubernetes resources into Rego -- so those Kubernetes resources are visible to all policies. It also lets you define policies inside Kubernetes' ConfigMap objects. You can read more about it on its project page.

Gatekeeper

Gatekeeper is very different from Open Policy Agent in this regard. It focuses exclusively on Kubernetes, and takes advantage of that as much as it can, making some Kubernetes workflows easier than with Open Policy Agent in many cases.

Looking at the differences

Both Open Policy Agent and Gatekeeper use Rego to describe policies as code. However, this is only one part of the puzzle. Each solution has differences when it comes to writing real policies in Rego, and we are going to look at those differences in the next sections.

Entry point

The entry point is the name of a rule within a package, and is the rule to be invoked by the runtime when the policy is instantiated.
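For example, the policies built in the following chapters are compiled with different entry points, depending on the targeted framework:

# OPA-style policy: the entry point is the main rule of the policy package
opa build -t wasm -e policy/main policy.rego request.rego

# Gatekeeper-style policy: the entry point is the violation rule
opa build -t wasm -e policy/violation policy.rego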

Current limitations

Context-aware policies

Context-aware policies are policies that don't evaluate the input request in isolation: they take other factors into account in order to reach a decision. For example, a policy that evaluates namespaced resources and uses an annotation on the parent namespace to configure part of its behaviour. Another example would be a policy that evaluates Ingress resources and, in order to reach a decision, needs the list of the already existing Ingress resources.

The concept of context-aware policies can also extend to custom resources, so your policy might want to evaluate a request based on currently persisted custom resources as well.

Both Open Policy Agent and Gatekeeper support context-aware policies. Right now Kubewarden implements this functionality only for policies written with the Kubewarden SDK. We have plans to fill this gap, to allow Rego policies to be context-aware policies too.

Mutating policies

Gatekeeper has support for mutating policies, but Kubewarden has not yet implemented mutating policies with Gatekeeper compatibility. You can use policies that use the Kubewarden SDK to write mutating policies, but at the time of writing, you cannot run Gatekeeper mutating policies in Kubewarden yet.

Open Policy Agent

Note well: Open Policy Agent support has been introduced starting from these releases:

  • kwctl: v0.2.0
  • policy-server: v0.2.0

Open Policy Agent is a general purpose policy framework that uses the Rego language to write policies.

Introduction

Rego policies work by receiving an input to evaluate and producing an output as a response. In this sense, Open Policy Agent has no specific tooling targeted at writing policies for Kubernetes.

Specifically, policies in Open Policy Agent receive a JSON input and produce a JSON output. When the Open Policy Agent server is set up to receive admission review requests from Kubernetes, policies will receive a Kubernetes AdmissionReview object in JSON format with the object to evaluate, and they have to produce a valid AdmissionReview object in return with the evaluation results.
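For reference, a rejecting response produced by such a policy has roughly this shape (the uid and message values are placeholders):

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid of the request being reviewed>",
    "allowed": false,
    "status": {
      "message": "<reason of the rejection>"
    }
  }
}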

Compatibility with existing policies

All policies can be compiled to the wasm target (WebAssembly) with the official opa CLI tool.

In terms of policy execution, you can read more about the Open Policy Agent built-in support that is implemented in Kubewarden.

Create a new policy

Let's create a sample policy that will help us go through some important concepts. Let's start!

Note well: we also provide a GitHub repository template that you can use to quickly port an existing policy.

Check it out: kubewarden/opa-policy-template

Requirements

We will write, compile and execute the policy in this section. You need some tools in order to complete this tutorial:

  • opa: we will use the opa CLI to build our policy to a wasm target.

  • kwctl: we will use kwctl to execute our built policy.

The policy

We are going to create a policy that evaluates any kind of namespaced resource. Its goal is to forbid the creation of any resource if the target namespace is default. Otherwise, the request will be accepted. Let's start by creating a folder called opa-policy.

We are going to create a folder named data inside of the opa-policy folder. This folder will contain the recorded AdmissionReview objects coming from the Kubernetes API server. They have been greatly reduced for the sake of simplicity, so we can focus on the bits that matter.

Let us first create a default-ns.json file with the following contents inside the data directory:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
    "operation": "CREATE",
    "object": {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "nginx",
        "namespace": "default",
        "uid": "04dc7a5e-e1f1-4e34-8d65-2c9337a43e64"
      }
    }
  }
}

This simulates a pod creation operation inside the default namespace. Now, let's create another request example in other-ns.json inside the data directory:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
    "operation": "CREATE",
    "object": {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "nginx",
        "namespace": "other",
        "uid": "04dc7a5e-e1f1-4e34-8d65-2c9337a43e64"
      }
    }
  }
}

As you can see, this simulates another pod creation request, this time under a namespace called other.

Let's go back to our opa-policy folder and start writing our Rego policy.

Inside this folder, we create a file named request.rego. The name can be anything; we'll use this one for the exercise. As the name suggests, this Rego file contains some utility code related to the request/response handling itself: it allows us to simplify the policy code and to reuse this common bit across different policies if desired. The contents are:

package policy

import data.kubernetes.admission

main = {
	"apiVersion": "admission.k8s.io/v1",
	"kind": "AdmissionReview",
	"response": response,
}

response = {
	"uid": input.request.uid,
	"allowed": false,
	"status": {"message": reason},
} {
	reason = concat(", ", admission.deny)
	reason != ""
} else = {
	"uid": input.request.uid,
	"allowed": true,
} {
	true
}

We will not go too deep into the Rego code itself. You can learn about it in its website.

Suffice to say that, in this case, it will return either allowed: true or allowed: false, depending on whether another package (data.kubernetes.admission) has any deny statement that evaluates to true.

If any data.kubernetes.admission.deny rule evaluates to true, response evaluates to the first block and the request is rejected. Otherwise, it evaluates to the second block: no deny block evaluated to true, so the request is accepted.

This is just the shell of the policy: the utility code. Now we create another file, called for example policy.rego, inside our opa-policy folder, with the following contents:

package kubernetes.admission

deny[msg] {
	input.request.object.metadata.namespace == "default"
	msg := "it is forbidden to use the default namespace"
}

This is our policy: the important part. deny will evaluate to true if all the statements within it evaluate to true. In this case, there is only one statement: checking whether the namespace is default.

By Open Policy Agent design, input contains the queryable AdmissionReview object, so we can inspect it quite easily.
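As an illustration of how deny rules combine statements (this snippet is just an example and is not part of the files we are building), a rule with two statements only fires when both conditions hold:

deny[msg] {
	input.request.object.kind == "Pod"
	input.request.object.metadata.namespace == "default"
	msg := "pods cannot be created in the default namespace"
}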

If everything went well, our tree should look like the following:

.
├── data
│   ├── default-ns.json
│   └── other-ns.json
├── policy.rego
└── request.rego

1 directory, 4 files

Build and run

In the previous section we have written our Rego policy. The structure looks like the following:

.
├── data
│   ├── default-ns.json
│   └── other-ns.json
├── policy.rego
└── request.rego

1 directory, 4 files

Build

We have our policy, now let's go ahead and build it. We do:

$ opa build -t wasm -e policy/main policy.rego request.rego

This builds the Rego policy with:

  • target: wasm. We want to build the policy for the wasm target.
  • entrypoint: policy/main. The entry point is the main rule inside the policy package.
  • policy.rego: build and include the policy.rego file.
  • request.rego: build and include the request.rego file.

After the build is complete, opa build will have generated a bundle.tar.gz file. You can extract it:

$ tar -xf bundle.tar.gz /policy.wasm

Now the tree looks like the following:

.
├── bundle.tar.gz
├── data
│   ├── default-ns.json
│   └── other-ns.json
├── policy.rego
├── policy.wasm
└── request.rego

1 directory, 6 files

We have our precious policy.wasm file:

$ file policy.wasm
policy.wasm: WebAssembly (wasm) binary module version 0x1 (MVP)

Now it's time to execute it! Let's go on.

Run

We are going to use kwctl in order to run the policy:

$ kwctl run -e opa --request-path data/other-ns.json policy.wasm | jq
{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "allowed": true
}

This request is accepted by the policy, since this is the request pointing to the other namespace.

  • execution-mode: opa. Rego policies can target Open Policy Agent or Gatekeeper: we must tell kwctl what kind of policy we are running.

  • request-path: the location of the recorded request kwctl will send to the policy to evaluate.

Now let's try to evaluate the request that creates the pod inside the default namespace:

$ kwctl run -e opa --request-path data/default-ns.json policy.wasm | jq
{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "allowed": false,
  "status": {
    "message": "it is forbidden to use the default namespace"
  }
}

In this case, the policy is rejecting the request, and giving a reason back to the API server that will be returned to the user or API consumer.

Distribute

We have written, built and run our Rego policy. Now it's time to distribute the policy.

Policies have to be annotated in order for them to be executed in the policy-server, the component that executes the policies when running in a Kubernetes cluster.

Annotating the policy

Annotating a policy is a process that enriches the policy metadata with relevant information like authorship, license, and source code location, plus other important metadata such as rules, which describe what kind of resources this policy can understand and evaluate.

In order to annotate our policy let's write a simple metadata.yaml file:

rules:
- apiGroups: [""]
  apiVersions: ["*"]
  resources: ["*"]
  operations: ["CREATE"]
mutating: false
contextAware: false
executionMode: opa
annotations:
  io.kubewarden.policy.title: no-default-namespace
  io.kubewarden.policy.description: This policy will reject any resource created inside the default namespace
  io.kubewarden.policy.author: The Kubewarden Authors
  io.kubewarden.policy.url: https://github.com/kubewarden/some-policy
  io.kubewarden.policy.source: https://github.com/kubewarden/some-policy
  io.kubewarden.policy.license: Apache-2.0
  io.kubewarden.policy.usage: |
      This policy is just an example.

      You can write interesting descriptions about the policy here.

In this case, you can see several details:

  • Rules: what resources this policy is targeting
  • Mutating: whether this policy is mutating. In this case, it is just validating.
  • Context aware: whether this policy requires context from the cluster in order to evaluate the request.
  • Execution mode: since this is a Rego policy it is mandatory to specify what execution mode it expects: opa or gatekeeper. This policy is written in the opa style: returning a whole AdmissionReview object.
  • Annotations: metadata stored into the policy itself.

Let's go ahead and annotate our policy:

$ kwctl annotate policy.wasm --metadata-path metadata.yaml --output-path annotated-policy.wasm

You can now inspect the policy, if you want, by running kwctl inspect annotated-policy.wasm.

Pushing the policy

Now that the policy is annotated we can push it to an OCI registry. Let's do that:

$ kwctl push annotated-policy.wasm registry.my-company.com/kubewarden/no-default-namespace:v0.0.1
Policy successfully pushed

Now our Rego policy targeting the OPA framework has everything it needs to be deployed in production by creating a ClusterAdmissionPolicy. Let's prepare that too. First, we have to pull the policy into the kwctl local store:

$ kwctl pull registry://registry.my-company.com/kubewarden/no-default-namespace:v0.0.1
pulling policy...

Let's create a ClusterAdmissionPolicy out of it. This operation will take into account the metadata it has about the policy:

$ kwctl manifest registry://registry.my-company.com/kubewarden/no-default-namespace:v0.0.1 --type ClusterAdmissionPolicy
---
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: generated-policy
spec:
  module: "registry://registry.my-company.com/kubewarden/no-default-namespace:v0.0.1"
  settings: {}
  rules:
    - apiGroups:
        - ""
      apiVersions:
        - "*"
      resources:
        - "*"
      operations:
        - CREATE
  mutating: false

You can now use this ClusterAdmissionPolicy as a base to target the resources that you want, or deploy to Kubernetes as is.
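For instance, assuming kubectl points at a cluster where the Kubewarden stack is already installed, the scaffolded manifest can be piped straight into kubectl:

$ kwctl manifest registry://registry.my-company.com/kubewarden/no-default-namespace:v0.0.1 --type ClusterAdmissionPolicy | kubectl apply -f -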

Gatekeeper

Note well: Gatekeeper support has been introduced starting from these releases:

  • kwctl: v0.2.0
  • policy-server: v0.2.0

Gatekeeper is a project targeting Kubernetes and, as such, it has some features that are designed out of the box to integrate with it.

Compatibility with existing policies

All the existing Gatekeeper policies should be compatible with Kubewarden as we will explain during this chapter.

Note: if this is not the case, please report it to us and we will do our best to make sure your policy runs flawlessly with Kubewarden.

Policies have to be compiled with the opa CLI to the wasm target.

In terms of policy execution, you can read more about the Open Policy Agent built-in support that is implemented in Kubewarden.

Create a new policy

Let's implement the same policy that we wrote with Open Policy Agent: a policy that rejects a resource if it's targeting the default namespace.

Note well: we also provide a GitHub repository template that you can use to quickly port an existing policy.

Check it out: kubewarden/gatekeeper-policy-template

Requirements

As in the previous section, we will require the following tools:

  • opa
  • kwctl

The policy

Gatekeeper policies must return zero or more violation objects. If no violations are reported, the request is accepted. If one or more violations are reported, the request is rejected.

We create a new folder, named rego-policy. Inside of it, we create a policy.rego file with contents:

package policy

violation[{"msg": msg}] {
        input.review.object.metadata.namespace == "default"
        msg := "it is forbidden to use the default namespace"
}

In this case, our entrypoint is policy/violation, and because of how Rego works, the policy can have the following outcomes:

  • return 1 violation: the object being reviewed is targeting the default namespace.

  • return 0 violations: the object being reviewed is compliant with the policy.

Take a moment to compare this policy with the one we wrote in the Open Policy Agent section. That one had to build the whole AdmissionReview response, and the inputs were slightly different. In the Gatekeeper mode, the AdmissionRequest object is provided through the input.review attribute; all the attributes of the AdmissionRequest, including object, are readable there.

Now, let's create the requests that we are going to evaluate in the next section.

Let us first create a data directory and, inside of it, a default-ns.json file with the following contents:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
    "operation": "CREATE",
    "object": {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "nginx",
        "namespace": "default",
        "uid": "04dc7a5e-e1f1-4e34-8d65-2c9337a43e64"
      }
    }
  }
}

Now, let's create another AdmissionReview object that this time is targeting a namespace different than the default one. Let us name this file other-ns.json. It has the following contents:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
    "operation": "CREATE",
    "object": {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "nginx",
        "namespace": "other",
        "uid": "04dc7a5e-e1f1-4e34-8d65-2c9337a43e64"
      }
    }
  }
}

As you can see, this simulates another pod creation request, this time under a namespace called other.

Build and run

Building and running the policy is done exactly the same way as a Rego policy targeting Open Policy Agent. The structure of our project is like:

.
├── data
│   ├── default-ns.json
│   └── other-ns.json
└── policy.rego

1 directory, 3 files

Build

Let's build our policy by running the following opa command:

$ opa build -t wasm -e policy/violation policy.rego

This builds the Rego policy with:

  • target: wasm. We want to build the policy for the wasm target.
  • entrypoint: policy/violation. The entry point is the violation rule inside the policy package.
  • policy.rego: build and include the policy.rego file.

The previous command generates a bundle.tar.gz file. You can extract the wasm module from it:

$ tar -xf bundle.tar.gz /policy.wasm

The project tree looks like the following:

.
├── bundle.tar.gz
├── data
│   ├── default-ns.json
│   └── other-ns.json
├── policy.rego
└── policy.wasm

1 directory, 5 files

We can now execute our policy!

Run

Let's use kwctl to run our policy as follows:

$ kwctl run -e gatekeeper --request-path data/other-ns.json policy.wasm | jq
{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "allowed": true
}

Since this request targets the namespace called other, the resource is accepted, as expected.

Now let's execute a request that will be rejected by the policy:

$ kwctl run -e gatekeeper --request-path data/default-ns.json policy.wasm | jq
{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "allowed": false,
  "status": {
    "message": "it is forbidden to use the default namespace"
  }
}

As you can see, our Gatekeeper policy rejected this resource as expected.

Distribute

Policies have to be annotated for them to be pushed, and eventually executed by the Kubewarden policy-server in a Kubernetes cluster.

Annotating and distributing our Gatekeeper policy is very similar to distributing an Open Policy Agent one. Let's go through it.

Annotating the policy

We are going to write a metadata.yaml file in our policy directory with contents:

rules:
- apiGroups: [""]
  apiVersions: ["*"]
  resources: ["*"]
  operations: ["CREATE"]
mutating: false
contextAware: false
executionMode: gatekeeper
annotations:
  io.kubewarden.policy.title: no-default-namespace
  io.kubewarden.policy.description: This policy will reject any resource created inside the default namespace
  io.kubewarden.policy.author: The Kubewarden Authors
  io.kubewarden.policy.url: https://github.com/kubewarden/some-policy
  io.kubewarden.policy.source: https://github.com/kubewarden/some-policy
  io.kubewarden.policy.license: Apache-2.0
  io.kubewarden.policy.usage: |
      This policy is just an example.

      You can write interesting descriptions about the policy here.

As you can see, everything is the same as the Open Policy Agent version metadata, except for the executionMode: gatekeeper bit.

Let's go ahead and annotate the policy:

$ kwctl annotate policy.wasm --metadata-path metadata.yaml --output-path annotated-policy.wasm

Pushing the policy

Let's push our policy to an OCI registry:

$ kwctl push annotated-policy.wasm registry.my-company.com/kubewarden/no-default-namespace-gatekeeper:v0.0.1
Policy successfully pushed

Deploying on Kubernetes

We have to pull our policy to our kwctl local store first:

$ kwctl pull registry://registry.my-company.com/kubewarden/no-default-namespace-gatekeeper:v0.0.1
pulling policy...

We can now create a scaffold ClusterAdmissionPolicy resource:

$ kwctl manifest registry://registry.my-company.com/kubewarden/no-default-namespace-gatekeeper:v0.0.1 --type ClusterAdmissionPolicy
---
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: generated-policy
spec:
  module: "registry://registry.my-company.com/kubewarden/no-default-namespace-gatekeeper:v0.0.1"
  settings: {}
  rules:
    - apiGroups:
        - ""
      apiVersions:
        - "*"
      resources:
        - "*"
      operations:
        - CREATE
  mutating: false

We could now use this ClusterAdmissionPolicy resource to deploy our policy to a Kubernetes cluster.

Builtin support

Building a policy for the wasm target is only half of the story: the policy also needs to be executed.

The Open Policy Agent team has a dedicated page you can check in order to find out the built-in support level.

When building a Rego policy into a WebAssembly module, some of these built-in functions are going to be implemented inside of the Wasm file itself (the built-ins marked with a green check in the previously linked table) -- regardless of the runtime; while others have to be provided at execution time by the WebAssembly runtime evaluating the module.

The built-ins marked as SDK-dependent are the ones that the host (in this case, Kubewarden) has to implement. Open Policy Agent and Gatekeeper may use them depending on the needs of the policy. In any case, these built-ins are exposed to the policy, and any new or existing policy could depend on them.

There are still some built-ins that we do not provide yet. However, based on the policies we have seen in the open, the ones we already support should be enough for the majority of Kubernetes users.

This GitHub issue keeps track of the Rego built-ins we have still to implement. Feel free to comment over there to prioritize our work.

Executing policies with missing built-ins

When a policy is instantiated with kwctl or with policy-server, the list of built-ins used by the policy is inspected. If any of the used built-ins is missing, execution aborts with a fatal error reporting which built-ins are missing.

Go

Note well: Go's support for WebAssembly is fast evolving. The contents of this page have been written during April 2021, hence they could be outdated.

Currently the official Go compiler cannot produce WebAssembly binaries that can be run outside of the browser. This upstream issue is tracking the evolution of this topic. Due to that, it's not possible to use the Go compiler to write Kubewarden policies.

Luckily there's another Go compiler that is capable of building WebAssembly binaries that can be used by Kubewarden. This compiler is called TinyGo:

TinyGo is a project to bring the Go programming language to microcontrollers and modern web browsers by creating a new compiler based on LLVM.

You can compile and run TinyGo programs on many different microcontroller boards such as the BBC micro:bit and the Arduino Uno.

TinyGo can also be used to produce WebAssembly (Wasm) code which is very compact in size.

Limitations

TinyGo doesn't yet support all the Go features (see here for the current project status). Currently its biggest limitation is the lack of a fully supported reflect package. That leads to the inability to use the encoding/json package against structures and user-defined types.

Kubewarden policies need to process JSON data like the policy settings and the actual request received by Kubernetes.

Despite TinyGo's current limitations, it's still easy and doable to write Kubewarden validation policies with it.

Note well: unfortunately, it's currently impossible to write mutating policies using TinyGo.

Tooling

Writing Kubewarden policies requires a version of TinyGo greater than 0.17.0.

These Go libraries are extremely useful when writing a Kubewarden policy:

  • Kubewarden Go SDK: provides a series of structures and functions that reduce the amount of code to write. It also provides test helpers.
  • gjson: provides a powerful query language that allows quick navigation of JSON documents and data retrieval. This library doesn't use the encoding/json package provided by Go's stdlib, hence it's usable with TinyGo (a short usage example is shown right after this list).
  • mapset: provides a Go implementation of the Set data structure. This library significantly reduces the amount of code to be written, because operations like Set union, intersection and difference are pretty frequent inside of policies.
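This is a minimal, self-contained sketch of the gjson query style used throughout this chapter (the payload and the JSON path are purely illustrative):

package main

import (
	"fmt"

	"github.com/kubewarden/gjson"
)

func main() {
	// a tiny admission-request-like payload, for illustration only
	payload := []byte(`{"request": {"object": {"metadata": {"name": "nginx"}}}}`)

	// fetch the name of the object embedded into the request
	name := gjson.GetBytes(payload, "request.object.metadata.name")
	fmt.Println(name.String()) // prints: nginx
}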

Last but not least, the Kubewarden project provides a template Go policy project that can be used to quickly create Kubewarden policies written in Go.

Getting TinyGo dependencies

The easiest way to get TinyGo is by using the upstream container images. Official releases can be found here, while builds from the development branch are automatically pushed here.

If needed, checkout TinyGo's getting started page for more information.

Note well: Kubewarden requires code that is available only on TinyGo's development branch. This will be solved once TinyGo 0.17.0 is released. In the meantime we will use the container image based on the development branch: tinygo/tinygo-dev:latest.

Creating a new validation policy

We are going to create a validation policy that validates the labels of generic Kubernetes objects.

The policy will reject all the resources that use one or more labels on the deny list. The policy will also validate certain labels using a regular expression provided by the user.

To summarize, the policy settings will look like this:

# List of labels that cannot be used
denied_labels:
- foo
- bar

# Labels that are validated with user-defined regular expressions
constrained_labels:
  priority: "[123]"
  cost-center: '^cc-\d+'

The policy would reject the creation of this Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    foo: hello world
spec:
  containers:
    - name: nginx
      image: nginx:latest

The policy would also reject the creation of this Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    cost-center: cc-marketing
spec:
  containers:
    - name: nginx
      image: nginx:latest

Policy's settings can also be used to force certain labels to be specified, regardless of their contents:

# Policy's settings

constrained_labels:
  mandatory-label: ".*" # <- this label must be present, we don't care about its value

Scaffolding new policy project

The creation of a new policy project can be done by using this GitHub template repository: kubewarden/go-policy-template. Just press the "Use this template" green button near the top of the page and follow GitHub's wizard.

Clone the repository locally and then ensure the module directive inside of the go.mod file looks like this:

module <path to your repository>

Defining policy settings

As a first step we will define the structure that holds the policy settings.

We will do that by adding this code inside of the settings.go file:

import (
	"github.com/deckarep/golang-set"
	"github.com/kubewarden/gjson"
	kubewarden "github.com/kubewarden/policy-sdk-go"

	"fmt"
	"regexp"
)

type Settings struct {
	DeniedLabels      mapset.Set                    `json:"denied_labels"`
	ConstrainedLabels map[string]*RegularExpression `json:"constrained_labels"`
}

As you can see we're using the regexp package to handle regular expression objects, plus we use the mapset.Set structure to store the list of denied labels.

The Settings struct has json attributes; we will use them later when writing our unit tests. The unit tests are going to be executed using the official Go compiler, hence we will be able to leverage the encoding/json package.

The Settings struct is not using the standard regexp.Regexp type to represent regular expressions. That's because regexp.Regexp doesn't handle serialization and deserialization to JSON.

This is the implementation of the RegularExpression struct:

// A wrapper around the standard regexp.Regexp struct
// that implements marshalling and unmarshalling
type RegularExpression struct {
	*regexp.Regexp
}

// Convenience method to build a regular expression
func CompileRegularExpression(expr string) (*RegularExpression, error) {
	nativeRegExp, err := regexp.Compile(expr)
	if err != nil {
		return nil, err
	}
	return &RegularExpression{nativeRegExp}, nil
}

// UnmarshalText satisfies the encoding.TextUnmarshaler interface,
// which is used by json.Unmarshal.
func (r *RegularExpression) UnmarshalText(text []byte) error {
	nativeRegExp, err := regexp.Compile(string(text))
	if err != nil {
		return err
	}
	r.Regexp = nativeRegExp
	return nil
}

// MarshalText satisfies the encoding.TextMarshaler interface,
// also used by json.Marshal.
func (r *RegularExpression) MarshalText() ([]byte, error) {
	if r.Regexp != nil {
		return []byte(r.Regexp.String()), nil
	}

	return nil, nil
}

Building Settings instances

At runtime we can't rely on the automatic struct marshalling and unmarshalling provided by the encoding/json package, due to TinyGo's current limitations. Because of that we will create two initialization helpers:

  • NewSettingsFromValidationReq: this is used when building a Settings instance starting from a ValidationRequest object
  • NewSettingsFromValidateSettingsPayload: this is used when building a Settings instance inside of the validate_settings entry point. This entry point receives the "naked" Settings JSON dictionary

This is the implementation of these functions:

// Builds a new Settings instance starting from a validation
// request payload:
// {
//    "request": ...,
//    "settings": {
//       "denied_labels": [...],
//       "constrained_labels": { ... }
//    }
// }
func NewSettingsFromValidationReq(payload []byte) (Settings, error) {
	// Note well: we don't validate the input JSON here, this has
	// already been done inside of the `validate` function

	return newSettings(
		payload,
		"settings.denied_labels",
		"settings.constrained_labels")
}

// Builds a new Settings instance starting from a Settings
// payload:
// {
//    "denied_names": [ ... ],
//    "constrained_labels": { ... }
// }
func NewSettingsFromValidateSettingsPayload(payload []byte) (Settings, error) {
	if !gjson.ValidBytes(payload) {
		return Settings{}, fmt.Errorf("denied JSON payload")
	}

	return newSettings(
		payload,
		"denied_labels",
		"constrained_labels")
}

The heavy lifting of the settings parsing is done inside of the newSettings function, which is invoked by both NewSettingsFromValidateSettingsPayload and NewSettingsFromValidationReq.

The function takes the raw JSON payload and a list of gjson queries. These queries are used to extract the values from the JSON data and build the actual object:

func newSettings(payload []byte, paths ...string) (Settings, error) {
	if len(paths) != 2 {
		return Settings{}, fmt.Errorf("wrong number of json paths")
	}

	data := gjson.GetManyBytes(payload, paths...)

	deniedLabels := mapset.NewThreadUnsafeSet()
	data[0].ForEach(func(_, entry gjson.Result) bool {
		deniedLabels.Add(entry.String())
		return true
	})

	constrainedLabels := make(map[string]*RegularExpression)
	var err error
	data[1].ForEach(func(key, value gjson.Result) bool {
		var regExp *RegularExpression
		regExp, err = CompileRegularExpression(value.String())
		if err != nil {
			return false
		}

		constrainedLabels[key.String()] = regExp
		return true
	})
	if err != nil {
		return Settings{}, err
	}

	return Settings{
		DeniedLabels:      deniedLabels,
		ConstrainedLabels: constrainedLabels,
	}, nil
}

As you can see the code above is pretty straightforward. The gjson package provides a convenient method to fetch multiple values from the JSON data.

The newSettings function also creates instances of regexp.Regexp objects and ensures the regular expressions provided by the user are correct.

Note well: all the mapset.Set objects are deliberately created using their thread-unsafe variant. The WebAssembly code is executed in a single thread, hence there are no concurrency issues.

Moreover, the WebAssembly standard doesn't cover threads yet. See the official proposal for more details.

Implementing Settings validation

All Kubewarden policies have to implement settings validation.

This can be easily done by adding a Valid method to the Settings instances:

func (s *Settings) Valid() (bool, error) {
	constrainedLabels := mapset.NewThreadUnsafeSet()

	for label := range s.ConstrainedLabels {
		constrainedLabels.Add(label)
	}

	constrainedAndDenied := constrainedLabels.Intersect(s.DeniedLabels)
	if constrainedAndDenied.Cardinality() != 0 {
		return false,
			fmt.Errorf("These labels cannot be constrained and denied at the same time: %v", constrainedAndDenied)
	}

	return true, nil
}

The Valid method ensures no "denied" label is also part of the "constrained" map. The check is simplified by the usage of the Intersect method provided by mapset.Set.

Note well: the Valid method is invoked against an already instantiated Settings object. That means the validation of the regular expressions provided by the user has already taken place inside of the Settings constructors.

Finally, we have to ensure the validateSettings function that was automatically generated is changed to look like this:

func validateSettings(payload []byte) ([]byte, error) {
	settings, err := NewSettingsFromValidateSettingsPayload(payload)
	if err != nil {
		// this happens when one of the user-defined regular expressions is invalid
		return kubewarden.RejectSettings(
			kubewarden.Message(fmt.Sprintf("Provided settings are not valid: %v", err)))
	}

	valid, err := settings.Valid()
	if valid {
		return kubewarden.AcceptSettings()
	}
	return kubewarden.RejectSettings(
		kubewarden.Message(fmt.Sprintf("Provided settings are not valid: %v", err)))
}

As you can see, the function takes advantage of the helper functions provided by Kubewarden's SDK.

Testing the settings code

As always, it's important to have good test coverage of the code we write. The generated code comes with a series of unit tests defined inside of the settings_test.go file.

We will have to change the contents of this file to reflect the new behaviour of the Settings struct.

We will start by including the Go packages we will use:

import (
	"encoding/json"
	"testing"

	kubewarden_testing "github.com/kubewarden/policy-sdk-go/testing"
)

As stated before, the unit tests are not part of the final WebAssembly binary, hence we can build them using the official Go compiler. That means we can use the encoding/json package to simplify our tests.

We will start by writing a unit test that ensures we can allocate a Settings instance from a ValidationRequest object:

func TestParseValidSettings(t *testing.T) {
	request := `
	{
		"request": "doesn't matter here",
		"settings": {
			"denied_labels": [ "foo", "bar" ],
			"constrained_labels": {
				"cost-center": "cc-\\d+"
			}
		}
	}
	`
	rawRequest := []byte(request)

	settings, err := NewSettingsFromValidationReq(rawRequest)
	if err != nil {
		t.Errorf("Unexpected error %+v", err)
	}

	expected_denied_labels := []string{"foo", "bar"}
	for _, exp := range expected_denied_labels {
		if !settings.DeniedLabels.Contains(exp) {
			t.Errorf("Missing value %s", exp)
		}
	}

	re, found := settings.ConstrainedLabels["cost-center"]
	if !found {
		t.Error("Didn't find the expected constrained label")
	}

	expected_regexp := `cc-\d+`
	if re.String() != expected_regexp {
		t.Errorf("Expected regexp to be %v - got %v instead",
			expected_regexp, re.String())
	}
}

Next we will define a test that ensures a Settings instance cannot be generated when the user provides a broken regular expression:

func TestParseSettingsWithInvalidRegexp(t *testing.T) {
	request := `
	{
		"request": "doesn't matter here",
		"settings": {
			"denied_labels": [ "foo", "bar" ],
			"constrained_labels": {
				"cost-center": "cc-[a+"
			}
		}
	}
	`
	rawRequest := []byte(request)

	_, err := NewSettingsFromValidationReq(rawRequest)
	if err == nil {
		t.Errorf("Didn'g get expected error")
	}
}

Next we will define a test that checks the behaviour of the validate_settings entry-point.

In this case we actually look at the SettingsValidationResponse object returned by our validateSettings function:

func TestDetectValidSettings(t *testing.T) {
	request := `
	{
		"denied_labels": [ "foo", "bar" ],
		"constrained_labels": {
			"cost-center": "cc-\\d+"
		}
	}
	`
	rawRequest := []byte(request)
	responsePayload, err := validateSettings(rawRequest)
	if err != nil {
		t.Errorf("Unexpected error %+v", err)
	}

	var response kubewarden_testing.SettingsValidationResponse
	if err := json.Unmarshal(responsePayload, &response); err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	if !response.Valid {
		t.Errorf("Expected settings to be valid: %s", response.Message)
	}
}

Finally, we write two more tests to ensure the validateSettings function rejects invalid settings with the right messages:

func TestDetectNotValidSettingsDueToBrokenRegexp(t *testing.T) {
	request := `
	{
		"denied_labels": [ "foo", "bar" ],
		"constrained_labels": {
			"cost-center": "cc-[a+"
		}
	}
	`
	rawRequest := []byte(request)
	responsePayload, err := validateSettings(rawRequest)
	if err != nil {
		t.Errorf("Unexpected error %+v", err)
	}

	var response kubewarden_testing.SettingsValidationResponse
	if err := json.Unmarshal(responsePayload, &response); err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	if response.Valid {
		t.Error("Expected settings to not be valid")
	}

	if response.Message != "Provided settings are not valid: error parsing regexp: missing closing ]: `[a+`" {
		t.Errorf("Unexpected validation error message: %s", response.Message)
	}
}

func TestDetectNotValidSettingsDueToConflictingLabels(t *testing.T) {
	request := `
	{
		"denied_labels": [ "foo", "bar", "cost-center" ],
		"constrained_labels": {
			"cost-center": ".*"
		}
	}
	`
	rawRequest := []byte(request)
	responsePayload, err := validateSettings(rawRequest)
	if err != nil {
		t.Errorf("Unexpected error %+v", err)
	}

	var response kubewarden_testing.SettingsValidationResponse
	if err := json.Unmarshal(responsePayload, &response); err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	if response.Valid {
		t.Error("Expected settings to not be valid")
	}

	if response.Message != "Provided settings are not valid: These labels cannot be constrained and denied at the same time: Set{cost-center}" {
		t.Errorf("Unexpected validation error message: %s", response.Message)
	}
}

Now we can run the test by using the following command:

go test -v settings.go settings_test.go

All the tests will pass with the following output:

=== RUN   TestParseValidSettings
--- PASS: TestParseValidSettings (0.00s)
=== RUN   TestParseSettingsWithInvalidRegexp
--- PASS: TestParseSettingsWithInvalidRegexp (0.00s)
=== RUN   TestDetectValidSettings
--- PASS: TestDetectValidSettings (0.00s)
=== RUN   TestDetectNotValidSettingsDueToBrokenRegexp
--- PASS: TestDetectNotValidSettingsDueToBrokenRegexp (0.00s)
=== RUN   TestDetectNotValidSettingsDueToConflictingLabels
--- PASS: TestDetectNotValidSettingsDueToConflictingLabels (0.00s)
PASS
ok  	command-line-arguments	0.001s

We can now move to implement the actual validation code.

Writing the validation logic

It's now time to write the actual validation logic. This is done inside of the validate.go file.

The scaffolded policy already has a validate function; we will need to make very few changes to it.

This is how the function should look:

func validate(payload []byte) ([]byte, error) {
	// NOTE 1
	if !gjson.ValidBytes(payload) {
		return kubewarden.RejectRequest(
			kubewarden.Message("Not a valid JSON document"),
			kubewarden.Code(400))
	}

	// NOTE 2
	settings, err := NewSettingsFromValidationReq(payload)
	if err != nil {
		return kubewarden.RejectRequest(
			kubewarden.Message(err.Error()),
			kubewarden.Code(400))
	}

	// NOTE 3
	data := gjson.GetBytes(
		payload,
		"request.object.metadata.labels")

	// NOTE 4
	data.ForEach(func(key, value gjson.Result) bool {
		label := key.String()

		// NOTE 5
		if settings.DeniedLabels.Contains(label) {
			err = fmt.Errorf("Label %s is on the deny list", label)
			// stop iterating over labels
			return false
		}

		// NOTE 6
		regExp, found := settings.ConstrainedLabels[label]
		if found {
			// This is a constrained label
			if !regExp.Match([]byte(value.String())) {
				err = fmt.Errorf("The value of %s doesn't pass user-defined constraint", label)
				// stop iterating over labels
				return false
			}
		}

		return true
	})

	// NOTE 7
	if err != nil {
		return kubewarden.RejectRequest(
			kubewarden.Message(err.Error()),
			kubewarden.NoCode)
	}

	return kubewarden.AcceptRequest()
}

The code has some NOTE sections inside of it. Let's go through them:

  1. The function ensures the JSON payload is properly formatted. This is done using a function provided by the gjson library
  2. The Settings instance is created using one of the constructor methods we defined inside of settings.go
  3. We use a gjson selector to get the label map provided by the object embedded into the request
  4. We use a gjson helper to iterate over the results of the query. If the query has no results, the loop body is never executed.
  5. We look for the label of the object inside of the list of denied labels provided by the user via the policy settings. If the label is a denied one, we set the value of the err variable and exit from the loop (that happens by returning false instead of true).
  6. We look for the label of the object inside of the list of constrained labels provided by the user via the policy settings. When we have a match we use the regular expression provided by the user to validate the value of the label. If the validation fails, we set the value of the err variable and exit from the loop (that happens by returning false instead of true).
  7. If the err variable is not nil, we use the helper provided by Kubewarden's SDK to reject the request. Otherwise we accept it.

Testing the validation code

It's now time to write some unit tests to ensure the validation code is behaving properly. These tests are going to be located inside of the validate_test.go file.

The tests will rely on some test fixtures located inside of the test_data directory. This directory has already been populated by the template repository with an example admissionreview.request object that matches this tutorial. For your own policy you would need to craft a fixture with the correct Kubernetes object you are writing the policy for, as we covered in "validating policies".

We will start by including the following packages:

import (
	"encoding/json"
	"testing"

	"github.com/deckarep/golang-set"
	kubewarden_testing "github.com/kubewarden/policy-sdk-go/testing"
)

The first unit test will ensure that having no user settings leads to the request being accepted:

func TestEmptySettingsLeadsToRequestAccepted(t *testing.T) {
	settings := Settings{}

	payload, err := kubewarden_testing.BuildValidationRequest(
		"test_data/ingress.json",
		&settings)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	responsePayload, err := validate(payload)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	var response kubewarden_testing.ValidationResponse
	if err := json.Unmarshal(responsePayload, &response); err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	if response.Accepted != true {
		t.Error("Unexpected rejection")
	}
}

As you can see we are using some test helper functions and structures provided by the Kubewarden SDK.

The next test ensures a request can be accepted when none of its labels is relevant to the user:

func TestRequestAccepted(t *testing.T) {
	constrainedLabels := make(map[string]*RegularExpression)
	re, err := CompileRegularExpression(`^world-`)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}
	constrainedLabels["hello"] = re

	settings := Settings{
		DeniedLabels:      mapset.NewThreadUnsafeSetFromSlice([]interface{}{"bad1", "bad2"}),
		ConstrainedLabels: constrainedLabels,
	}

	payload, err := kubewarden_testing.BuildValidationRequest(
		"test_data/ingress.json",
		&settings)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	responsePayload, err := validate(payload)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	var response kubewarden_testing.ValidationResponse
	if err := json.Unmarshal(responsePayload, &response); err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	if response.Accepted != true {
		t.Error("Unexpected rejection")
	}
}

Next we will ensure a request is accepted when one of its labels satisfies the constraint provided by the user:

func TestAcceptRequestWithConstraintLabel(t *testing.T) {
	constrainedLabels := make(map[string]*RegularExpression)
	re, err := CompileRegularExpression(`^team-`)
	if err != nil {
		t.Errorf("Unexpected error: %s", err)
	}
	constrainedLabels["owner"] = re
	settings := Settings{
		DeniedLabels:      mapset.NewThreadUnsafeSetFromSlice([]interface{}{"bad1", "bad2"}),
		ConstrainedLabels: constrainedLabels,
	}

	payload, err := kubewarden_testing.BuildValidationRequest(
		"test_data/ingress.json",
		&settings)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	responsePayload, err := validate(payload)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	var response kubewarden_testing.ValidationResponse
	if err := json.Unmarshal(responsePayload, &response); err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	if response.Accepted != true {
		t.Error("Unexpected rejection")
	}
}

It's now time to test the rejection of requests.

This test verifies a request is rejected when one of the labels is on the deny list:

func TestRejectionBecauseDeniedLabel(t *testing.T) {
	constrainedLabels := make(map[string]*RegularExpression)
	re, err := CompileRegularExpression(`^world-`)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}
	constrainedLabels["hello"] = re

	settings := Settings{
		DeniedLabels:      mapset.NewThreadUnsafeSetFromSlice([]interface{}{"owner"}),
		ConstrainedLabels: constrainedLabels,
	}

	payload, err := kubewarden_testing.BuildValidationRequest(
		"test_data/ingress.json",
		&settings)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	responsePayload, err := validate(payload)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	var response kubewarden_testing.ValidationResponse
	if err := json.Unmarshal(responsePayload, &response); err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	if response.Accepted != false {
		t.Error("Unexpected accept response")
	}

	expected_message := "Label owner is on the deny list"
	if response.Message != expected_message {
		t.Errorf("Got '%s' instead of '%s'", response.Message, expected_message)
	}
}

The next test ensures a request is rejected when one of the user defined constraints is not satisfied:

func TestRejectionBecauseConstrainedLabelNotValid(t *testing.T) {
	constrainedLabels := make(map[string]*RegularExpression)
	re, err := CompileRegularExpression(`^cc-\d+$`)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}
	constrainedLabels["cc-center"] = re

	settings := Settings{
		DeniedLabels:      mapset.NewThreadUnsafeSetFromSlice([]interface{}{}),
		ConstrainedLabels: constrainedLabels,
	}

	payload, err := kubewarden_testing.BuildValidationRequest(
		"test_data/ingress.json",
		&settings)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	responsePayload, err := validate(payload)
	if err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	var response kubewarden_testing.ValidationResponse
	if err := json.Unmarshal(responsePayload, &response); err != nil {
		t.Errorf("Unexpected error: %+v", err)
	}

	if response.Accepted != false {
		t.Error("Unexpected accept response")
	}

	expected_message := "The value of cc-center doesn't pass user-defined constraint"
	if response.Message != expected_message {
		t.Errorf("Got '%s' instead of '%s'", response.Message, expected_message)
	}
}

We can now run all the unit tests, including the ones defined inside of settings_test.go, by using this simple command:

make test

This will produce the following output:

go test -v
=== RUN   TestParseValidSettings
--- PASS: TestParseValidSettings (0.00s)
=== RUN   TestParseSettingsWithInvalidRegexp
--- PASS: TestParseSettingsWithInvalidRegexp (0.00s)
=== RUN   TestDetectValidSettings
--- PASS: TestDetectValidSettings (0.00s)
=== RUN   TestDetectNotValidSettingsDueToBrokenRegexp
--- PASS: TestDetectNotValidSettingsDueToBrokenRegexp (0.00s)
=== RUN   TestDetectNotValidSettingsDueToConflictingLabels
--- PASS: TestDetectNotValidSettingsDueToConflictingLabels (0.00s)
=== RUN   TestEmptySettingsLeadsToRequestAccepted
--- PASS: TestEmptySettingsLeadsToRequestAccepted (0.00s)
=== RUN   TestRequestAccepted
--- PASS: TestRequestAccepted (0.00s)
=== RUN   TestAcceptRequestWithConstraintLabel
--- PASS: TestAcceptRequestWithConstraintLabel (0.00s)
=== RUN   TestRejectionBecauseDeniedLabel
--- PASS: TestRejectionBecauseDeniedLabel (0.00s)
=== RUN   TestRejectionBecauseConstrainedLabelNotValid
--- PASS: TestRejectionBecauseConstrainedLabelNotValid (0.00s)
PASS
ok  	github.com/kubewarden/safe-labels-policy	0.001s

We can now move to the next step: writing some end-to-end tests.

End-to-end testing

So far we have tested the policy using a set of Go unit tests. This section shows how we can write end-to-end tests that run against the actual WebAssembly binary produced by TinyGo.

Prerequisites

These tools need to be installed on your development machine:

  • docker or another container engine: used to build the WebAssembly policy. We will rely on the compiler shipped within the official TinyGo container image.
  • bats: used to write the tests and automate their execution.
  • kwctl: CLI tool provided by Kubewarden to run its policies outside of Kubernetes, among other actions. This is covered in depth inside of this section of the documentation.
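You can quickly check that the tools listed above are available on your machine; version numbers will differ, these commands just verify the binaries are on your PATH:

docker --version
bats --version
kwctl --version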

Building the policy

As a first step we need to build the policy, producing a WebAssembly binary.

This can be done with this simple command:

make wasm

This will pull the official TinyGo container image and run the build process inside of an ephemeral container.

The compilation produces a file called policy.wasm.
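Before moving on to the bats tests, you can optionally smoke-test the freshly built binary with kwctl, reusing the same fixture used by the unit tests:

kwctl run -r test_data/ingress.json policy.wasm

Since no settings are provided, you should see "allowed":true in the output, matching the behaviour covered by the unit tests above.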

Writing tests

We are going to use bats to write and automate our tests. Each test will be composed of the following steps:

  1. Run the policy using kwctl.
  2. Perform some assertions against the output produced by kwctl.

All the end-to-end tests are located inside of a file called e2e.bats. The scaffolded project already includes such a file. We will just change its contents to reflect how our policy behaves.

As a final note, the end-to-end tests will use the same test fixture files we previously used inside of the Go unit tests.

The first test ensures a request is approved when no settings are provided:

@test "accept when no settings are provided" {
  run kwctl run -r test_data/ingress.json policy.wasm

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # request is accepted
  [ $(expr "$output" : '.*"allowed":true.*') -ne 0 ]
}

We can execute the end-to-end tests by using this command:

make e2e-tests

This will produce the following output:

bats e2e.bats
 ✓ accept when no settings are provided

1 test, 0 failures

Let's write a test to ensure a request is approved when a user-defined constraint is respected:

@test "accept user defined constraint is respected" {
  run kwctl run  \
    -r test_data/ingress.json \
    --settings-json '{"constrained_labels": {"owner": "^team-"}}' \
    policy.wasm

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # request is accepted
  [ $(expr "$output" : '.*"allowed":true.*') -ne 0 ]
}

Next, we can write a test to ensure a request is accepted when none of the labels is on the deny list:

@test "accept labels are not on deny list" {
  run kwctl run \
    -r test_data/ingress.json \
    --settings-json '{"denied_labels": ["foo", "bar"]}' \
    policy.wasm

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # request is accepted
  [ $(expr "$output" : '.*"allowed":true.*') -ne 0 ]
}

Let's improve the test coverage by adding a test that rejects a request because one of the labels is on the deny list:

@test "reject because label is on deny list" {
  run kwctl run \
    -r test_data/ingress.json \
    --settings-json '{"denied_labels": ["foo", "owner"]}' \
    policy.wasm

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # request is rejected
  [ $(expr "$output" : '.*"allowed":false.*') -ne 0 ]
  [[ "$output" == *"Label owner is on the deny list"* ]]
}

The following test ensures a request is rejected when one of its labels doesn't satisfy the constraint provided by the user:

@test "reject because label doesn't pass validation constraint" {
  run kwctl run \
    -r test_data/ingress.json \
    --settings-json '{"constrained_labels": {"cc-center": "^cc-\\d+$"}}' \
    policy.wasm

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # request is rejected
  [ $(expr "$output" : '.*"allowed":false.*') -ne 0 ]
  [[ "$output" == *"The value of cc-center doesn't pass user-defined constraint"* ]]
}

We want to ensure settings' validation is working properly. This can be done with the following tests:

@test "fail settings validation because of conflicting labels" {
  run kwctl run \
    -r test_data/ingress.json \
    --settings-json '{"denied_labels": ["foo", "cc-center"], "constrained_labels": {"cc-center": "^cc-\\d+$"}}' \
    policy.wasm

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # settings validation failed
  [ $(expr "$output" : '.*"valid":false.*') -ne 0 ]
  [[ "$output" == *"Provided settings are not valid: These labels cannot be constrained and denied at the same time: Set{cc-center}"* ]]
}

@test "fail settings validation because of invalid constraint" {
  run kwctl run \
    -r test_data/ingress.json \
    --settings-json '{"constrained_labels": {"cc-center": "^cc-[12$"}}' \
    policy.wasm

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  [[ "$output" == *"Provided settings are not valid: error parsing regexp: missing closing ]: `[12$`"* ]]
}

Conclusion

We have reached a pretty good level of coverage. Let's run all the end-to-end tests:

$ make e2e-tests
bats e2e.bats
 ✓ accept when no settings are provided
 ✓ accept user defined constraint is respected
 ✓ accept labels are not on deny list
 ✓ reject because label is on deny list
 ✓ reject because label doesn't pass validation constraint
 ✓ fail settings validation because of conflicting labels
 ✓ fail settings validation because of invalid constraint

7 tests, 0 failures

Logging

The Go SDK integrates with the onelog project almost out of the box. The reasons why this library has been chosen are:

  • It works with WebAssembly binaries. Other popular logging solutions cannot even be compiled to WebAssembly.

  • It provides good performance.

  • It supports structured logging.

Initialize logger

You first have to initialize a logger structure. By performing this initialization in a global variable, you can easily log from the two main policy entry points: validate and validate_settings. Let's initialize this structure in our main package:

// Import paths as used by the Go policy SDK and its template project (adjust if your module layout differs).
import (
	onelog "github.com/francoispqt/onelog"
	kubewarden "github.com/kubewarden/policy-sdk-go"
)

var (
	logWriter = kubewarden.KubewardenLogWriter{}
	logger    = onelog.New(
		&logWriter,
		onelog.ALL, // shortcut for onelog.DEBUG|onelog.INFO|onelog.WARN|onelog.ERROR|onelog.FATAL
	)
)

Consuming the logger

Now, we can use the logger object to log from wherever we need in our policy:

func validate(payload []byte) ([]byte, error) {
	// ...
	logger.Info("validating request")
	// ...
}

Let's add some structured logging:

func validate(payload []byte) ([]byte, error) {
	// ...
	logger.WarnWithFields("logging something important", func(e onelog.Entry) {
		e.String("one_field", "a value")
		e.String("another_field", "another value")
	})
	// ...
}

You can refer to the onelog documentation for more information.

The logs produced by the policy are sent to the policy evaluator (kwctl or policy-server, for example), which logs on behalf of the policy using mechanisms that can easily be plugged into other components enabling distributed tracing, such as Jaeger.

Automations

This section describes how we can use GitHub Actions to automate as many tasks as possible.

The scaffolded project already includes all the GitHub Actions you need. These Actions can be found in the .github/workflows/ci.yml.template file; rename it to .github/workflows/ci.yml to enable them.

The same principles can be adapted to use a different CI system.

Testing

Automation of the unit tests and of the end-to-end tests is working out of the box thanks to the unit-tests and e2e-tests jobs defined in .github/workflows/ci.yml.template.

Release

The scaffolded project contains a release job in .github/workflows/ci.yml.template.

This job performs the following steps:

  • Checkout code
  • Build the WebAssembly policy
  • Push the policy to an OCI registry
  • Optionally, create a new GitHub Release (only when the commit is a tag, as described below)

To enable the job you need to rename the workflow file to ci.yml and change the value of the OCI_TARGET variable to match your preferences.

The job will act differently based on the commit that triggered its execution.

Regular commits will lead to the creation of an OCI artifact called <policy-name>:latest. No GitHub Release will be created for these commits.

On the other hand, creating a tag that matches the v* pattern will lead to:

  1. Creation of an OCI artifact called <policy-name>:<tag>.
  2. Creation of a GitHub Release named Release <full tag name>. The release will include the following assets: the source code of the policy and the WebAssembly binary.

A concrete example

Let's assume we have a policy named safe-labels and we want to publish it as ghcr.io/kubewarden/policies/safe-labels.

The contents of the jobs.push-to-oci-registry.env section of ci.yml should look like this:

jobs:
  push-to-oci-registry:
    runs-on: ubuntu-latest
    env:
      WASM_BINARY_NAME: policy.wasm
      OCI_TARGET: ghcr.io/kubewarden/policies/safe-labels

Pushing a tag named v0.1.0 will lead to the creation and publishing of the OCI artifact called ghcr.io/kubewarden/policies/safe-labels:v0.1.0.
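For reference, such a tag can be created and pushed with plain git commands (assuming the remote is called origin):

git tag -a v0.1.0 -m "Release v0.1.0"
git push origin v0.1.0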

A GitHub Release named Release v0.1.0 will be created. The release will include the following assets:

  • Source code compressed as zip and tar.gz
  • A file named policy.wasm that is the actual WebAssembly policy

Distribute policy

Congratulations for having made this far 🎉🎉🎉

We hope you enjoyed the journey!

In case you haven't realized, we actually created the safe-labels-policy together.

There's nothing special to be done when it comes to distributing the policy. If you followed this guide, you have already published your policy using the GitHub Actions workflow defined in the previous chapter.

The topic of distributing policies is covered in depth inside of the "distributing policies" section of Kubewarden's documentation.

Swift

As stated on the official website:

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns.

The Swift compiler doesn't yet have upstream WebAssembly support; however, the SwiftWasm project provides a patched compiler with this capability.

The SwiftWasm team is also working to upstream these changes into the Swift project. In the meantime, the toolchain provided by the SwiftWasm project can be used to build Kubewarden policies.

Note well: you don't need an Apple system to write or run Swift code. Everything can also be done on a Linux machine or on Windows (by using Docker for Windows).

Current State

Policy authors can leverage the following resources:

  • Kubewarden Swift SDK: this provides a set of struct and functions that simplify the process of writing policies.
  • Kubewarden Swift template project: use this template to quickly scaffold a Swift-based policy. The template comes with a working policy and a set of GitHub Actions to automate its lifecycle.

No severe limitations have been found with Swift, only some minor glitches:

  • It's critical to perform some post-build optimizations before using the policy "in production":
    1. Strip the Wasm module via wasm-strip to reduce its size
    2. Optimize the Wasm module via wasm-opt

The GitHub Action provided by the template repository already takes care of that.
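For reference, this is a minimal sketch of what those post-build steps could look like when run by hand, assuming the compiled module is named policy.wasm (wasm-strip is part of the wabt toolkit, wasm-opt is part of binaryen):

# remove debug/custom sections, shrinking the module in place
wasm-strip policy.wasm
# optimize for size and write the result to a new file
wasm-opt -Os policy.wasm -o policy-optimized.wasm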

More examples

This GitHub repository contains a Kubewarden Policy written in Swift.

TypeScript

As stated on the official website:

TypeScript extends JavaScript by adding types.

By understanding JavaScript, TypeScript saves you time catching errors and providing fixes before you run code.

TypeScript cannot be converted to WebAssembly, however AssemblyScript is a subset of TypeScript designed explicitly for WebAssembly.

Current State

Currently there is no Kubewarden SDK for AssemblyScript; we haven't created one yet because of lack of time. We will do that in the near future.

In the meantime, there seem to be some limitations affecting AssemblyScript:

  • There's no built-in way to serialize and deserialize classes to and from JSON. See this issue
  • It seems there's no JSON path library for AssemblyScript

Example

This GitHub repository contains a Kubewarden Policy written in AssemblyScript.

Worth noting: this repository has a series of GitHub Actions that automate the following tasks:

  • Run unit tests and code linting on pull requests and after code is merged into the main branch
  • Build the policy in release mode and push it to an OCI registry as an artifact

Distributing Policies

Kubewarden policies are Wasm binaries that are evaluated by the Kubewarden Policy Server.

The Kubewarden policy server can load policies from these sources:

  • Local filesystem
  • HTTP(s) server
  • OCI compliant registry like Distribution and other container registries (GitHub container registry, Azure Container Registry, Amazon ECR, Google Container Registry, ...)

We think distributing Kubewarden policies via a regular OCI compliant registry is the best choice. Container registries are basically a mandatory requirement for any Kubernetes cluster. Having a single place to store, and secure, all the artifacts required by a cluster can be really handy.

Pushing policies to an OCI compliant registry

The OCI Artifacts specification allows storing any kind of binary blob inside of a regular OCI compliant container registry.

The target OCI compliant registry must support artifacts in order to successfully push a Kubewarden Policy to it.

The kwctl command line tool can be used to push a Kubewarden Policy to an OCI compliant registry.

Annotating the policy

Annotating a policy is done by the kwctl CLI tool as well. The process of annotating a Kubewarden policy is done by adding WebAssembly custom sections to the policy binary. This means that the policy metadata travels with the policy itself.

The kwctl annotate command needs two main inputs:

  • The Kubewarden policy to be annotated, in the form of a local file in the filesystem.

  • The annotations file, a YAML file containing the policy metadata. This file is located somewhere in your filesystem, usually in the root of your policy project.

An example follows; we save this file as metadata.yml in the current directory:

rules:
- apiGroups: ["*"]
  apiVersions: ["*"]
  resources: ["*"]
  operations: ["*"]
mutating: false
annotations:
  io.kubewarden.policy.title: palindromify
  io.kubewarden.policy.description: Allows you to reject palindrome names in resources and namespace names, or to only accept palindrome names
  io.kubewarden.policy.author: Name Surname <name.surname@example.com>
  io.kubewarden.policy.url: https://github.com/<org>/palindromify
  io.kubewarden.policy.source: https://github.com/<org>/palindromify
  io.kubewarden.policy.license: Apache-2.0
  io.kubewarden.policy.usage: |
    This is markdown text and as such allows you to define a free form usage text.

    This policy allows you to reject requests if:
    - The name of the resource is a palindrome name.
    - The namespace name where this resource is created has a palindrome name.

    This policy accepts the following settings:

    - `invert_behavior`: bool that inverts the policy behavior. If enabled, only palindrome names will be accepted.

Now, let's annotate the policy:

$ kwctl annotate policy.wasm \
    --metadata-path metadata.yml \
    --output-path annotated-policy.wasm

This process performs some optimizations on the policy, so it's not uncommon to end up with a smaller annotated policy than the original one. This depends a lot on the toolchain that was used to produce the unannotated WebAssembly object.
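If you are curious, you can compare the size of the two files (file names taken from the commands above):

ls -lh policy.wasm annotated-policy.wasm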

You can check with kwctl inspect that everything looks correct:

$ kwctl inspect annotated-policy.wasm
# here you will see a colored output of the metadata you provided on the `metadata.yml` file. This information is now read from the WebAssembly custom sections

Pushing the policy

Pushing an annotated policy can be done in this way:

$ kwctl push annotated-policy.wasm \
              <oci-registry>/kubewarden-policies/palindromify-policy:v0.0.1

It is discouraged to push unannotated policies. This is why, by default, kwctl push will refuse to push such a policy to an OCI registry. If you really want to push an unannotated policy you can use the --force flag of kwctl push.

The policy can then be referenced from the Kubewarden Policy Server or kwctl as registry://<oci-registry>/kubewarden-policies/palindromify-policy:v0.0.1.
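As a quick usage sketch, the policy can now be pulled locally with kwctl using that same reference:

kwctl pull registry://<oci-registry>/kubewarden-policies/palindromify-policy:v0.0.1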

Custom Certificate Authorities

Both kwctl and policy-server allow you to pull policies from OCI registries and HTTP servers, as well as pushing to OCI registries. In this process, by default, HTTPS is enforced with host TLS verification.

The system CA store is used to validate the trusted chain of certificates presented by the OCI registry. In a regular Kubewarden installation, the policy-server will use the CA store shipped with its Linux container. On the client side, kwctl will use your operating system's CA store.

If you are using the Kubewarden Controller, you can configure each PolicyServer via its spec fields, as documented here.

Important: the default behavior of kwctl and policy-server is to enforce HTTPS with trusted certificates matching the system CA store. You can interact with registries using untrusted certificates or even without TLS, by using the insecure_sources setting. This approach is highly discouraged in environments closer to production.

The sources.yaml file

The pull and push behavior of kwctl and policy-server can be tuned via the sources.yaml file.

This file can be provided both to kwctl and the policy-server in the --sources-path argument. Its structure is as follows:

insecure_sources:
  - "registry-dev.example.com"
  - "registry-dev2.example.com:5500"
source_authorities:
  "registry-pre.example.com":
    - type: Path
      path: /opt/example.com/pki/ca-pre1-1.pem
    - type: Path
      path: /opt/example.com/pki/ca-pre1-2.der
  "registry-pre2.example.com:5500":
    - type: Data
      data: |
            -----BEGIN CERTIFICATE-----
            ca-pre2 PEM cert
            -----END CERTIFICATE-----

This file can be provided in YAML or JSON format. All keys are optional, so the following are also valid sources.yaml files:

insecure_sources: ["dev.registry.example.com"]

As well as:

{
    "source_authorities": {
        "host.k3d.internal:5000": [
            {"type": "Data","data":"pem cert 1"},
            {"type": "Data","data":"pem cert 2"}
        ]
    }
}
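As a usage sketch, the file can then be passed to kwctl via the --sources-path argument mentioned above (the registry host comes from the earlier example; the policy name is just a placeholder):

kwctl pull \
    --sources-path sources.yaml \
    registry://registry-dev.example.com/kubewarden-policies/example-policy:v0.1.0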

Insecure sources

Hosts listed in the insecure_sources configuration behave in a different way than hosts that are not listed.

  • Unlisted hosts (default)

    • Try to connect using HTTPS, verifying the server identity. If the connection fails, the operation is aborted.
  • Listed hosts

    • Try to connect using HTTPS, verifying the server identity. If the connection fails,
    • Try to connect using HTTPS, skipping host verification. If the connection fails,
    • Try to connect using HTTP. If the connection fails, the operation is aborted.

It is generally fine to use insecure_sources with local registries or HTTP servers when developing locally, to avoid the certificate burden. However, this setting is completely discouraged for environments closer to production.

Source authorities

The source_authorities field is a map of URIs, each with a list of CA certificates that form a certificate chain for that URI; these are used to verify the identity of OCI registries and HTTPS servers.

These certificates can be encoded in PEM or DER format. Certificates in binary DER format can be provided as a path to a file containing the certificate, while certificates in PEM format can either be a path to the certificate file or a string with the certificate contents. Both possibilities are specified via a type key:

source_authorities:
  "registry-pre.example.com":
    - type: Path
      path: /opt/example.com/pki/ca-pre1-1.pem
    - type: Path
      path: /opt/example.com/pki/ca-pre1-2.der
    - type: Data
      data: |
            -----BEGIN CERTIFICATE-----
            ca-pre1-3 PEM cert
            -----END CERTIFICATE-----
  "registry-pre2.example.com:5500":
    - type: Path
      path: /opt/example.com/pki/ca-pre2-1.der

OCI Registries support

Note well: this is not an exhaustive list. If a registry you know or use is working correctly with Kubewarden, or if any information described here is not accurate at this time, please open a Pull Request against this documentation to fix it.

Kubewarden policies are distributed as OCI Artifacts using regular OCI Registries.

Policies are stored side by side with container images. They don't require any extra setup or maintenance than regular container images do.

Projects

Hosted registries

Known issues

Docker Hub

The Docker Hub does not support OCI artifacts at this time, and as such, it cannot be used to store Kubewarden policies.

JFrog

Although JFrog supports OCI artifacts, it is only partially possible to push to it when following the specification. Read more here

Testing Policies

This section covers the topic of testing Kubewarden Policies. There are two possible personas interested in testing policies:

  • As a policy author: you're writing a Kubewarden Policy and you want to ensure your code behaves the way you expect.
  • As an end user: you found a Kubewarden Policy and you want to tune/test the policy settings before deploying it, maybe you want to keep testing these settings inside of your CI/CD pipelines,...

The next sections of the documentation will show how Kubewarden policies can be tested by these two personas.

While creating a policy

Kubewarden Policies are regular programs compiled as WebAssembly. As with any kind of program, it's important to have good test coverage.

Policy authors can leverage the testing frameworks and tools of their language of choice to verify the behaviour of their policies.

As an example, you can take a look at these Kubewarden policies:

All these policies have integrated test suites built using the regular testing libraries of Rust, Go and AssemblyScript.

Finally, all these projects rely on GitHub Actions to implement their CI pipelines.

End-to-end tests

As a policy author you can also write tests that are executed against the actual WebAssembly binary containing your policy. This can be done without having to deploy a Kubernetes cluster by using these tools:

  • bats: used to write the tests and automate their execution.
  • kwctl: Kubewarden's go-to CLI tool that helps you with policy-related operations such as pull, inspect, annotate, push and run.

kwctl run usage is quite simple: we just have to invoke it with the following data as input:

  1. WebAssembly binary file reference of the policy to be run. The Kubewarden policy can be loaded from the local filesystem (file://), an HTTP(s) server (https://) or an OCI registry (registry://).
  2. The admission request object to be evaluated. This is provided via the --request-path argument. The request can be provided through stdin by setting --request-path to -.
  3. The policy settings to be used at evaluation time, they can be provided as an inline JSON via --settings-json flag, or a JSON or YAML file loaded from the filesystem via --settings-path.

Once the policy evaluation is done, kwctl prints the ValidationResponse object to the standard output.

For example, this is how kwctl can be used to test the WebAssembly binary of the ingress-policy linked above:

$ curl https://raw.githubusercontent.com/kubewarden/ingress-policy/v0.1.8/test_data/ingress-wildcard.json 2> /dev/null | \
    kwctl run \
        --settings-json '{"allowPorts": [80], "denyPorts": [3000]}' \
        --request-path - \
        registry://ghcr.io/kubewarden/policies/ingress:v0.1.8 | jq

Using bats we can write a test that runs this command and looks for the expected outputs:

@test "all is good" {
  run kwctl run \
    --request-path test_data/ingress-wildcard.json \
    --settings-json '{"allowPorts": [80], "denyPorts": [3000]}' \
    ingress-policy.wasm

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # settings validation passed
  [[ "$output" == *"valid: true"* ]]

  # request accepted
  [[ "$output" == *"allowed: true"* ]]
}

We can copy the snippet from above inside of a file called e2e.bats, and then invoke bats in this way:

$ bats e2e.bats
 ✓ all is good

1 test, 0 failures

Checkout this section of the documentation to learn more about writing end-to-end tests of your policies.

Before deployment

As a Kubernetes cluster operator you probably want to perform some tests against a Kubewarden policy you just found.

You probably want to answer questions like:

  • What are the correct policy settings to get the validation/mutation outcome I desire?
  • How can I be sure everything will keep working as expected when I upgrade the policy to a newer version, when I add/change some Kubernetes resources, when I change the configuration parameters of the policy,...

Kubewarden has a dedicated utility that allows testing of the policies outside of Kubernetes, among other operations. This utility is called kwctl.

kwctl usage is quite simple: we just have to invoke it with the following data as input:

  1. WebAssembly binary file reference of the policy to be run. The Kubewarden policy can be loaded from the local filesystem (file://), an HTTP(s) server (https://) or an OCI registry (registry://).
  2. The admission request object to be evaluated. This is provided via the --request-path argument. The request can be provided through stdin by setting --request-path to -.
  3. The policy settings to be used at evaluation time, they can be provided as an inline JSON via --settings-json flag, or a JSON or YAML file loaded from the filesystem via --settings-path.

Once the policy evaluation is done, kwctl prints the ValidationResponse object to the standard output.

Install

You can download pre-built binaries of kwctl from here.

Quickstart

This section describes how to test the psp-apparmor policy with different configurations and validation request objects as input data.

Create AdmissionReview requests

We have to create some files holding the AdmissionReview objects that will be evaluated by the policy.

Let's create a file named pod-req-no-specific-apparmor-profile.json with the following contents:

{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "kind": {
    "kind": "Pod",
    "version": "v1"
  },
  "object": {
    "metadata": {
      "name": "no-apparmor"
    },
    "spec": {
      "containers": [
        {
          "image": "nginx",
          "name": "nginx"
        }
      ]
    }
  },
  "operation": "CREATE",
  "requestKind": {"version": "v1", "kind": "Pod"},
  "userInfo": {
    "username": "alice",
    "uid": "alice-uid",
    "groups": ["system:authenticated"]
  }
}

This request tries to create a Pod that doesn't specify any AppArmor profile to be used, that's because it doesn't have an annotation with the container.apparmor.security.beta.kubernetes.io/<name of the container> key.

Let's create a file named pod-req-apparmor-unconfined.json with the following contents:

{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "kind": {
    "kind": "Pod",
    "version": "v1"
  },
  "object": {
    "metadata": {
      "name": "privileged-pod",
      "annotations": {
        "container.apparmor.security.beta.kubernetes.io/nginx": "unconfined"
      }
    },
    "spec": {
      "containers": [
        {
          "image": "nginx",
          "name": "nginx"
        }
      ]
    }
  },
  "operation": "CREATE",
  "requestKind": {"version": "v1", "kind": "Pod"},
  "userInfo": {
    "username": "alice",
    "uid": "alice-uid",
    "groups": ["system:authenticated"]
  }
}

This request tries to create a Pod with a container called nginx that runs with the unconfined AppArmor profile. Note well, running in unconfined mode is a bad security practice.

Finally, let's create a file named pod-req-apparmor-custom.json with the following contents:

{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "kind": {
    "kind": "Pod",
    "version": "v1"
  },
  "object": {
    "metadata": {
      "name": "privileged-pod",
      "annotations": {
        "container.apparmor.security.beta.kubernetes.io/nginx": "localhost/nginx-custom"
      }
    },
    "spec": {
      "containers": [
        {
          "image": "nginx",
          "name": "nginx"
        }
      ]
    }
  },
  "operation": "CREATE",
  "requestKind": {"version": "v1", "kind": "Pod"},
  "userInfo": {
    "username": "alice",
    "uid": "alice-uid",
    "groups": ["system:authenticated"]
  }
}

Note well: these are stripped down AdmissionReview objects, we left only the fields that are relevant to our policy.

Test the policy

Now we can use kwctl to test the creation of a Pod that doesn't specify an AppArmor profile:

$ kwctl run \
    --request-path pod-req-no-specific-apparmor-profile.json \
    registry://ghcr.io/kubewarden/policies/psp-apparmor:v0.1.4 | jq

The policy will accept the request and produce the following output:

{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "allowed": true
}

The policy will instead reject the creation of a Pod with an unconfined AppArmor profile:

$ kwctl run \
    --request-path pod-req-apparmor-unconfined.json \
    registry://ghcr.io/kubewarden/policies/psp-apparmor:v0.1.4 | jq
{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "allowed": false,
  "status": {
    "message": "These AppArmor profiles are not allowed: [\"unconfined\"]"
  }
}

Both times we ran the policy without providing any kind of setting. As the policy's documentation states, this results in preventing the usage of non-default profiles.

As a matter of fact, the Pod using a custom nginx profile gets rejected by the policy too:

$ kwctl run \
    --request-path pod-req-apparmor-custom.json \
    registry://ghcr.io/kubewarden/policies/psp-apparmor:v0.1.4 | jq
{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "allowed": false,
  "status": {
    "message": "These AppArmor profiles are not allowed: [\"localhost/nginx-custom\"]"
  }
}

We can change the default behaviour and allow some chosen AppArmor profiles to be used:

$ kwctl run \
    --request-path pod-req-apparmor-custom.json \
    --settings-json '{"allowed_profiles": ["runtime/default", "localhost/nginx-custom"]}' \
    registry://ghcr.io/kubewarden/policies/psp-apparmor:v0.1.4 | jq

This time the request is accepted:

{
  "uid": "1299d386-525b-4032-98ae-1949f69f9cfc",
  "allowed": true
}

Automation

All these steps shown above can be automated using bats.

We can write a series of tests and integrate their execution inside of our existing CI and CD pipelines.

That would ensure changes to the policy version, policy configuration parameters, Kubernetes resources,... won't break the outcome of the validation/mutation operations.

The commands used above can be easily "wrapped" into a bats test:

@test "all is good" {
  run kwctl run \
    --request-path pod-req-no-specific-apparmor-profile.json \
    registry://ghcr.io/kubewarden/policies/psp-apparmor:v0.1.4

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # request accepted
  [ $(expr "$output" : '.*"allowed":true.*') -ne 0 ]
}

@test "reject" {
  run kwctl run \
    --request-path pod-req-apparmor-custom.json \
    registry://ghcr.io/kubewarden/policies/psp-apparmor:v0.1.4

  # this prints the output when one of the checks below fails
  echo "output = ${output}"

  # request rejected
  [ $(expr "$output" : '.*"allowed":false.*') -ne 0 ]
}

Assuming the snippet from above is inside of a file called e2e.bats, we can run the test in this way:

$ bats e2e.bats
 ✓ all is good
 ✓ reject

2 tests, 0 failures

Checkout this section of the documentation to learn more about writing end-to-end tests of your policies.

Operator Manual

This section covers topics related with the deployment and the operational aspects of Kubewarden.

Configuring PolicyServers

Custom Certificate Authorities for Policy registries

It is possible to specify and configure the Certificate Authorities that a PolicyServer uses when pulling policy artifacts (those referenced by ClusterAdmissionPolicy resources) from a registry. The following spec fields configure the deployed policy-server executable to that effect.

Insecure sources

Important: the default behavior of kwctl and policy-server is to enforce HTTPS with trusted certificates matching the system CA store. You can interact with registries using untrusted certificates or even without TLS, by using the insecure_sources setting. This approach is highly discouraged for environments closer to production.

To configure the PolicyServer to accept insecure connections to specific registries, use the spec.insecureSources field of PolicyServer. This field accepts a list of URIs to be regarded as insecure. Example:

spec:
  insecureSources:
    - localhost:5000
    - host.k3d.internal:5000

See here for more information on how the policy-server executable treats them.

Custom Certificate Authorities

To configure the PolicyServer with a custom certificate chain of 1 or more certificates for a specific URI, use the field spec.sourceAuthorities.

This field is a map of URIs, each with its own list of strings that contain PEM encoded certificates. Example:

spec:
  sourceAuthorities:
    "registry-pre.example.com":
      - |
        -----BEGIN CERTIFICATE-----
        ca-pre1-1 PEM cert
        -----END CERTIFICATE-----
      - |
        -----BEGIN CERTIFICATE-----
        ca-pre1-2 PEM cert
        -----END CERTIFICATE-----
    "registry-pre2.example.com:5500":
      - |
        -----BEGIN CERTIFICATE-----
        ca-pre2 PEM cert
        -----END CERTIFICATE-----

See here for more information on how the policy-server executable treats them.

Telemetry

OpenTelemetry

OpenTelemetry is a Cloud Native Computing Foundation framework for observability. It enables your microservices to provide metrics, logs and traces.

Kubewarden's components are instrumented with the OpenTelemetry SDK, reporting data to an OpenTelemetry collector -- called the agent.

By following this documentation, we will integrate OpenTelemetry using the following architecture:

  • Each Pod of the Kubewarden stack will have an OpenTelemetry sidecar.
  • The sidecar receives tracing and monitoring information from the Kubewarden component via the OpenTelemetry Protocol (OTLP)
  • The OpenTelemetry collector will:
    • Send the trace events to a central Jaeger instance
    • Expose Prometheus metrics on a specific port

For more information about the other deployment modes, please refer to the OpenTelemetry official documentation.

Let's first deploy OpenTelemetry in a Kubernetes cluster, so we can reuse it in the next sections that will address specifically tracing and metrics.

Setting up a Kubernetes cluster

This section gives step-by-step instructions to create a Kubernetes cluster with an ingress controller enabled.

Feel free to skip this section if you already have a Kubernetes cluster where you can define Ingress resources.

We are going to create a testing Kubernetes cluster using minikube.

minikube has many backends; in this case we will use the kvm2 driver, which relies on libvirt.

Assuming libvirtd is properly running on your machine, issue the following command:

minikube start --driver=kvm2

The command will produce an output similar to the following one:

$ minikube start --driver=kvm2
😄  minikube v1.23.2 on Opensuse-Leap 15.3
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Now we have to enable the Ingress addon:

minikube addons enable ingress

This will produce an output similar to the following one:

$ minikube addons enable ingress
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

Install OpenTelemetry

We are going to use the OpenTelemetry Operator to manage the automatic injection of the OpenTelemetry Collector sidecar inside of the PolicyServer pod.

The OpenTelemetry Operator requires cert-manager to be installed inside of the cluster.

This can be done with this command:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
kubectl wait --for=condition=Available deployment --timeout=2m -n cert-manager --all

Once cert-manager is up and running, the OpenTelemetry operator can be installed in this way:

kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
kubectl wait --for=condition=Available deployment --timeout=2m -n opentelemetry-operator-system --all

OpenTelemetry integration

We can now move to the next chapters of the book to enable application metrics (via Prometheus integration) and application tracing (via Jaeger integration).

Metrics

This section describes how to enable metrics reporting on the Policy Server.

Note well: before continuing, make sure you completed the previous OpenTelemetry section of this book. It is required for this section to work correctly.

We are going to use Prometheus to scrape metrics exposed by the Policy Server.

Install Prometheus

We will use the Prometheus Operator, which allows us to intuitively define Prometheus' targets.

There are many ways to install and set up Prometheus. For ease of deployment, we will use the Prometheus community helm chart.

Let's add the helm repository from the Prometheus Community:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Now, let's install the kube-prometheus-stack chart. This chart contains a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules.

Let's create a kube-prometheus-stack-values.yaml file to configure the kube-prometheus-stack Helm chart values with the following contents:

prometheus:
  additionalServiceMonitors:
    - name: kubewarden
      selector:
        matchLabels:
          app: kubewarden-policy-server-default
      namespaceSelector:
        matchNames:
          - kubewarden
      endpoints:
        - port: metrics
          interval: 10s

The prometheus-operator deployed as part of this Helm chart defines the concept of Service Monitors, used to declaratively define which services should be monitored by Prometheus.

In our case, we are adding a Service Monitor targeting the kubewarden namespace, for services that match the label app=kubewarden-policy-server-default. This way, the Prometheus Operator is able to inspect which Kubernetes Endpoints are tied to services matching these conditions. The operator then takes care of generating a valid Prometheus configuration file and reloading Prometheus automatically whenever that configuration changes.

helm install --wait --create-namespace --namespace prometheus --values kube-prometheus-stack-values.yaml prometheus prometheus-community/kube-prometheus-stack

Install Kubewarden

We can now install Kubewarden in the recommended way with the Helm chart.

Note well: cert-manager is a requirement of Kubewarden, and OpenTelemetry is required for this feature, but we've already installed them in a previous section of this book.

As a first step, we have to add the Helm repository that contains Kubewarden:

helm repo add kubewarden https://charts.kubewarden.io

Then we have to install the Custom Resource Definitions (CRDs) defined by Kubewarden:

helm install --wait --namespace kubewarden --create-namespace kubewarden-crds kubewarden/kubewarden-crds

Now we can deploy the rest of the Kubewarden stack. The official helm chart will create a PolicyServer named default.

Let's configure the values of the Helm Chart so that we have metrics enabled in Kubewarden. Write the kubewarden-values.yaml file with the following contents:

telemetry:
  enabled: True
policyServer:
  metrics:
    port: 8080

Now, let's install the helm chart:

helm install --wait --namespace kubewarden --values kubewarden-values.yaml kubewarden-controller kubewarden/kubewarden-controller

This leads to the creation of the default instance of PolicyServer:

kubectl get policyservers.policies.kubewarden.io
NAME      AGE
default   3m7s

By default, this policy server will have metrics enabled.

Accessing Prometheus

Prometheus exposes a very simple UI that we can use to inspect metrics exposed by different components within our Kubernetes cluster.

We can forward the Prometheus port so we can easily access it.

kubectl port-forward -n prometheus --address 0.0.0.0 svc/prometheus-operated 9090

Now, we can visit Prometheus on port 9090 and perform a query, for example: kubewarden_policy_evaluations_total. We will see that the number of evaluations grows over time as more requests go through the policy.
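With the port-forward above still running, the same query can also be issued against the Prometheus HTTP API; a minimal sketch:

curl -sG http://localhost:9090/api/v1/query \
    --data-urlencode 'query=kubewarden_policy_evaluations_total' | jq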

Accessing Grafana

We can forward the Grafana service so we can easily access it.

kubectl port-forward -n prometheus --address 0.0.0.0 svc/prometheus-grafana 8080:80

You can now login with the default username admin and password prom-operator.

Using Kubewarden Grafana dashboard

The Kubewarden developers have made available a Grafana dashboard with some basic metrics that give an overview of how Kubewarden behaves inside of the cluster. This dashboard is available in the Kubewarden repository as a JSON file, and on the Grafana website.

To import the dashboard into your environment, you can download the JSON file from the Grafana website or from the repository:

curl -o kubewarden-dashboard.json https://raw.githubusercontent.com/kubewarden/policy-server/main/kubewarden-dashboard.json

Once you have the file on your machine, you should access the Grafana dashboard and import it. Visit /dashboard/import in the Grafana dashboard and follow these steps:

  1. Copy the JSON file contents and paste them into the Import via panel json box in the Grafana UI
  2. Click the Load button
  3. Choose Prometheus as the data source
  4. Click the Import button

Another option is to import it directly from the Grafana.com website. For this:

  1. Copy the dashboard ID from the dashboard page
  2. Paste it in the Import via grafana.com field
  3. Click the Load button
  4. After importing the dashboard, define the Prometheus data source to use and finish the import process.

You should now see a dashboard similar to this:

(Screenshots: Kubewarden Grafana dashboard, panels 1-4)

The Grafana dashboard has panels showing the state of all the policies managed by Kubewarden, plus policy-specific panels.

Policy detailed metrics can be obtained by changing the value of the policy_name variable to match the name of the desired policy.

Metrics Reference

Kubewarden exposes some relevant metrics that enhance visibility of the platform and allow cluster administrators and policy developers to identify patterns and potential issues.

Policy Server

The Policy Server is the component that initializes and runs policies. Upon receiving requests from the Kubernetes API server, it will forward the request to the policy, and return the response provided by the policy to the Kubernetes API server.

Metrics

Note: Baggage are key-value attributes added to the metric. They are used to enrich the metric with additional information.

Name: kubewarden_policy_evaluations_total
Type: Counter

kubewarden_policy_evaluations_total

Baggage:

  • policy_name: Name of the policy
  • resource_name: Name of the evaluated resource
  • resource_kind: Kind of the evaluated resource
  • resource_namespace: Namespace of the evaluated resource. Not present if the resource is cluster scoped.
  • resource_request_operation: Operation type: CREATE, UPDATE, DELETE, PATCH, WATCH...
  • accepted: Whether the request was accepted or not
  • mutated: Whether the request was mutated or not
  • error_code: Error code returned by the policy in case of rejection, if any. Not present if the policy didn't provide one.
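As an illustration of how the baggage can be used, the following sketch aggregates the counter per policy through the Prometheus HTTP API (it assumes the Prometheus port-forward from the Metrics section is still active):

curl -sG http://localhost:9090/api/v1/query \
    --data-urlencode 'query=sum by (policy_name) (kubewarden_policy_evaluations_total)' | jq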

Tracing

This section illustrates how to enable tracing support of Policy Server.

Note well: before continuing, make sure you completed the previous OpenTelemetry section of this book. It is required for this section to work correctly.

Tracing allows the collection of fine-grained details about policy evaluations. It can be a useful tool for debugging issues inside of your Kubewarden deployment and policies.

We will use Jaeger to receive, store and visualize trace events.

Install Jaeger

We are going to use the Jaeger Operator to manage all the different Jaeger components.

The operator can be installed in many ways; we are going to use its helm chart.

As a first step, we need to add the helm repository containing the Jaeger Operator charts:

helm repo add jaegertracing https://jaegertracing.github.io/helm-charts

Then we install the operator inside of a dedicated Namespace called jaeger:

helm install --namespace jaeger --create-namespace jaeger-operator jaegertracing/jaeger-operator

This will produce an output similar to the following one:

helm install --namespace jaeger --create-namespace jaeger-operator jaegertracing/jaeger-operator
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
NAME: jaeger-operator
LAST DEPLOYED: Tue Sep 28 14:54:02 2021
NAMESPACE: jaeger
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
jaeger-operator is installed.


Check the jaeger-operator logs
  export POD=$(kubectl get pods -l app.kubernetes.io/instance=jaeger-operator -lapp.kubernetes.io/name=jaeger-operator --namespace jaeger --output name)
  kubectl logs $POD --namespace=jaeger

Given this is a testing environment, we will use the default "AllInOne" strategy. As stated in the upstream documentation: this deployment strategy is meant to be used only for development, testing and demo purposes.

Note well: the operator can deploy Jaeger in many different ways. We strongly recommend reading its official documentation.

Let's create a Jaeger resource:

kubectl apply -f - <<EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: all-in-one
  namespace: jaeger
spec:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
EOF

Once all the resources have been created by the Jaeger operator, the Jaeger Query UI will be reachable at the following address:

echo http://`minikube ip`

Install Kubewarden

We can proceed to the deployment of Kubewarden in the usual way.

Note well: cert-manager is a requirement of Kubewarden, and OpenTelemetry is required for this feature, but we've already installed them in a previous section of this book.

As a first step, we have to add the Helm repository that contains Kubewarden:

helm repo add kubewarden https://charts.kubewarden.io

Then we have to install the Custom Resource Definitions (CRDs) defined by Kubewarden:

helm install --wait --namespace kubewarden --create-namespace kubewarden-crds kubewarden/kubewarden-crds

Now we can deploy the rest of the Kubewarden stack. The official helm chart will create a PolicyServer named default. We want this PolicyServer instance to have tracing enabled.

In order to do that, we have to specify some extra values at installation time. Let's create a values.yaml file with the following contents:

policyServer:
  telemetry:
    enabled: True
    tracing:
      jaeger:
        endpoint: "all-in-one-collector.jaeger.svc.cluster.local:14250"

Then we can proceed with the installation of the helm chart:

helm install --wait --namespace kubewarden --create-namespace --values values.yaml kubewarden-controller kubewarden/kubewarden-controller

This leads to the creation of the default instance of PolicyServer:

kubectl get policyservers.policies.kubewarden.io
NAME      AGE
default   3m7s

Looking closer at the Pod running the PolicyServer instance, we will find it has two containers inside of it: the actual policy-server and the OpenTelemetry Collector sidecar otc-container.
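
You can verify this by printing the container names of that Pod. The label selector below is an assumption about how the chart labels the Pods of the default PolicyServer; if it doesn't match your installation, a plain kubectl describe on the Pod will show the same information:

# print the container names of the default PolicyServer Pod
kubectl get pods --namespace kubewarden \
  -l app=kubewarden-policy-server-default \
  -o jsonpath='{.items[*].spec.containers[*].name}'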

Enforcing a policy

We will start by deploying the safe-labels policy.

We want the policy to be enforced only inside of Namespaces that have a label environment with value production.

Let's create a Namespace that has such a label:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha-prod
  labels:
    environment: production
EOF

Next, let's define a ClusterAdmissionPolicy:

kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: safe-labels
spec:
  module: registry://ghcr.io/kubewarden/policies/safe-labels:v0.1.6
  settings:
    mandatory_labels:
    - owner
  rules:
    - apiGroups:
        - apps
      apiVersions:
        - v1
      resources:
        - deployments
      operations:
        - CREATE
        - UPDATE
  namespaceSelector:
    matchExpressions:
    - key: environment
      operator: In
      values: ["production"]
  mutating: false
EOF

We can wait for the policy to be active in this way:

kubectl wait --for=condition=PolicyActive clusteradmissionpolicy/safe-labels
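
You can also inspect the ClusterAdmissionPolicy resource directly; depending on the Kubewarden version, the default output may include a column showing the policy status:

kubectl get clusteradmissionpolicies.policies.kubewarden.io safe-labels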

Once the policy is active, we can try it out in this way:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: team-alpha-prod
  labels:
    owner: flavio
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 0
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF

This Deployment object will be created because it doesn't violate the policy.
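
If you want to double check, you can list the Deployments inside of the team-alpha-prod Namespace and verify that nginx-deployment is among them:

kubectl get deployments --namespace team-alpha-prod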

On the other hand, this Deployment will be blocked by the policy:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-without-labels
  namespace: team-alpha-prod
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 0
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF

The policy is not enforced inside of Namespaces that do not have the environment: production label.

The following command creates a new Namespace called team-alpha-staging:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha-staging
  labels:
    environment: staging
EOF

As expected, the creation of a Deployment resource that doesn't have any label is allowed inside of the team-alpha-staging Namespace:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-without-labels
  namespace: team-alpha-staging
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 0
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF

The resource is successfully created, since the safe-labels policy is not enforced in this Namespace.

Exploring the Jaeger UI

We can see the trace events have been sent by the PolicyServer instance to Jaeger:

[Image: Jaeger homepage]

The Jaeger collector is properly receiving the traces generated by our PolicyServer.