How can we improve our software supply chain security by signing our container images in an open, accessible, and transparent manner? How can we store these signatures in a safe and organized way?

And then, how can we ensure that no one can deploy malicious images in our Kubernetes clusters that could put our entire software supply chain at risk?

Let’s dig in!

Overview

The answer to securing software supply chains lies in digitally signing the various artifacts that comprise applications, from binaries and containers to aggregated files (like tarballs) and software bills of materials (SBOMs).

Digital signatures effectively “freeze” an object in time, indicating that in its current state it is verified to be what it says it is and that it hasn’t been altered in any way.

Sigstore offers a method to better secure software supply chains in an open, transparent and accessible manner.

Think of Sigstore as a new standard for signing, verifying, and protecting software, automating how you digitally sign and check components for a safer chain of custody that traces software back to its source.

So with Sigstore, we can generate the key pairs needed to sign and verify artifacts, automating as much as possible so there’s no risk of losing or leaking them.

You can also benefit from its transparency log technology: anyone can find and verify signatures, and check whether someone has changed the source code, the build platform, or the artifact repository.

But wait a moment: we can sign the images, but how can we ensure that only valid, verified, signed images can be deployed in our clusters? Manually?

Let’s bring Kyverno to the stage!

Kyverno is a policy engine designed for Kubernetes. With Kyverno, policies are managed as Kubernetes resources and no new language is required to write policies.

This allows using familiar tools such as kubectl, git, and kustomize to manage policies. Kyverno policies can validate, mutate, and generate Kubernetes resources. The Kyverno CLI can be used to test policies and validate resources as part of a CI/CD pipeline.
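For example, you can dry-run a policy against a resource manifest locally before touching the cluster. A minimal sketch using the Kyverno CLI, with hypothetical file names:

# Test a policy against a resource manifest without applying anything to the cluster
kyverno apply ./check-image-policy.yaml --resource ./my-pod.yaml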

In a nutshell, Kyverno offers image verification that uses the Cosign component from the Sigstore project.

Let’s start to have fun with Sigstore, Tekton, Kyverno and Kubernetes!

NOTE: this blog post uses Tekton, but any CI/CD system can be used, like Jenkins, GitHub Actions, etc.

1. Installing OpenShift Pipelines / Tekton in Kubernetes / OpenShift

  • Option 1 - Install OpenShift Pipelines in OpenShift using the Operator:
cat bootstrap/openshift-pipelines-operator-subscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator-rh
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  • Apply the OpenShift Pipelines Subscription in the cluster:
kubectl apply -f bootstrap/openshift-pipelines-operator-subscription.yaml


  • Check that the OpenShift Pipelines are properly installed:
kubectl get csv | grep pipelines

openshift-pipelines-operator-rh.v1.6.2   Red Hat OpenShift Pipelines   1.6.2     redhat-openshift-pipelines.v1.5.2   Succeeded
  • Option 2 - Install Tekton pipelines (upstream) in Kubernetes vanilla:
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

kubectl wait -n tekton-pipelines \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/part-of=tekton-pipelines,app.kubernetes.io/component=controller \
  --timeout=90s
  • Install Tekton Dashboard to check the Tekton Pipelines:
curl -sL https://raw.githubusercontent.com/tektoncd/dashboard/main/scripts/release-installer | \
   bash -s -- install latest

kubectl wait -n tekton-pipelines \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/part-of=tekton-dashboard,app.kubernetes.io/component=dashboard \
  --timeout=90s
  • Deploy the Ingress CR to expose the Dashboard:
kubectl apply -n tekton-pipelines -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tekton-dashboard
spec:
  rules:
  - host: tekton-dashboard.$(hostname -I | awk '{print $1}').nip.io
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: tekton-dashboard
            port:
              number: 9097
EOF
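  • Once the Ingress is admitted (and assuming an ingress controller is running in the cluster), the dashboard should be reachable on the nip.io host; a quick sanity check:
kubectl get ingress tekton-dashboard -n tekton-pipelines

Then open http://tekton-dashboard.<NODE_IP>.nip.io in a browser.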

2. Install Kyverno in OpenShift / Kubernetes

Now it’s time to install Kyverno in our OpenShift / Kubernetes clusters. We will be using Helm to install the Kyverno components, as the documentation describes.

  • Install the Kyverno Helm repository:
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
  • Then install Kyverno using Helm:
helm install kyverno --namespace kyverno kyverno/kyverno --create-namespace
  • Check that the Kyverno pods are in Running state:
kubectl get pod -n kyverno
NAME                      READY   STATUS    RESTARTS   AGE
kyverno-55f86d8cd-69pzv   1/1     Running   0          2m3s
  • Check the new CRDs that are available once the Kyverno components are installed:
kubectl api-resources | grep kyverno
clusterpolicies                       cpol               kyverno.io/v1                                 false        ClusterPolicy
clusterreportchangerequests           crcr               kyverno.io/v1alpha2                           false        ClusterReportChangeRequest
generaterequests                      gr                 kyverno.io/v1                                 true         GenerateRequest
policies                              pol                kyverno.io/v1                                 true         Policy
reportchangerequests                  rcr                kyverno.io/v1alpha2                           true         ReportChangeRequest

3. Set up the Registry credentials in the Kubernetes cluster

For this demo we’re using the GitHub registry (ghcr.io) as our container registry to store the images as well as the signatures.

That’s one option, but there are more container image registries supported by Sigstore, as you can check.

By default, the container images stored in the GitHub registry are private, so we need to add the registry credentials to our Kubernetes clusters in order to allow the Tekton pipelines and Kyverno (among others) to read/write (pull/push) container images from/to the GitHub Registry.

  • Export the token for the GitHub Registry / ghcr.io:
export PAT_TOKEN="xxx"
export EMAIL="xxx"
export USERNAME="rcarrata"
export NAMESPACE="workshop"

NOTE: no regular password can be used with the GitHub Registry; a Personal Access Token needs to be generated and used for authentication.

  • Create the namespace for the demo example:
kubectl create ns $NAMESPACE
  • Generate a docker-registry secret with the credentials for GitHub Registry to push/pull the images and signatures:
kubectl create secret docker-registry ghcr-auth-secret --docker-server=ghcr.io --docker-username=${USERNAME} --docker-email=${EMAIL} --docker-password=${PAT_TOKEN} -n ${NAMESPACE}

NOTE: this secret is located in the ${NAMESPACE} namespace because the Tekton pipelines will be running there and need access to the registry to pull/push.

  • Add the imagePullSecret to the ServiceAccount “pipeline” in the namespace of the demo:
export SERVICE_ACCOUNT_NAME=pipeline
kubectl patch serviceaccount $SERVICE_ACCOUNT_NAME \
  -p "{\"imagePullSecrets\": [{\"name\": \"ghcr-auth-secret\"}]}" -n $NAMESPACE
  • On the other hand, we also need to set up the credentials in the kyverno namespace, to allow the Kyverno components to download the signature from the GitHub Registry in order to verify the container image.
kubectl create secret docker-registry regcred --docker-server=ghcr.io --docker-username=${USERNAME} --docker-email=${EMAIL} --docker-password=${PAT_TOKEN} -n kyverno
  • Add the imagePullSecret to the Kyverno ServiceAccount:
kubectl patch serviceaccount kyverno \
  -p "{\"imagePullSecrets\": [{\"name\": \"regcred\"}]}" -n kyverno

NOTE: This is optional, but we’re using it to ensure that the Kyverno SA is able to access the secret with the registry credentials.

  • Then we need to patch/update the Kyverno Deployment to include the imagePullSecrets argument with the registry credentials; see the sketch below.
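One way to add that argument is with a JSON patch; a minimal sketch, assuming the Kyverno container is the first container in the pod template and already defines an args list:

# Append the --imagePullSecrets flag to the Kyverno container arguments
kubectl patch deployment kyverno -n kyverno --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--imagePullSecrets=regcred"}]'

Then check that the argument is present in the Deployment: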
kubectl get deploy kyverno -n kyverno -o yaml | grep containers -A5
      containers:
      - args:
        - --imagePullSecrets=regcred
        env:
        - name: INIT_CONFIG
          value: kyverno

NOTE: the imagePullSecrets are passed as an argument to the Kyverno binary inside the deployment/pod, assigning the secret that holds the registry credentials used to access the GitHub registry container images and signatures.

Now we’re set to install and generate our first keypair with Sigstore tools.

4. Installing Cosign and Rekor

In the Sigstore project, one of the most useful tools is Cosign. Cosign supports container signing, verification, and storage in an OCI registry, and aims to make signatures invisible infrastructure.

Cosign supports:

  • Hardware and KMS signing
  • Bring-your-own PKI
  • Sigstore’s free OIDC PKI (Fulcio)
  • Built-in binary transparency and timestamping service (Rekor)
  • Kubernetes policy enforcement
  • Rego and Cuelang integrations for policy definition

In a nutshell, Cosign is a tool within the Sigstore project that greatly simplifies how content is signed and verified, by storing signatures for container images and other artifact types in OCI registries.

Signatures produced by cosign are stored by default within the same OCI registry, as another tag with a predictable name like:

ghcr.io/myuser/myimage:sha256-<DIGEST>.sig
  • Let’s install Cosign on our localhost (Fedora 35 in my case):
go install github.com/sigstore/cosign/cmd/cosign@latest

NOTE: Check the Cosign installation documentation for other installation methods / platforms.
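  • A quick check that the binary is on the PATH and working:
cosign version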

  • Verify a known container image with its public key:
cosign verify --key https://raw.githubusercontent.com/tektoncd/chains/main/tekton.pub gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1

Verification for gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
  - Any certificates were verified against the Fulcio roots.

[{"critical":{"identity":{"docker-reference":"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller"},"image":{"docker-manifest-digest":"sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8"},"type":"Tekton container signature"},"optional":{}}]

It works! This command returns 0 if at least one cosign-formatted signature matching the public key is found for the image.
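This section’s title also mentions Rekor, Sigstore’s transparency log. As a hedged sketch (the module path is assumed from the upstream repository), you can install the Rekor CLI and query the public log like this:

go install github.com/sigstore/rekor/cmd/rekor-cli@latest

# Show basic information about the public Rekor transparency log
rekor-cli loginfo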

5. Generating KeyPair with Cosign

To sign content using cosign, a public/private keypair must be generated. Cosign can use keys stored in Kubernetes secrets to sign and verify signatures.

In order to generate a secret you have to pass cosign generate-key-pair a k8s://[NAMESPACE]/[NAME] URI specifying the namespace and secret name:

cosign generate-key-pair k8s://${NAMESPACE}/cosign

After generating the key pair, cosign will store it in a Kubernetes secret using your current context. The secret will contain the private and public keys, as well as the password to decrypt the private key.

kubectl get secret -n $NAMESPACE cosign -o yaml
apiVersion: v1
data:
  cosign.key: xxxx
  cosign.password: xxxx
  cosign.pub: xxxx
immutable: true
kind: Secret

The cosign command above prompts the user to enter the password for the private key. The user can either manually enter the password, or if the environment variable COSIGN_PASSWORD is set then it is used automatically.
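For non-interactive runs (for example, inside a CI pipeline), a minimal sketch; the password value is a placeholder:

# Avoid the interactive prompt by exporting the password first
export COSIGN_PASSWORD="xxx"
cosign generate-key-pair k8s://${NAMESPACE}/cosign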

When verifying an image signature using cosign verify, the key will be automatically decrypted using the password stored in the Kubernetes secret under the cosign.password field.

For more useful tips with Cosign, check the Detailed Usage documentation.

6. Signing Image with Cosign and storing the signature in the registry

  • Retrieve the Cosign Key from the Cosign secret in your k8s/ocp cluster:
kubectl get secret -n ${NAMESPACE} cosign -o jsonpath='{.data.cosign\.key}' | base64 -d > cosign.key

NOTE: this is only necessary if you’re signing images with cosign OUTSIDE of the cluster, because the Cosign private key is stored within the k8s secret.

  • Pull an example image (we’re using the UBI8 minimal image in this example), and tag it to use the ghcr.io registry in our organization:
podman pull registry.access.redhat.com/ubi8/ubi-minimal:8.5-230
podman tag registry.access.redhat.com/ubi8/ubi-minimal:8.5-230 ghcr.io/${USERNAME}/ubi-minimal:8.5-230
  • Login to the GitHub registry using the PAT token:
echo $PAT_TOKEN | podman login ghcr.io -u ${USERNAME} --password-stdin
Login Succeeded!
  • Push the image to the GitHub registry:
podman push ghcr.io/${USERNAME}/ubi-minimal:8.5-230
  • Sign a container and store the signature in the registry:
cosign sign --key cosign.key ghcr.io/rcarrata/ubi-minimal:8.5-230
Enter password for private key: 
Pushing signature to: ghcr.io/rcarrata/ubi-minimal

As before, cosign prompts for the private key password, unless the environment variable COSIGN_PASSWORD is set, in which case it is used automatically.

  • Signatures are uploaded to an OCI artifact stored with a predictable name. This name can be located with the cosign triangulate command:
cosign triangulate ghcr.io/rcarrata/ubi-minimal:8.5-230
ghcr.io/rcarrata/ubi-minimal:sha256-c8c13c505681f6e926cfe1cd260fd94c7414a273c528a060e2ff69aefa358a8b.sig

7. Installing Crane to inspect remote signatures

Crane is a tool for interacting with remote images and registries.

Crane has useful tips and tricks that can be helpful when interacting with remote registries; in our case, we will use it to inspect the signature generated by cosign and uploaded to the registry.

  • Let’s first install Crane on our localhost (I’m using Fedora, but you can use whichever installation method is suitable for you).

  • Then we can use the crane manifest command with the output of cosign triangulate (which represents the signature location within the registry):

crane manifest $(cosign triangulate ghcr.io/rcarrata/ubi-minimal:8.5-230) | jq -r .
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "size": 233,
    "digest": "sha256:dcab9b63d8e66e98c30876d63b7414c4d22334ff90c55e9b906b4bc36297144d"
  },
  "layers": [
    {
      "mediaType": "application/vnd.dev.cosign.simplesigning.v1+json",
      "size": 244,
      "digest": "sha256:bea87fb8bbda92538339b53614963c6d00a0de4409c05266ad3c6b1ca2dcac6d",
      "annotations": {
        "dev.cosignproject.cosign/signature": "MEUCIQCzVLWZaPQpEzvWbVDQOe/MX1h7fq+9TUP2Tpn6ZAkJaAIgKkj0EGvFfkejl2gWOCPo6Lf/1mtZ2WklmKDT5DqQIk0="
      }
    }
  ]
}

As you can see, the output shows the signature JSON schema, where we can check the digest, and the signature itself inside the annotations.
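If you want to inspect the signed payload itself, you can fetch the signature layer by its digest with crane blob; a sketch using the layer digest from the manifest above:

# Fetch the simplesigning payload referenced by the signature layer
crane blob ghcr.io/rcarrata/ubi-minimal@sha256:bea87fb8bbda92538339b53614963c6d00a0de4409c05266ad3c6b1ca2dcac6d | jq -r .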

8. Verify a Container Image against a public key

  • Let’s now verify our container image against a public key generated by Cosign:
cosign verify --key cosign.pub ghcr.io/rcarrata/ubi-minimal:8.5-230 | jq -r .

Verification for ghcr.io/rcarrata/ubi-minimal:8.5-230 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
  - Any certificates were verified against the Fulcio roots.
[
  {
    "critical": {
      "identity": {
        "docker-reference": "ghcr.io/rcarrata/ubi-minimal"
      },
      "image": {
        "docker-manifest-digest": "sha256:c8c13c505681f6e926cfe1cd260fd94c7414a273c528a060e2ff69aefa358a8b"
      },
      "type": "cosign container image signature"
    },
    "optional": null
  }
]

This command returns 0 if at least one cosign-formatted signature matching the public key is found for the image. See the detailed usage documentation for information and caveats on other signature formats.

Any valid payloads are printed to stdout, in json format. Note that these signed payloads include the digest of the container image, which is how we can be sure these “detached” signatures cover the correct image.

9. Verify images in Kubernetes clusters with Kyverno

How can we ensure that only the container images signed with our keypair are deployed, and that no malicious container images are used to run pods in our K8s clusters?

We can use Kyverno, which is a policy engine designed specifically for Kubernetes.

Kyverno runs as a dynamic admission controller in a Kubernetes cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the kube-apiserver and applies matching policies to return results that enforce admission policies or reject requests.

Policy enforcement is captured using Kubernetes events. Kyverno also reports policy violations for existing resources.

Two of the many features that Kyverno includes are very useful for our use cases in this blog post:

  • verify container images for software supply chain security
  • inspect image metadata

Focusing on these use cases, the Kyverno verifyImages rule uses Cosign to verify container image signatures, attestations, and more, stored in an OCI registry.

The rule matches an image reference (wildcards are supported) and specifies a public key to be used to verify the signed image or attestations.

So, in the first place, we will use an image verification policy defined in a Kyverno ClusterPolicy CR.

  • Let’s define the image to be verified (without any tag or SHA digest), and specify the public key generated in the earlier steps by the cosign tool:
export IMAGE="ghcr.io/rcarrata/ubi-minimal"
  • Then we need to specify a Kyverno ClusterPolicy CR that will check the container image signatures and add digests:
cat <<EOF > cpol-image-check.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image
spec:
  validationFailureAction: enforce
  background: false
  webhookTimeoutSeconds: 30
  failurePolicy: Fail
  rules:
    - name: check-image
      match:
        any:
        - resources:
            kinds:
              - Pod
      verifyImages:
      - image: "$IMAGE:*"
        key: |-
$(cat cosign.pub | sed 's/^/          /')        
EOF

As you can see, validationFailureAction is set to enforce, and the failurePolicy is set to Fail.

Furthermore, the verifyImages rule of the Kyverno ClusterPolicy checks every container image signature that matches; as you can notice, we’re using a wildcard * to validate all the generated tags.

We also need to specify our public key, which is used to verify the signed image or attestations.
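  • Apply the ClusterPolicy and confirm it was created:
kubectl apply -f cpol-image-check.yaml
kubectl get clusterpolicy check-image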

  • Let’s check that we are able to deploy a pod using our signed image from our GitHub Registry:
kubectl run myawesomeapp --image=ghcr.io/rcarrata/ubi-minimal:8.5-230
pod/myawesomeapp created
  • We can check that it works perfectly: the pod was admitted after the verifyImages rule of the Kyverno ClusterPolicy validated its signature against the public key defined in the step before:
kubectl get pod
NAME           READY   STATUS              RESTARTS   AGE
myawesomeapp   0/1     ContainerCreating   0          2s

10. Testing the Kyverno ClusterPolicy to ensure that ONLY our signed images can be deployed in K8s/OCP

The policy rule check fails if the signature is not found in the OCI registry, or if the image was not signed using the specified key. And because validationFailureAction is set to enforce, Kyverno will not allow the image to be deployed to the k8s/ocp clusters.

Let’s see that in action!

  • First of all, let’s push the same image with a different tag, WITHOUT signing it with our keypair and the cosign tool:
podman pull registry.access.redhat.com/ubi8/ubi-minimal:8.4-212
podman tag registry.access.redhat.com/ubi8/ubi-minimal:8.4-212 ghcr.io/${USERNAME}/ubi-minimal:nosignedandnotsecure
podman push ghcr.io/${USERNAME}/ubi-minimal:nosignedandnotsecure
  • Let’s try to run a pod using the image that is NOT signed by our cosign keypair:
kubectl run hackedpod --image=ghcr.io/${USERNAME}/ubi-minimal:nosignedandnotsecure
Error from server: admission webhook "mutate.kyverno.svc-fail" denied the request:

resource Pod/test/hackedpod was blocked due to the following policies

check-image:
  check-image: 'image verification failed for ghcr.io/rcarrata/ubi-minimal:nosignedandnotsecure:
    signature mismatch'

and voilà!

Kyverno prevented the deployment of the pod using a non-signed, non-verified container image, saving the day (and our software supply chain as well).
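As mentioned earlier, policy enforcement is captured using Kubernetes events, so you can also review the decision after the fact; a hedged check (the exact event wording may vary by Kyverno version):

kubectl get events --all-namespaces | grep -i check-image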

NOTE: this example uses Kyverno, but other policy engines are also compatible with Cosign, such as OPA or Connaisseur.

And with that, the first part of this blog post around image signing and verification using Sigstore and Kyverno is finished!

Check out the second part of this blog post, where we implement a Tekton pipeline to automate this whole process, and we will see how Kyverno and Sigstore can help us secure the software supply chain!

Happy hacking!