
Integrating SPIFFE in your environment

You can use the SPIFFE Workload API with applications running on Kubernetes in different ways. SPIRL provides two main approaches for integrating the SPIFFE Workload API:

Integration Approaches

| Approach | When to Use | Pros | Cons |
| --- | --- | --- | --- |
| Admission Controller | Most use cases, especially for new applications | Simple setup, automatic injection, consistent across the namespace | Less control over specific configurations |
| Manual Configuration | When you need fine-grained control or can't use the admission controller | Full control over volumes and mounts, works with any admission controller setup | More verbose, requires manual maintenance |

Choose the approach that best fits your needs. For most users, the admission controller approach is recommended.

SPIRL Admission Controller

The easiest way to use the SPIFFE Workload API with applications running on Kubernetes is the SPIRL Admission Controller.

Just add the k8s.spirl.com/spiffe-csi: enabled label to your Pod, and the SPIRL Admission Controller will inject the SPIFFE Workload API into your Pod's containers and set the SPIFFE_ENDPOINT_SOCKET environment variable to the path of the SPIFFE Workload API socket.

Here is a quick example of how to use the SPIRL Admission Controller with your Pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: admission-controller-demo
  namespace: spiffe-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: admission-controller-demo
  template:
    metadata:
      labels:
        app: admission-controller-demo
        k8s.spirl.com/spiffe-csi: enabled
    spec:
      serviceAccountName: admission-controller-demo
      containers:
        - name: admission-controller-demo
          image: ghcr.io/elinesterov/spiffe-demo-app:v0.2.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080

How it works

When you installed the SPIRL components in your Kubernetes cluster, you also installed the SPIRL Admission Controller and the SPIFFE CSI Driver. The SPIRL Admission Controller is a Kubernetes admission controller that intercepts requests to create or update Pods and injects the SPIFFE Workload API into the containers of Pods that carry the k8s.spirl.com/spiffe-csi: enabled label.

To expose the SPIFFE Workload API to the Pod, the SPIRL Admission Controller adds a CSI volume that is mounted into each container under /spiffe-workload-api/. The agent.sock file inside that directory hosts the SPIFFE Workload API, so the full path to the SPIFFE Workload API socket provided by SPIRL inside each container is /spiffe-workload-api/agent.sock.

To help SPIFFE-aware applications find the SPIFFE Workload API socket, the SPIRL Admission Controller also adds the SPIFFE_ENDPOINT_SOCKET environment variable, set to the path of the socket, to each container.
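
In effect, the mutated Pod spec ends up with roughly the same pieces that the manual configuration below adds by hand. The sketch here is for illustration only; the exact field values (and the volume name) are chosen by the admission controller:

spec:
  containers:
    - name: your-app                     # your existing container
      env:
        - name: SPIFFE_ENDPOINT_SOCKET   # injected by the admission controller
          value: "unix:///spiffe-workload-api/agent.sock"
      volumeMounts:
        - name: spiffe-workload-api      # illustrative volume name
          mountPath: /spiffe-workload-api
  volumes:
    - name: spiffe-workload-api          # illustrative volume name
      csi:
        driver: csi.spiffe.io
        readOnly: true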

Webhook Configuration Options

The SPIRL Admission Controller webhook can be configured to target resources using different selector strategies:

Object Selector (Default)

By default, SPIRL uses an objectSelector that targets pods with the k8s.spirl.com/spiffe-csi: enabled label:

webhook:
  namespaceSelector: {}
  objectSelector:
    matchLabels:
      k8s.spirl.com/spiffe-csi: enabled

Namespace Selector

Alternatively, you can configure it to target all pods within specific namespaces using a namespaceSelector:

webhook:
  namespaceSelector:
    matchLabels:
      k8s.spirl.com/spiffe-csi: enabled
  objectSelector: {}

Then label your namespace:

kubectl label namespace my-namespace k8s.spirl.com/spiffe-csi=enabled

Or using YAML:

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    k8s.spirl.com/spiffe-csi: enabled

With this configuration, all pods created in the labeled namespace will automatically have the SPIFFE CSI driver injected, regardless of their parent workload type (Deployments, StatefulSets, DaemonSets, etc.).
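
For example, a Deployment created in that namespace needs no extra label on its Pod template to be injected (names below are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # illustrative name
  namespace: my-namespace      # namespace labeled k8s.spirl.com/spiffe-csi=enabled
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app            # no k8s.spirl.com/spiffe-csi label required on the Pod
    spec:
      containers:
        - name: my-app
          image: ghcr.io/elinesterov/spiffe-demo-app:v0.2.1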

Combined Selectors

You can also use both selectors together for fine-grained control. Both conditions must be satisfied (AND logic):

webhook:
  namespaceSelector:
    matchLabels:
      k8s.spirl.com/spiffe-csi: enabled
  objectSelector:
    matchLabels:
      app: my-specific-app

This configuration would only inject SPIFFE for workloads that are both in a namespace labeled k8s.spirl.com/spiffe-csi: enabled AND have pods labeled app: my-specific-app.
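
As a sketch, a Pod like the following, created in a namespace carrying the k8s.spirl.com/spiffe-csi: enabled label, would satisfy both selectors (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: my-specific-app-pod    # illustrative name
  namespace: my-namespace      # namespace labeled k8s.spirl.com/spiffe-csi=enabled
  labels:
    app: my-specific-app       # matches the objectSelector
spec:
  containers:
    - name: my-specific-app
      image: ghcr.io/elinesterov/spiffe-demo-app:v0.2.1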

Manual SPIFFE Workload API Configuration

If you prefer not to use the admission controller or need more control over the SPIFFE integration, you can manually configure the CSI driver in your pod specifications. This approach gives you full control over the SPIFFE Workload API setup.

The SPIRL Agent provides the SPIFFE Workload API socket on each Kubernetes node under the default path /run/spirl/sockets/agent.sock. That path can be mounted directly into each Pod, but this option is not recommended in production environments. The recommended way is to use the SPIFFE CSI Driver to mount the SPIFFE Workload API socket into your Pod.
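
For reference, mounting the node socket directory directly would look roughly like the hostPath volume below; it is shown only for contrast and is not the recommended approach:

# Not recommended: mount the agent socket directory from the node via hostPath
volumes:
  - name: spiffe-workload-api
    hostPath:
      path: /run/spirl/sockets
      type: Directory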

First, you'll need to add a volume to your Pod:

volumes:
  - name: spiffe-csi-driver-volume
    csi:
      driver: csi.spiffe.io
      readOnly: true

Second, add a volume mount to your container:

volumeMounts:
  - name: spiffe-csi-driver-volume
    mountPath: /spiffe-workload-api/

Make sure the name here matches the name used in the volume definition. The SPIFFE Workload API socket will then be available inside your container at /spiffe-workload-api/agent.sock.

Finally, add an environment variable to your container. The SPIFFE_ENDPOINT_SOCKET value is used by SPIFFE-aware applications to locate the SPIFFE Workload API socket:

env:
  - name: SPIFFE_ENDPOINT_SOCKET
    value: "unix:///spiffe-workload-api/agent.sock"

Here is a complete manual example using a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: manual-spiffe-demo
  namespace: spiffe-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manual-spiffe-demo
  template:
    metadata:
      labels:
        app: manual-spiffe-demo
    spec:
      serviceAccountName: manual-spiffe-demo
      containers:
        - name: manual-spiffe-demo
          image: ghcr.io/elinesterov/spiffe-demo-app:v0.2.1
          imagePullPolicy: IfNotPresent
          env:
            - name: SPIFFE_ENDPOINT_SOCKET
              value: "unix:///spiffe-workload-api/agent.sock"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: spiffe-csi-driver-volume
              mountPath: /spiffe-workload-api
      volumes:
        - name: spiffe-csi-driver-volume
          csi:
            driver: "csi.spiffe.io"
            readOnly: true