Kubernetes Operator
The wasmCloud operator makes it easy to run wasmCloud and WebAssembly workloads on Kubernetes.
We use the operator pattern to run wasmCloud on Kubernetes, leveraging the orchestrator to schedule wasmCloud infrastructure and workloads.
By aligning to Kubernetes, teams can adopt WebAssembly (Wasm) progressively—and integrate wasmCloud with existing tooling for ingress, registries, CI/CD, and other areas of the cloud-native ecosystem.
The wasmCloud platform on Kubernetes
Along with the wasmCloud operator, the wasmCloud platform on Kubernetes consists of these core parts:
- Custom resource definitions (CRDs) for wasmCloud infrastructure and Wasm workloads.
- wasmCloud host(s) - Sandboxed runtime environments for WebAssembly components. (By default, these are `wash` binaries using the `wash host` command to run a cluster host (washlet) that surfaces the `wash-runtime` API over NATS.)
- NATS with JetStream - CNCF project that provides a connective layer for transport between the operator and hosts, along with built-in object storage through JetStream. NATS carries all control-plane traffic to the host (the host never communicates directly with the Kubernetes API). The operator sends workload start/stop requests to individual hosts via NATS subjects, and hosts self-register by publishing heartbeat messages the operator subscribes to. The Helm chart bundles NATS automatically—you can also connect an existing NATS cluster by setting `nats.enabled: false` and pointing the operator at your endpoint.
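A values override for that external-NATS setup might look like the sketch below. The `nats.enabled` key comes from the text above; the URL key name is a placeholder to verify against the chart's values reference:

```yaml
# values-external-nats.yaml -- sketch, not a verbatim chart schema
nats:
  enabled: false            # skip the bundled NATS deployment
# Placeholder key name: check the chart's values reference for the real setting
natsAddress: nats://nats.example.internal:4222
```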
HTTP traffic reaches workloads through standard Kubernetes Services: the operator manages an EndpointSlice for each user-defined Service referenced by a workload, so cluster DNS (e.g. my-svc.default.svc.cluster.local) resolves directly to the host pods. See Expose a Workload via Kubernetes Service for the full pattern.
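Conceptually, the operator-managed EndpointSlice resembles a hand-written one; the generated name and pod IP below are illustrative:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-svc-abc12                       # generated by the operator; name illustrative
  labels:
    kubernetes.io/service-name: my-svc     # binds the slice to the user-defined Service
addressType: IPv4
ports:
  - port: 80
    protocol: TCP
endpoints:
  - addresses:
      - 10.244.1.17                        # IP of a wasmCloud host pod running the workload
```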
Earlier releases included a separate Runtime Gateway deployment that proxied HTTP traffic to workloads. The gateway is deprecated as of 2.0.3 and will be removed in a future release; routing is now handled by the operator via EndpointSlices. The chart still installs a gateway pod by default for backwards compatibility — set gateway.enabled: false to skip it.
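If you don't need the backwards-compatible gateway pod, the flag mentioned above translates to a small values override (the nested form is assumed from the flat `gateway.enabled: false` notation):

```yaml
# values override: skip the deprecated Runtime Gateway deployment
gateway:
  enabled: false
```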
The entire platform can be deployed with Helm using the wasmCloud operator Helm chart. (NATS and hosts can also be installed separately, if you wish.)
For a detailed breakdown of each component's responsibilities and the request flow, see the Operator Overview. For commonly-overridden chart values, see the Helm Values Reference.
Get started with the wasmCloud operator
Installation requires the following tools:

- kubectl
- Helm
- Docker (only if using kind or k3d for a local cluster)
Select your Kubernetes environment:
- Existing cluster
- kind
- k3d
- k3s
If you already have a Kubernetes cluster, skip cluster creation. Verify your kubectl context is pointing to the right cluster:
```shell
kubectl cluster-info
```

kind runs Kubernetes nodes as Docker containers.
The following command downloads a kind-config.yaml from the wasmCloud/wasmCloud repository, starts a cluster with port 80 mapped for ingress, and deletes the config upon completion:
```shell
curl -fLO https://raw.githubusercontent.com/wasmCloud/wasmCloud/refs/heads/main/deploy/kind/kind-config.yaml && kind create cluster --config=kind-config.yaml && rm kind-config.yaml
```

k3d runs a lightweight k3s cluster inside Docker. It starts quickly and supports LoadBalancer services natively.
```shell
k3d cluster create wasmcloud --port "80:80@loadbalancer"
```

k3s is a lightweight Kubernetes distribution. Linux only.
Install k3s:
```shell
curl -sfL https://get.k3s.io | sh -
```

Configure kubectl to use the k3s cluster:
```shell
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER ~/.kube/config
```

Install the wasmCloud operator
Use Helm to install the wasmCloud operator from an OCI chart image:
```shell
helm install wasmcloud --version 2.0.3 oci://ghcr.io/wasmcloud/charts/runtime-operator \
  -f https://raw.githubusercontent.com/wasmCloud/wasmCloud/refs/heads/main/charts/runtime-operator/values.local.yaml
```

The command is the same for all four environments. The values.local.yaml file configures the host HTTP port to 80; on kind, the cluster config maps this to host port 80 via a NodePort Service.

Along with the wasmCloud operator, wasmCloud CRDs, and NATS, the Helm chart will deploy three wasmCloud hosts using the wasmcloud/wash container image.
You can build your own hosts that provide extended capabilities via host plugins.
You can find the full set of configurable values for the chart in wasmCloud/wasmCloud/charts/runtime-operator.
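As one illustration, a custom override file can be passed alongside (or instead of) values.local.yaml with an additional `-f` flag on the `helm install` command. The key names below are assumptions to verify against the chart's values file:

```yaml
# my-values.yaml -- hypothetical keys, verify against charts/runtime-operator
hosts:
  replicas: 3      # the chart deploys three hosts by default
```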
Verify the deployment:
```shell
kubectl get pods -l app.kubernetes.io/instance=wasmcloud -n default
```

Once all pods are running, you're ready to deploy a Wasm workload.
Deploy a Wasm component
Deploying a Wasm component that handles HTTP traffic takes two resources: a Kubernetes Service that exposes the component, and a WorkloadDeployment that references the Service. The operator creates an EndpointSlice for the Service pointing at whichever host pods are running the workload. Save the following manifest as hello-world.yaml and apply it with `kubectl apply -f hello-world.yaml`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: runtime.wasmcloud.dev/v1alpha1
kind: WorkloadDeployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    spec:
      hostSelector:
        hostgroup: default
      kubernetes:
        service:
          name: hello-world
      components:
        - name: hello-world
          image: ghcr.io/wasmcloud/components/hello-world:0.1.0
          hostInterfaces:
            - namespace: wasi
              package: http
              interfaces:
                - incoming-handler
```

This manifest deploys a simple "Hello world!" component that uses the wasi:http interface, scheduled on the default hostgroup and reachable inside the cluster at hello-world.default.svc.cluster.local.
For a kind-optimized NodePort variant that answers curl localhost on port 80, see the wasmCloud-hosted hello-world manifest used by the Installation guide.
Learn more about WorkloadDeployments and other wasmCloud resources in the Custom Resource Definitions (CRDs) section. For the full Service-routing walk-through, see Expose a Workload via Kubernetes Service.
Verify the component is reachable from inside the cluster:
```shell
kubectl run curl --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://hello-world.default.svc.cluster.local
```

```
Hello from wasmCloud!
```

Clean up
Delete the workload deployment and its Service:
```shell
kubectl delete workloaddeployment hello-world
kubectl delete service hello-world
```

Uninstall wasmCloud:

```shell
helm uninstall wasmcloud
```

Delete the local Kubernetes environment:
- Existing cluster
- kind
- k3d
- k3s
No action needed — your cluster remains running.
For kind:

```shell
kind delete cluster
```

For k3d:

```shell
k3d cluster delete wasmcloud
```

For k3s:

```shell
/usr/local/bin/k3s-uninstall.sh
```

Next steps
- Explore the `wash/examples` directory for more advanced Wasm component examples.
- The Custom Resource Definitions (CRDs) section explains the custom resources used by wasmCloud.
- The API reference provides a complete API specification.
- For edge or resource-constrained deployments, see Lightweight Deployments for a lightweight alternative to kind.