Kubernetes Operator
The wasmCloud operator makes it easy to run wasmCloud and WebAssembly workloads on Kubernetes.
We use the operator pattern to run wasmCloud on Kubernetes, leveraging Kubernetes as the orchestrator to schedule wasmCloud infrastructure and workloads.
By aligning to Kubernetes, teams can adopt WebAssembly (Wasm) progressively—and integrate wasmCloud with existing tooling for ingress, registries, CI/CD, and other areas of the cloud-native ecosystem.
The wasmCloud platform on Kubernetes
Along with the wasmCloud operator, the wasmCloud platform on Kubernetes consists of these core parts:
- Custom resource definitions (CRDs) for wasmCloud infrastructure and Wasm workloads.
- wasmCloud host(s) - Sandboxed runtime environments for WebAssembly components. (By default, these are `wash` binaries using the `wash host` command to run a cluster host (washlet) that surfaces the `wash-runtime` API over NATS.)
- Runtime Gateway - HTTP reverse proxy that routes incoming requests to the appropriate wasmCloud host based on deployed workloads. Exposed as a Kubernetes Service.
- NATS with JetStream - CNCF project that provides a connective layer for transport between operator and hosts, along with built-in object storage through JetStream. NATS carries all control-plane traffic to the host (the host never communicates directly with the Kubernetes API). The operator sends workload start/stop requests to individual hosts via NATS subjects, and hosts self-register by publishing heartbeat messages the operator subscribes to. The Helm chart bundles NATS automatically; you can also connect an existing NATS cluster by setting `nats.enabled: false` and pointing the operator at your endpoint.
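Connecting the operator to an existing NATS cluster might look like the following Helm values sketch. Only `nats.enabled` is confirmed above; the `url` key name and endpoint are assumptions, so check the chart's values reference for the exact schema:

```yaml
# Hypothetical values.yaml overrides: disable the bundled NATS and point the
# operator at an existing cluster. Key names other than nats.enabled are
# assumptions; consult the chart's values file for the real schema.
nats:
  enabled: false
  url: nats://nats.example.svc.cluster.local:4222  # example endpoint
```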
The entire platform can be deployed with Helm using the wasmCloud operator Helm chart. (NATS and hosts can also be installed separately, if you wish.)
For a detailed breakdown of each component's responsibilities and the request flow, see the Operator and Gateway Overview.
Get started with the wasmCloud operator
Installation requires the following tools:

- kubectl
- Helm
Select your Kubernetes environment:
- Existing cluster
- kind
- k3d
- k3s
**Existing cluster**

If you already have a Kubernetes cluster, skip cluster creation. Verify your kubectl context is pointing to the right cluster:

```shell
kubectl cluster-info
```

**kind**

kind runs Kubernetes nodes as Docker containers.

The following command downloads a kind-config.yaml from the wasmCloud/wasmCloud repository, starts a cluster with port 80 mapped for ingress, and deletes the config upon completion:

```shell
curl -fLO https://raw.githubusercontent.com/wasmCloud/wasmCloud/refs/heads/main/deploy/kind/kind-config.yaml && kind create cluster --config=kind-config.yaml && rm kind-config.yaml
```

**k3d**

k3d runs a lightweight k3s cluster inside Docker. It starts quickly and supports LoadBalancer services natively.

```shell
k3d cluster create wasmcloud --port "80:80@loadbalancer"
```

**k3s**

k3s is a lightweight Kubernetes distribution. Linux only.

Install k3s:

```shell
curl -sfL https://get.k3s.io | sh -
```

Configure kubectl to use the k3s cluster:

```shell
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER ~/.kube/config
```

Install the wasmCloud operator
Use Helm to install the wasmCloud operator from an OCI chart image:

**Existing cluster**

```shell
helm install wasmcloud --version 2.0.1 oci://ghcr.io/wasmcloud/charts/runtime-operator \
  -f https://raw.githubusercontent.com/wasmCloud/wasmCloud/refs/heads/main/charts/runtime-operator/values.local.yaml \
  --set gateway.service.type=LoadBalancer
```

**kind**

The values.local.yaml file configures the Runtime Gateway as a NodePort service on port 30950, which the kind cluster config maps to host port 80:

```shell
helm install wasmcloud --version 2.0.1 oci://ghcr.io/wasmcloud/charts/runtime-operator \
  -f https://raw.githubusercontent.com/wasmCloud/wasmCloud/refs/heads/main/charts/runtime-operator/values.local.yaml
```

**k3d**

k3d supports LoadBalancer services natively, so we override the gateway service type:

```shell
helm install wasmcloud --version 2.0.1 oci://ghcr.io/wasmcloud/charts/runtime-operator \
  -f https://raw.githubusercontent.com/wasmCloud/wasmCloud/refs/heads/main/charts/runtime-operator/values.local.yaml \
  --set gateway.service.type=LoadBalancer
```

**k3s**

k3s includes a built-in load balancer (Klipper), so we override the gateway service type:

```shell
helm install wasmcloud --version 2.0.1 oci://ghcr.io/wasmcloud/charts/runtime-operator \
  -f https://raw.githubusercontent.com/wasmCloud/wasmCloud/refs/heads/main/charts/runtime-operator/values.local.yaml \
  --set gateway.service.type=LoadBalancer
```

Along with the wasmCloud operator, Runtime Gateway, wasmCloud CRDs, and NATS, the Helm chart will deploy three wasmCloud hosts using the wasmcloud/wash container image.
You can build your own hosts that provide extended capabilities via host plugins.
You can find the full set of configurable values for the chart in wasmCloud/wasmCloud/charts/runtime-operator.
Verify the deployment:

```shell
kubectl get pods -l app.kubernetes.io/instance=wasmcloud -n default
```

Once all pods are running, you're ready to deploy a Wasm workload.
Deploy a Wasm component
Use a WorkloadDeployment manifest to deploy a Wasm component workload to your cluster:

```yaml
apiVersion: runtime.wasmcloud.dev/v1alpha1
kind: WorkloadDeployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    spec:
      hostSelector:
        hostgroup: default
      components:
        - name: hello-world
          image: ghcr.io/wasmcloud/components/hello-world:0.1.0
          hostInterfaces:
            - namespace: wasi
              package: http
              interfaces:
                - incoming-handler
              config:
                host: localhost
```

This manifest deploys a simple "Hello world!" component that uses the `wasi:http` interface to the `default` hostgroup, making it available to call via HTTP from outside the cluster.
You can deploy from the wasmCloud-hosted manifest with this kubectl command:

```shell
kubectl apply -f https://raw.githubusercontent.com/wasmCloud/wasmCloud/refs/heads/main/examples/http-hello-world/manifests/workloaddeployment.yaml
```

Learn more about WorkloadDeployments and other wasmCloud resources in the Custom Resource Definitions (CRDs) section.
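The `hostSelector` field in the manifest determines which hosts are eligible to run the workload. As a hedged sketch (the `edge` hostgroup value below is invented for illustration; the chart's default hosts use `default`), targeting a different host group only requires changing that label:

```yaml
# Hypothetical fragment: schedule the workload onto hosts labeled with a
# different hostgroup. "edge" is an example value, not a chart default.
spec:
  template:
    spec:
      hostSelector:
        hostgroup: edge
```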
Now you can use curl to invoke the component with an HTTP request:

```shell
curl localhost -i
```

The component responds with `Hello from wasmCloud!`

Clean up

Delete the workload deployment:

```shell
kubectl delete workloaddeployment hello-world
```

Uninstall wasmCloud:

```shell
helm uninstall wasmcloud
```

Delete the local Kubernetes environment:
**Existing cluster**

No action needed; your cluster remains running.

**kind**

```shell
kind delete cluster
```

**k3d**

```shell
k3d cluster delete wasmcloud
```

**k3s**

```shell
/usr/local/bin/k3s-uninstall.sh
```

Next steps
- Explore the `wash/examples` directory for more advanced Wasm component examples.
- The Custom Resource Definitions (CRDs) section explains the custom resources used by wasmCloud.
- The API reference provides a complete API specification.
- For edge or resource-constrained deployments, see Lightweight Deployments for a lightweight alternative to kind.