wasmCloud CI Hardening, Cache Pre-Compilation & Benchmarking
The May 13, 2026 wasmCloud community call covers CI security hardening — including cryptographic attestations for OCI artifacts, zizmor-based workflow linting, and status gates — alongside a deep look at the new cache artifact pre-compilation pipeline that offloads CPU-intensive Cranelift AOT compilation to a dedicated process. Bailey Hayes demos the "Are We Fast Yet?" performance dashboard with Criterion and Cachegrind benchmarks, and the team discusses Pulley interpreter performance (10x vs native, compared to 60x for JS/Python interpreters), WASI TLS progress, and the path to WASI P3.
Key Takeaways
- Automated OCI artifact publishing now triggers on every example version bump, and the automated Tuesday train release ran successfully for the first time
- CI hardening complete: zizmor workflow linting, explicit job permissions, cryptographic attestations on all OCI artifacts, cargo audit, OpenSSF Scorecard, and Dependabot — all in place before the npm supply-chain scare hit this week
- Status gates pattern adopted for GitHub CI, solving the long-standing problem of required checks on jobs that only run for certain file paths
- Cache artifact pre-compilation pipeline moves Cranelift AOT compilation out of the workload host into a dedicated pre-compiler process, storing serialized cwasm in a shared cache for fast deserialization
- "Are We Fast Yet?" dashboard launched with Criterion (wall-time) and Cachegrind/Valgrind (instruction-counting) benchmarks, backed by a dedicated Hetzner AMD Ryzen 5 box and S3-backed historical data
- Pulley interpreter benchmarks show ~10x slowdown vs AOT native — dramatically better than JavaScript/Python's ~60x gap, making it viable for edge and microcontroller deployments
- WASI TLS implementation for P3 socket connections landing this week, enabling components to create TLS wrappers without routing through host HTTP handlers
- WASI P3 vote imminent: JCO reference implementation update tomorrow in the WASI subgroup, followed by open discussion and a final flush-out of remaining issues before the vote
Chapters
- 0:00 — Roadmap overview and Q2 progress
- 4:59 — CI hardening: zizmor, attestations, status gates
- 11:56 — Outgoing handler and WASI TLS implementations
- 14:20 — Cache artifact pre-compilation pipeline deep dive
- 15:49 — cwasm architecture: Cranelift, Winch, and Pulley backends
- 18:17 — Interpreted vs compiled WebAssembly performance
- 20:20 — Pulley interpreter: 10x vs native, memory footprint
- 21:19 — Async functions and Pulley limitations
- 24:00 — "Are We Fast Yet?" benchmarking dashboard demo
- 28:47 — Benchmarking infrastructure: Hetzner, S3, Criterion, Cachegrind
- 33:06 — WASI P3 status: JCO, cooperative threads, WASI SDK
- 35:11 — Hyper, sockets, and reqwest for P3 compatibility
Meeting Notes
CI Security Hardening and Release Automation
Bailey walked through wasmCloud's CI hardening push, most of which landed this week. The team adopted zizmor, a Rust-based GitHub Actions linting tool that flags unsafe patterns like pull_request_target — the same vector used in this week's npm supply-chain attack. All zizmor errors are now fixed and the linter runs continuously on workflow changes.
Other CI improvements include: disabling permissions by default on every workflow job with explicit per-job grants, cryptographic attestations on all OCI artifacts (previously only wash binaries), cargo audit, OpenSSF Scorecard integration, and Dependabot re-enabled.
The team also implemented a status gate pattern to solve GitHub CI's limitation where required checks fail when a job doesn't run because irrelevant files were changed. Now a gate job validates that all applicable checks passed before allowing merge, making auto-merge safe to use.
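In GitHub Actions terms, the gate is a single always-run job that branch protection marks as required, while the real jobs stay free to skip themselves on irrelevant changes. A minimal sketch of the pattern — job names and the skip condition here are hypothetical, not wasmCloud's actual workflow:

```yaml
# Illustrative status-gate sketch; names and filters are hypothetical.
name: ci
on: pull_request

jobs:
  rust-checks:
    runs-on: ubuntu-latest
    # Job-level skip: for irrelevant PRs this job reports "skipped"
    # rather than leaving a required check forever pending.
    if: ${{ !startsWith(github.head_ref, 'docs/') }}
    steps:
      - run: echo "build and test"

  # Branch protection requires ONLY this job. It always runs, and passes
  # when every upstream job either succeeded or was legitimately skipped.
  gate:
    runs-on: ubuntu-latest
    needs: [rust-checks]
    if: ${{ always() }}
    steps:
      - name: Fail if any needed job failed or was cancelled
        if: ${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}
        run: exit 1
```

Because `skipped` and `success` both let the gate pass, auto-merge no longer stalls on checks that never ran.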
The automated train release model ran its first successful release, and OCI artifact publishing is now fully automated — any time an example gets a version bump, it publishes to OCI automatically.
Outgoing Handler and WASI TLS Implementations
Aditya presented two changes expected to merge this week and ship in next Tuesday's automated release:
- Outgoing handler trait implementation — allows custom outgoing handler implementations for the HTTP client, supporting custom CA certs and TLS configurations across both WASI P2 and P3 interfaces
- WASI TLS implementation — experimental support for TLS wrappers on WASI P3 socket connections (TCP), enabling components to interact with external TLS servers directly without routing through the host HTTP handler
Bailey noted that production usage reports for the TLS implementation will support advancing it to phase three in the WASI specification.
Cache Artifact Pre-Compilation Pipeline
Currently, when a WebAssembly component first lands on a wasmCloud host, Wasmtime compiles it in-process using Cranelift AOT compilation, producing cwasm (compiled WebAssembly). This compile step is CPU-intensive and the resulting native code lives in anonymous mmap, both of which can impact co-located workloads.
The proposed pre-compilation pipeline runs a dedicated pre-compiler instance whose sole job is to serialize WebAssembly bytes into cwasm and store them in a shared cache. The workload host then only needs to deserialize the cached cwasm — dramatically reducing startup time and CPU impact.
Key architectural consideration: cwasm is effectively machine code tied to the specific Wasmtime version, engine configuration, and host architecture. The team needs to ensure compatibility with future host types that may use different backends (Winch baseline compiler, Pulley interpreter) rather than Cranelift.
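That compatibility constraint can be made concrete as a cache key: the shared cache must be partitioned by everything the cwasm artifact is tied to. A minimal std-only Rust sketch — the field names and hashing scheme are illustrative, not wasmCloud's actual design (Wasmtime additionally validates engine compatibility itself when deserializing):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Everything a cached cwasm artifact depends on. If any field differs
/// between the pre-compiler and the workload host, the artifact must not
/// be reused. (Illustrative sketch, not wasmCloud's actual key.)
#[derive(Hash)]
struct CwasmCacheKey<'a> {
    wasmtime_version: &'a str, // e.g. "24.0.0"
    engine_config: &'a str,    // canonical rendering of the engine's Config
    target_triple: &'a str,    // e.g. "x86_64-unknown-linux-gnu"
    module_sha256: &'a str,    // content hash of the input .wasm bytes
}

/// Derive a cache partition id from the key fields.
fn cache_key(k: &CwasmCacheKey) -> u64 {
    let mut h = DefaultHasher::new();
    k.hash(&mut h);
    h.finish()
}

fn main() {
    let cranelift = CwasmCacheKey {
        wasmtime_version: "24.0.0",
        engine_config: "cranelift,opt-level=speed",
        target_triple: "x86_64-unknown-linux-gnu",
        module_sha256: "abc123",
    };
    let pulley = CwasmCacheKey {
        wasmtime_version: "24.0.0",
        engine_config: "pulley",
        target_triple: "x86_64-unknown-linux-gnu",
        module_sha256: "abc123",
    };
    // Same module, different backend config: must land in different cache entries.
    assert_ne!(cache_key(&cranelift), cache_key(&pulley));
}
```

Partitioning this way is what lets future host types (Winch, Pulley) share the same cache infrastructure without ever loading an incompatible artifact.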
Performance Benchmarking and "Are We Fast Yet?" Dashboard
Bailey demoed the new performance dashboard, inspired by Mozilla's "Are We Fast Yet?" project. The infrastructure includes:
- Criterion benchmarks for wall-time measurement of operations like component invocation
- Cachegrind/Valgrind benchmarks for instruction counting with lower variance
- A dedicated Hetzner AMD Ryzen 5 box configured following the Rust infrastructure team's performance benchmarking best practices
- S3-backed storage with CloudFront CDN for historical benchmark data and cross-PR comparison dashboards
- Plans to run benchmarks against every release as part of the automated release pipeline
The team discussed the difference between micro benchmarks (Criterion/Valgrind for measuring specific operations) and macro benchmarks (k6-style for end-to-end network testing), with plans to add k6 macro benchmarks with Grafana dashboards in the future.
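For intuition, the core of a wall-time micro benchmark — which Criterion wraps in warm-up, sampling, and statistical analysis — is just a repeated timing loop. A minimal std-only sketch, where the closure is a stand-in workload rather than an actual component invocation:

```rust
use std::hint::black_box;
use std::time::Instant;

/// Time `f` over `iters` iterations and return mean nanoseconds per call.
/// Criterion adds warm-up, outlier rejection, and confidence intervals on
/// top of this raw loop; this sketch shows only the measurement itself.
fn bench<F: FnMut() -> u64>(iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        // black_box keeps the compiler from optimizing the work away.
        black_box(f());
    }
    start.elapsed().as_nanos() as f64 / iters as f64
}

fn main() {
    // Stand-in for "invoke a component": sum a small range.
    let mean_ns = bench(10_000, || (0..1_000u64).sum());
    println!("mean: {:.1} ns/iter", mean_ns);
    assert!(mean_ns > 0.0);
}
```

Instruction-counting tools like Cachegrind sidestep the noise in loops like this entirely, which is why the dashboard pairs both kinds of measurement.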
WebAssembly Runtime Performance: Cranelift vs Pulley
Frank asked about performance differences between interpreted and compiled WebAssembly execution. Bailey shared that Pulley (the Wasmtime interpreter built by Nick Fitzgerald) shows approximately 10x slowdown compared to Cranelift AOT native execution — which is remarkably good given that JavaScript and Python interpreters typically show ~60x gaps versus native code. Pulley's low memory footprint makes it particularly interesting for edge devices and microcontrollers.
Sebastien noted that Pulley currently cannot run async functions, which is a known limitation. Bailey referenced a talk by Mikhail from Mimic about running Wasmtime in isolated environments where determinism is required, which is one of the design motivations for Pulley.
WASI P3 and Ecosystem Updates
- JCO (JavaScript Component Tools) update on WASI P3 reference implementation scheduled for the next day in the WASI subgroup
- Cooperative threads making significant progress — Colin confirmed he checks the upstream repo daily
- New WASI SDK expected with cooperative thread support
- Bailey plans to re-approach the abseil atomics change to make atomics work natively rather than removing them, simplifying the C/C++ WASI story
- Discussion on hyper/reqwest compatibility with P3 — sockets implementation landed with P2 pollable async support and should compile with P3 without changes
WebAssembly News and Updates
This week's wasmCloud community call coincided with a wave of activity in the WebAssembly ecosystem. A supply-chain attack in the npm space using the pull_request_target pattern underscored why CI security tooling like zizmor matters for WebAssembly projects. The wasmCloud team had already hardened their workflows against this exact vector. Meanwhile, WASI P3 continues its march toward a vote, with the JCO reference implementation nearing completion and cooperative thread support advancing across multiple language toolchains. The Bytecode Alliance's sightglass benchmarking suite remains the canonical source for Wasmtime and Cranelift performance data, and Nick Fitzgerald's Pulley interpreter is proving that WebAssembly interpretation can close the gap with AOT compilation far more effectively than traditional scripting language interpreters.
What is wasmCloud?
wasmCloud is a CNCF project that lets you build applications using WebAssembly components and deploy them anywhere — cloud, edge, or Kubernetes clusters. It uses the WebAssembly component model to let you write business logic in any supported language (Rust, Go, Python, TypeScript, C#) while the platform handles capabilities like HTTP, messaging, and key-value storage through a pluggable provider architecture. wasmCloud's runtime is built on Wasmtime with Cranelift AOT compilation by default, and the project is actively working on support for alternative backends like the Pulley interpreter for edge deployments. With built-in OpenTelemetry observability, OCI artifact distribution, and Kubernetes integration, wasmCloud bridges the gap between WebAssembly's portable, sandboxed execution model and production cloud-native infrastructure.
Topic Deep Dive: WebAssembly Component Model
This meeting's cache artifact pre-compilation discussion directly relates to how the WebAssembly component model works in production. When a Wasm component is compiled to cwasm, the resulting native code is tied to the specific Wasmtime version, engine configuration, and host architecture — a tradeoff for the performance gains of AOT compilation. Wasmtime's cwasm serialization format enables the pre-compilation pipeline pattern discussed in this call: compile once in a dedicated process, cache the result, and deserialize on every subsequent host that needs the same component. This is a significant optimization for platforms running the same component across many hosts, which is exactly wasmCloud's deployment model. Wasmtime validates engine compatibility when loading a cached cwasm — if the engine version or configuration has changed, the artifact is rejected and must be recompiled.
Who Should Watch This
This call is particularly valuable for platform engineers evaluating CI hardening patterns for WebAssembly supply chains, SREs interested in the pre-compilation pipeline for reducing component startup latency, and WebAssembly runtime developers comparing Cranelift AOT vs Pulley interpreter performance characteristics. If you're building with the Wasm component model and care about production deployment performance, the benchmarking infrastructure walkthrough is worth watching from 24:00.
Up Next
The next wasmCloud community call will cover updates from the WASI P3 vote and the JCO reference implementation progress. Colin Murphy has a demo in the works, and the team expects the outgoing handler and WASI TLS changes to ship in the automated Tuesday release.
Get Involved
wasmCloud is a CNCF project and contributions are welcome. Join the community:
- GitHub — star the repo and check out open issues
- Slack — join the conversation
- Community Meetings — every Wednesday at 1:00 PM ET
- wasmCloud Blog — latest news and releases
Full Transcript
Read the complete transcript with speaker labels and timestamps: