WasmCon NA 2025 Liveblog

Welcome to our liveblog of WasmCon at KubeCon + CloudNativeCon NA 2025 in Atlanta!
Welcome + Opening Remarks
Bailey Hayes, Cosmonic
Cosmonic CTO, W3C WebAssembly WASI Subgroup co-chair, and Bytecode Alliance At-Large Director Bailey Hayes opened WasmCon with a talk on WebAssembly's evolution from browser technology to universal runtime, covering major milestones from this year, such as WebAssembly's 10th birthday and the growing ecosystem of language support.
Then she looked ahead to WASI P3 with some of the possibilities that it will unlock:
- Language-integrated concurrency
- Composable concurrency
- High-performance streaming
To help look ahead to WASI P3, Bailey introduced the first talk by Luke Wagner.
Does the Component Model require extra copying?
Luke Wagner, Fastly
First, Luke reviewed the reasons you might use Wasm:
- Size - run on smaller devices, or get cost savings on larger devices.
- Cold start - more aggressively scale to zero
- Portability - Shift workloads dynamically to lower-power, lower-cost CPUs
- Sandboxing - Easily sandbox guest code with explicitly allowed APIs
Components take these features further with...
- Free SDKs in the WIT IDL
- Virtual platform layering
- JS glue code for free
- Secure polyglot packages
- Modularity
But at what cost? Does it take extra copying to achieve these benefits?
In an HTTP request interface, we might define an http-handler interface that we both import and export, and a response might be copied into resources for each of those. That's redundant, but it can be addressed with resource types, which let us create resources that can be shared as needed.
WASI 0.3 adds async functions and stream and future types for use in function signatures. Luke focused on streams as a solution to the copying problem, since streams can be passed around as needed.
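To make the streaming idea concrete, here's a minimal Rust sketch using plain standard-library types (not the actual WASI 0.3 bindings): instead of materializing and copying a whole body across the boundary, the handler is handed a readable stream it can pull from incrementally.

```rust
use std::io::{Cursor, Read};

// Copying approach: the whole body is materialized (and duplicated at the
// interface boundary) before the handler ever sees it.
fn handle_copied(body: Vec<u8>) -> usize {
    body.len()
}

// Streaming approach: the handler receives a readable handle and pulls bytes
// incrementally; no intermediate full-body copy is required.
fn handle_streamed(mut body: impl Read) -> std::io::Result<usize> {
    let mut total = 0;
    let mut chunk = [0u8; 8 * 1024];
    loop {
        let n = body.read(&mut chunk)?;
        if n == 0 {
            break;
        }
        total += n;
    }
    Ok(total)
}

fn main() -> std::io::Result<()> {
    let payload = vec![42u8; 1_000_000];
    println!("copied: {} bytes", handle_copied(payload.clone()));
    println!("streamed: {} bytes", handle_streamed(Cursor::new(payload))?);
    Ok(())
}
```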
Looking farther ahead, Luke discussed a minimalist memory mapping approach that may be promising post-1.0. This would allow zero-copy passing while preserving the shared-nothing property of the component model.
In conclusion, Luke reviewed how much of the copying that occurs in the Component Model needs to happen anyway, noting that the problematic copying that remains has promising solutions at varying stages of development. So, to answer the question in the title:
"Does the Component Model require extra copying?"
No.
...mostly
...but it's worth it
...and it'll get better over time.
Whamm: A Framework for Performant, Sandboxed Instrumentation
Elizabeth Gilbert, Carnegie Mellon
Instrumentation injects logic into code to perform some operation--frequently observability. You could also use this to, say, validate that code follows security policies.
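As a rough illustration of what "injecting logic into code" looks like, here's a hand-rolled Rust sketch of a call-counting probe -- this is not Whamm's DSL or its generated output, just the underlying idea.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Global counter standing in for instrumentation state.
static CALLS: AtomicU64 = AtomicU64::new(0);

// The "probe": logic an instrumentation tool would inject before a call site.
fn probe_before_call() {
    CALLS.fetch_add(1, Ordering::Relaxed);
}

// The original, uninstrumented application logic.
fn business_logic(x: u64) -> u64 {
    x * 2
}

// The instrumented version: same behavior, plus the injected probe.
fn business_logic_instrumented(x: u64) -> u64 {
    probe_before_call();
    business_logic(x)
}

fn main() {
    for i in 0..5 {
        business_logic_instrumented(i);
    }
    println!("observed {} calls", CALLS.load(Ordering::Relaxed));
}
```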
So what if we could write a tool once and have it carry across domains by design? Whamm is an answer to that question, attempting to reduce ecosystem fragmentation. Wasm is a good fit here because it's polyglot and managed.
Elizabeth walked through hot paths in a Wasm program when using Whamm, the Whamm framework architecture, and a demo of Whamm in action. In the demo, Elizabeth ran Wasm instrumented with Whamm to generate a call graph visualization.
Then she discussed the optimizations that make the system performant, including the importance of wei optimizations.
In the future, Whamm may be able to target non-Wasm events for use-cases like eBPF.
There are also plans to observe chained service dependencies, great for enforcing security policies, validating data provenance, and more.
Lightning Talk: AI at the Edge: ONNX Inference in Wasm on Featherweight K0s
Prashant Ramhit, Mirantis
Prashant discussed an approach to deploying ONNX machine learning models via WebAssembly on lightweight k0s Kubernetes clusters. The project started with a desire to collect ocean metrics from a submarine buoy. They used a stack of a Raspberry Pi, k0s, and the WasmEdge runtime, plus the ONNX inference engine. With this stack, a fleet of buoys is monitoring its environment with inferencing at the edge.
Content Authenticity Initiative Trustmark With WASI WebGPU
Colin Murphy, Adobe & Mendy Berger, Cosmonic
Colin works on the Content Authenticity Initiative (CAI) team at Adobe. This team tries to establish content provenance and authenticity, and they're doing so with invisible watermarks applied using WebGPU.
Colin and Mendy demonstrated how this could be achieved using a Wasm component that runs on wasmCloud and uses wasi:webgpu from the WASI-GFX proposal, which brings graphics and GPU functionality outside the browser; the same component can also be transpiled to run in the browser. They showed images being encoded with the open-source Trustmark watermark system both in the browser and on CDN edge compute.
Si Vis Pacem, Adde Rete—If You Want Peace, Add a Mesh!
Flynn, Buoyant & Bailey Hayes, Cosmonic
Flynn broke down how Linkerd works to provide security, reliability, and observability via service mesh. Bailey demonstrated how wasmCloud can deploy a componentized MCP server that makes requests against a Swagger petstore API server.
In their demo, they showed how to...
- Install Linkerd
- Use Bytecode Alliance tooling that bundles TypeScript into a .wasm
- Build the petstore demo with wash dev
- Deploy the petstore demo with wasmCloud
- Get HTTP metrics from Linkerd via the mesh
Then they discussed some potential gotchas:
- NATS must be marked opaque in Linkerd
- Wasm apps are sandboxed -- this is generally a good thing, but it's worth keeping in mind
Serverless on Kubernetes: Wasm, Knative, or Regular Autoscaling – What Actually Works?
David Pech, Wrike
David discussed autoscaling, comparing the three approaches in the title. To frame the comparison, he walked through several candidate scaling metrics. Should you scale on average memory? That ultimately turns out to be a poor fit (for reasons such as garbage collection behavior in Ruby). What about average CPU? That tends to be too flaky. What about an app-specific metric? Selecting the best scaling metric seems easy at first but is surprisingly tricky -- by comparison, one separate process per request is not a bad idea.
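For a sense of the arithmetic involved: whichever metric you choose, Kubernetes' Horizontal Pod Autoscaler scales replicas proportionally to how far the observed value is from the target. Here's that rule sketched in Rust (the request-count numbers are made up for illustration):

```rust
// The standard Kubernetes HPA scaling rule: whichever metric you pick
// (memory, CPU, or an app-specific value), the replica count scales
// proportionally to how far the current value is from the target.
fn desired_replicas(current_replicas: u32, current_metric: f64, target_metric: f64) -> u32 {
    (current_replicas as f64 * (current_metric / target_metric)).ceil() as u32
}

fn main() {
    // e.g. 4 replicas, each averaging 180 in-flight requests against a target of 100
    println!("{}", desired_replicas(4, 180.0, 100.0)); // -> 8
}
```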
Based on his experimentation with Wasm and Knative, David came to the following conclusions:
- Knative was relatively straightforward, though it required a service mesh.
- David found that using Kwasm involved some friction--the "Server-side WebAssembly" book was helpful here.
Lightning Talk: What If the Runtime Was Portable Too? Self-Hosted Runtime Capabilities in Wasm
Yuki Nakata, SAKURA Internet Inc.
Yuki reviewed the portability of Wasm, which lets users write once and run anywhere. But there is a challenge in ensuring that a given runtime supports the same capabilities and features as others, so Yuki posed the question of how to implement capabilities and features in a runtime-neutral way.
His approach is a minimal Wasm runtime that itself runs as a Wasm module, used to facilitate runtime compatibility. He discussed three use cases:
- Cross-runtime checkpoint/restore
- Tracing and instrumentation
- Executing features the host runtime doesn't support
This approach does come with performance overhead, since it increases the number of bytecode instructions executed on the host runtime. Yuki discussed optimization techniques for the self-hosted runtime, including merging instructions and a pass-through WASI implementation.
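Here's a toy Rust sketch of the instruction-merging idea (not Chiwawa's actual implementation): a tiny stack machine where a common const + add pair is fused into one superinstruction, so the self-hosted interpreter pays fewer dispatches on the host runtime.

```rust
// A toy stack machine illustrating instruction merging: fusing adjacent
// const + add pairs into a single "superinstruction" reduces the number of
// dispatch iterations the interpreter performs.

#[derive(Clone)]
enum Op {
    Const(i64),
    Add,
    ConstAdd(i64), // fused: push a constant and add it in one dispatch
}

// Rewrite the instruction stream, fusing Const followed by Add.
fn fuse(ops: &[Op]) -> Vec<Op> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < ops.len() {
        match (&ops[i], ops.get(i + 1)) {
            (Op::Const(n), Some(Op::Add)) => {
                out.push(Op::ConstAdd(*n));
                i += 2;
            }
            (op, _) => {
                out.push(op.clone());
                i += 1;
            }
        }
    }
    out
}

// Execute the (possibly fused) instruction stream.
fn run(ops: &[Op]) -> i64 {
    let mut stack = Vec::new();
    for op in ops {
        match op {
            Op::Const(n) => stack.push(*n),
            Op::Add => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a + b);
            }
            Op::ConstAdd(n) => {
                let a = stack.pop().unwrap();
                stack.push(a + n);
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    let program = vec![Op::Const(1), Op::Const(2), Op::Add, Op::Const(3), Op::Add];
    let fused = fuse(&program);
    assert_eq!(run(&program), run(&fused));
    println!("result: {} (ops: {} -> {})", run(&fused), program.len(), fused.len());
}
```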
This work is available in the Chiwawa project on GitHub.
Lightning Talk: Composable, Polyglot Concurrency with WASIp3
Thorsten Hans & Karthik Ganeshram, Fermyon Technologies, Inc.
Thorsten and Karthik demonstrated a service that uppercases an incoming body and returns it. They ran the component on Spin and got the text from the message body back in all-caps. Another middleware component can compress or decompress data depending on which function is invoked, and it can be composed with the original component to create a single component. WASI P3 brings an async function ABI. Source code is available at https://github.com/fermyon/wasmcon-2025-demo
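As a rough sketch of the composition idea -- modeled as plain Rust functions rather than actual Spin/WASI P3 components, with the compression step stubbed out -- the middleware simply wraps the original handler so the two stages behave like a single unit:

```rust
// The uppercasing handler from the demo, as a plain function.
fn uppercase_body(body: &[u8]) -> Vec<u8> {
    body.to_ascii_uppercase()
}

// Stand-in for the compression middleware; a real component would apply an
// actual codec such as gzip here.
fn compress(body: &[u8]) -> Vec<u8> {
    body.to_vec() // placeholder: identity "compression"
}

// Composition: the middleware wraps the original handler, mirroring how two
// components can be composed into one.
fn composed_handler(body: &[u8]) -> Vec<u8> {
    compress(&uppercase_body(body))
}

fn main() {
    let response = composed_handler(b"hello wasmcon");
    println!("{}", String::from_utf8_lossy(&response));
}
```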
Conclusion
Thanks for checking out our liveblog! If you're in Atlanta, make sure to visit the wasmCloud kiosk (4A) in the Project Pavilion on the Solutions Showcase floor (Building B, Level 1, Exhibit Hall B3-B5). Hope to see you there!
