Transcript: TypeScript Templates, Component Interposition Middleware & a cAdvisor for WebAssembly
wasmCloud Weekly Community Call — Wed, February 4, 2026 · 58m 22s
Speakers: Bailey Hayes, Eric, Elizabeth Gilbert, Marcin Ziółkowski, Lucas Fontes, Liam Randall, Frank Schaffa, Jeremy Fleitz, Colin Murphy, ossfellow
Full Transcript
Bailey Hayes 02:12
I've been snowed in. It finally cleared up yesterday — finally got out and did a grocery run after being stuck for like, we counted eight days. They couldn't plow our street, it's on a hill. While our neighbor was trying to plow, my other neighbor went out on his skis with his dog. We've got a good agenda today. We're getting close on a lot of different milestones — both P3 and v2. I also have my mom's dog. His name is Cosmo, after her favorite drink. She saw all the snow coming last Friday, dropped him off here, and got the last plane out to Miami.
Bailey Hayes 04:32
Welcome to wasmCloud Wednesday, first one of the month of February. We've got two demos today, and we're going to run through a series of updates on wash v2 — including procedures for transitioning from the wash repo over to the wasmcloud/wasmcloud repo. But first, let's talk about what we're working on right now: exercising v2 in a variety of ways. One of those is making sure we've got templates so other people can build with them. Eric created a set for TypeScript. Eric — take it away.
Eric 05:24
Background — we're in the last stretch for wash v2 and for documenting wash v2. A big part of that is templates that users can use to create new applications, and examples that show how to build with wash v2. Several maintainers got together and talked about conventions around these examples — to make them more usable.
Philosophically we want these — especially the templates — to be:
- Robust — enough functionality to build real apps
- Approachable — standard and idiomatic, using std for Rust, and jco with standard web APIs like fetch for TypeScript, keeping generated bindings under the surface
We developed a couple of conventions:
- Explicit names based on interfaces. Three examples here: http-hello-world (HTTP interface, minimal hello world), http-client, and http-service-hono. Each name tells you exactly what interface it's demonstrating and what it does
- config.yaml files for wash that enable automatic dev and build upon download — wash new pointing at the Git repo just works
- Each template has its own .gitignore — standardized but differing by language toolchain, excluding WIT dependencies and generated bindings; templates fetch deps and regenerate on dev or build
- Standardized world.wit package and world names — package wasmcloud:templates, world named for the language (e.g., typescript). Brings a little order
- README required — usability and approachability
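Under that convention, a template's world.wit might look roughly like this (an illustrative sketch — the actual imports and exports vary per template):

```wit
// Hypothetical world.wit following the naming convention above.
package wasmcloud:templates;

world typescript {
  // an HTTP template would pull in the wasi-http handler interfaces
  import wasi:http/outgoing-handler@0.2.0;
  export wasi:http/incoming-handler@0.2.0;
}
```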
For HTTP Hello World — maybe the exception on robustness because it's designed for a quick start. All it's doing is returning "Hello from TypeScript" via the Fetch API. Very simple — nine lines of code, really seven. Approachable and easy to understand.
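Roughly, the handler amounts to this (an illustrative sketch using the standard Request/Response types — not the template's literal source, which wires this up through jco's generated glue):

```typescript
// Minimal hello-world handler sketched with the Fetch API's standard
// Request/Response types (illustrative; the template exports this via
// jco's wasi:http glue rather than calling it directly).
function handle(_req: Request): Response {
  return new Response("Hello from TypeScript", {
    headers: { "content-type": "text/plain" },
  });
}
```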
Let's look at one that's more robust — our new TypeScript HTTP client template. We're using the Fetch API. When we run wash dev, we fetch WIT deps, generate bindings, and we're running.
Eric 09:36
This component makes outgoing HTTP requests using the standard Fetch API. Several endpoints — one for making requests, one for proxying request and returning formatted JSON, one for proxying and returning response headers. Various HTTP methods. URL forwarding to a specified URL — defaults to HTTPBin. Quick example: doing our basic HTTP client stuff. Hopefully gives users a scaffold to start with TypeScript in a really standard, understandable way.
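As a rough illustration of the headers endpoint, a helper along these lines could shape a fetched Response into formatted JSON (names and structure are assumptions, not the template's actual code):

```typescript
// Hypothetical helper: fold a Response's status and headers into a plain
// object and pretty-print it, as a header-proxying endpoint might do.
function headersAsJson(res: Response): string {
  const headers: Record<string, string> = {};
  res.headers.forEach((value, key) => {
    headers[key] = value;
  });
  return JSON.stringify({ status: res.status, headers }, null, 2);
}
```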
Bailey Hayes 10:34
Will you talk about the difference between templates and examples?
Eric 10:41
Templates are meant to be robust scaffolded starting points for developers — a utility for a developer. Examples might be more fleshed out — a flashy demonstration of a particular combination of functionalities that shows off what you can do with wasmCloud. But templates are designed for developers to get their hooks in and start doing meaningful work.
Bailey Hayes 11:22
Eric has added this new templates directory as part of his PR to our TypeScript repo. We need to do the same for our Go repo, and Lucas has been working on expanding Rust templates and examples in the monorepo. The monorepo is a Cargo workspace, so only Cargo-based examples and templates live there. We also aim for parity: if there's a template in TypeScript, there will be a template with the same name in Rust — common names, so you can guess what they'll be without having to look them up.
wash v2 RC7 is out — and Eric revved all our docs. Please give us feedback if the docs aren't lining up with what you expect. Eric going through the docs revving them for RC6 surfaced some bugs that we needed to fix — those are in. Now it looks like what we want. We're feature complete, so we're entering hardening, stabilization, and improving developer ergonomics by providing templates and examples.
You might have seen Victor's been revving wasmCloud v1 over on the wasmcloud/wasmcloud side of the world. Until we were ready to cut v2, we didn't want to rock those two views of the world. Right now Victor is cutting what we think is basically the last v1 wash release. Once that happens we're ready to start the reconciliation effort — probably next week. What that looks like: we move the body of what's in wasmcloud/wasmcloud v1 to its own repo, take what's currently in the wash repo and put that in wasmcloud/wasmcloud, redo the CI, republish, and then we might launch. I'll make a call in Slack once we start this reconciliation, because we'll hit pause on cutting release candidates and v1 until we finish.
Bailey Hayes 15:04
Just "runtime" — to answer Lucas's question in chat, I don't think we'll call it "wash runtime."
ossfellow 15:14
That didn't work. Engine, runtime, something like that. Because for those of us who've been using the platform for a while, wash is always synonymous with the CLI — it sounds odd when we say "wash runtime."
Bailey Hayes 15:40
Makes sense. At one point, wash the CLI was also going to always have its own runtime for dev and plugins. Where we landed: a consolidated runtime crate — and that could also be a great name. I want to highlight the issue of the week — late-breaking, but I put this in two weeks ago and Jeremy started working on an implementation. There's a draft PR up — there's more he wants to change. Jeremy, want to say anything about it?
Jeremy Fleitz 16:31
I've been working on this for the past week — looking forward to getting it ready for a demo next wasmCloud Wednesday. It basically provides a way for your wash component to do OpenTelemetry on the inside — the calls go down to the wash-runtime host, which then uses the exact same connection for your telemetry.
Bailey Hayes 17:00
Hey Jeremy, do I have to use wasi-otel to do OTel on the host at all? Like if I'm just writing a wasi-http component, do I have to use wasi-otel?
Jeremy Fleitz 17:16
No, you don't. By default it's off, because OpenTelemetry can be noisy.
Bailey Hayes 17:24
When it's really useful is if you want to enrich the context of the OTel that you're emitting. We're able to do a lot with the built-in plugins into our hosts and have OTel basically wired up. But if you want to do special things inside your component, that's when you'd import wasi-otel.
Aditya has a question in chat — are we planning on including gRPC support in the v2 release, or for the next point release? Liam and Lucas are basically running point on reviewing PRs and getting things landed. Where we're at: it could definitely be an additive, non-breaking change, so it's a good candidate for being scoped out of MVP — but it's also something we definitely want in.
Lucas Fontes 18:35
To add — the overall structure of the pull request is good. It's really that it drifted from what we had before. One of the challenges was related to the fixtures or stuff we were bringing in related to Wasmtime/WASI. If we can timebox that to next week and figure it out, let's get it on to v2. The issue with Wasmtime/WASI is going to take longer to resolve — we'll either wait or bite the bullet and implement with the test infrastructure we have. So yes, we want gRPC there, would love to have it for v2 proper. But also, if things get too complicated, it's okay to come in a point release.
Bailey Hayes 20:06
Probably sounds like we need some pairing time to see what it would take. I'd love to give a WASI update. Yesterday I released WASI 0.2.10, which could possibly be the last WASI P2 release. Later this week I'm planning to cut the next WASI P3 release candidate.
Over the past month, the WASI P3 RC from January 6 has rolled out to a bunch of places — wasi-sdk, Wasmtime (it's in the latest Wasmtime release), all over. We've revved the toolchain to work with that. That's the first RC that included basically the completed component model ABI. We're not expecting more changes on that front. Once I cut the RC this week, that might be the one — might not be. We're not going to make changes at the ABI level. There might be tweaks at the WASI interface level for ergonomics around async and features — right now we're iterating in the file system space. Once that's done, we're looking at a vote as early as March.
Without further ado — Elizabeth, who made a pretty cool demo on top of the release that went out in early January.
Elizabeth Gilbert 22:23
This is the repo — feel free to take a look. What it's doing is something called component interposition. Taking a service running on WASI P3 and interposing this middleware between the HTTP call and hitting the service.
Originally an incoming HTTP request would just hit the service and the service would respond. What we're doing is interposing middleware — when this request comes in, it hits the middleware first, which continues on into the service call, and then flows back out to the middleware, and then out to respond.
I've got this working with multiple middlewares: hit your first middleware, your second, on into your Nth, then hit your service, then flow back out the same way. Like an onion — M1 is the outermost layer going into the service at the core, and falling back out.
A lot of details in the repo to help solidify what's going on at the low level — digs into the WAT if you're curious about that level of detail. Not everyone needs that, but if you want to do more complex scenarios, this will be very helpful because it contextualizes the WAT of not just the service but also the middleware, why the world looks the way it does, and what the full composition WAT looks like.
I have a really basic script — if you run it, it builds your middleware, builds your service, does the composition, and runs the composed component. So this single-middleware case: "entered the middleware," then it hit the service, then flowed back out. Looking at the code — this is the HTTP handler. It logs entering the middleware, then hands the request directly to whatever has been hooked up, which in this case is the service. But note the middleware doesn't know that — it's just passing along to the next handler in the chain, which we've hooked up to the service. The response flows back out the same way.
For multiple middlewares, you switch to multi-level tiering — three different middlewares (A, B, C). They do really basic things just to demonstrate. Now you have the onion: outermost A → B → C → service → flow back out.
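The onion shape can be sketched in plain TypeScript — not the actual component composition, just the call pattern it produces (all names here are illustrative):

```typescript
// A handler takes a request and returns a response; a middleware wraps the
// "next" handler — mirroring the WIT worlds, but over plain functions.
type Handler = (req: string) => string;
type Middleware = (next: Handler) => Handler;

const service: Handler = (req) => `service(${req})`;

// Each middleware marks the way in and the way out of the onion.
const mw = (name: string): Middleware => (next) => (req) =>
  `${name}->${next(req)}<-${name}`;

// Outermost middleware first: compose([A, B, C], svc) yields A(B(C(svc))).
function compose(middlewares: Middleware[], svc: Handler): Handler {
  return middlewares.reduceRight((inner, m) => m(inner), svc);
}

const chained = compose([mw("A"), mw("B"), mw("C")], service);
// chained("req") -> "A->B->C->service(req)<-C<-B<-A"
```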
For the next iteration: I want to splice middleware between two services that have been configured to communicate directly with each other. So you have service A directly passing information to service B — through the HTTP interface, but not over a network; it's literally calling into service B's exported component functions. If that's happening, this structure doesn't directly work, because we're composing middleware with a single service that has the service exports. So you'd have to splice middleware between two services that have already been composed together. That's what I'm going to be doing next.
Frank Schaffa 28:25
This reminds me of AOP — Aspect-Oriented Programming.
Elizabeth Gilbert 28:38
Exactly — what are they called, point cuts. The HTTP call is the point cut you're interposing on in AOP.
Bailey Hayes 28:51
Similar to a lot of approaches. We're calling it service chaining for what it's worth. This is a specific specialization where we have a world that says "I'm trying to be a middleware here," and another that says "I'm the client," and another that says "I'm the service, the main app." The pattern of being able to take wasi-http with wasi-http with wasi-http — that's an HTTP service chain. And because we're calling our WIT interfaces, we're not dropping down into a network stack.
Frank Schaffa 29:30
We were thinking from a use-cases point of view — how to inject debugging information or tracing so it goes back into OTel. This is nice.
Elizabeth Gilbert 29:53
That's where we wanted to go. This is the basic demonstration that we can do the thing — and now we can do really interesting things with this. Observability, security protocols. You can do whatever you want because you have full control over incoming and outgoing requests.
Frank Schaffa 30:17
Are you going to spec out failure modes?
Elizabeth Gilbert 30:26
There would need to be some type of guarantee that whatever the middlewares are doing, they're adhering to whatever modes are guaranteed. This would be more of a platform-type thing, but it could also be done in user space.
Bailey Hayes 30:44
In this case we actually pass around a result — that's where you'd pass whether an error occurred in the middleware chain. But I imagine you're also probably thinking like I was: this can replace what we do with sidecars. It could do a lot of other things — circuit-breaker injection, basically anything you'd today put in a heavyweight Envoy sidecar. You could stuff it into this type of interface, which would be really powerful.
Frank Schaffa 31:20
From a guardrails point of view — in BPF, a function cannot live for more than a certain period of time. Is there anything here that guarantees nothing takes over the thread?
Bailey Hayes 31:38
A lot of that control has to happen at the host layer, effectively, and all of that will be in our wasi-http plugin implementation in our host.
Frank Schaffa 31:58
Are you thinking in terms of a language for how you compose those middlewares?
Bailey Hayes 32:05
Yeah — that's wac.
Elizabeth Gilbert 32:10
It's using wac. A DSL for composing components. Pretty readable. You're instantiating your service component, then this exports what is imported by the middleware. Looking at the middleware WIT — middleware imports a handler and exports a handler. That's the secret sauce. It's importing whatever the downstream call is. When it exports the handler, it's saying "I'm also exporting a handle function." That's how it does the service-level chaining.
In the wac: instantiating the service, plugging the service's export handler into the middleware's import. We're chaining them together, then exporting the handler from this new chained middleware instance. For multiple, it does that N times — take the service, plug into innermost middleware, middle, outermost, then export from the outer. That's why A comes first: enter A → B → C → service → flow back out.
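In wac terms, the single-middleware case is roughly this shape (an illustrative sketch with made-up package names — see the repo for the real composition file):

```wac
// Hypothetical composition: plug the service's exported handler into the
// middleware's imported handler, then re-export from the middleware.
package demo:composed;

let service = new demo:service { ... };
let middleware = new demo:middleware {
  "wasi:http/incoming-handler": service["wasi:http/incoming-handler"],
  ...
};

export middleware["wasi:http/incoming-handler"];
```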
Frank Schaffa 34:15
Could have a graphical tool for composing this.
Elizabeth Gilbert 34:20
Yeah, you could do it graphically. You can also generate this — from a user standpoint, you don't really need to know these low-level details, because it's just chaining services together. You can make this configurable, then we generate the wac and run the wac on the component so it just works magically.
Bailey Hayes 34:48
One of the goals of wac itself is to be machine-readable and writable. You can imagine generating this. Components are declarative, very clear about how they're linked. If somebody was really cool, they could make a tool that creates an ASCII mermaid diagram of how components are linked together. Would make me very, very happy.
Awesome — thank you, Elizabeth. We also have somebody else who wants to jump in and share a project. Marcin, are you able to share?
Marcin Ziółkowski 36:16
I have never used Zoom in my life before — this is new. Thanks for the opportunity. What I'm doing is building a cAdvisor for WebAssembly. When I first heard about WebAssembly a year ago as part of my PhD, I heard wild claims — Wasm is faster, has a smaller memory footprint. I wanted to test that.
What we did in Orange Innovation Poland: I compiled some of the services we use internally to WebAssembly, and I wanted to test them. I very quickly realized there wasn't enough — I was looking for a cAdvisor equivalent. I wanted to run my services and see how much resource consumption they actually need, compared to containers, or even bare metal. I didn't find anything like that.
If you start with the runtimes — Wasmtime will point you toward perf. That's good, but not what I wanted; I didn't want to use a wrapper. I wanted the metrics independently of the lifecycle of the service. wasmCloud mentions metrics, but these don't point toward resource consumption. WasmEdge does something called gas, but that's more or less a CPU-instructions equivalent.
So I did what I could. cAdvisor reads from the /sys/fs/cgroup directories. That's good for containers — they isolate in cgroups in their own namespace. Wasm doesn't really do that. What that means: right now, what I do in my article and what we do in Orange internally is what ps does — take the PID of a running Wasm instance (whatever process contains the Wasmtime/WasmEdge runtime), then get CPU and memory usage and other stats for the process. That includes both the runtime and the Wasm instance.
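That ps-style approach can be sketched directly against /proc (Linux-only; a hedged illustration, with field positions taken from the proc(5) man page):

```typescript
import { readFileSync } from "node:fs";

// Read CPU ticks and resident pages for a PID the way ps does — note this
// covers the whole runtime process (Wasmtime/WasmEdge plus the instance).
function processStats(pid: number) {
  const stat = readFileSync(`/proc/${pid}/stat`, "utf8");
  // comm (field 2) may contain spaces, so split after the closing paren;
  // the first field after it is state (field 3).
  const fields = stat.slice(stat.lastIndexOf(")") + 2).trim().split(" ");
  return {
    utimeTicks: Number(fields[11]), // field 14: user-mode CPU ticks
    stimeTicks: Number(fields[12]), // field 15: kernel-mode CPU ticks
    rssPages: Number(fields[21]),   // field 24: resident set size, in pages
  };
}
```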
That's a start, but it's not what I want — I'd like to read only the resource usage of the instance itself. I've looked at eBPF, I've looked at a few different options, but nothing is as clean as I was hoping. I'm looking for a better information source.
Bailey Hayes 39:49
Elizabeth — I'm curious whether Ben has anything in his space for measuring with Wizard as well. Where my head's at for this type of problem: very much OTel, with a convention around semantic conventions in OpenTelemetry. You're looking at v1; we've been very close to launching v2. Last week (or two weeks ago) Lucas demoed OTel support within our host. That has some of this — it doesn't have what you're looking for yet, but it's definitely something we want to add.
There's also the weird outcome — Aditya called it: the observer effect. When you start measuring something, it gets slower; you don't actually get real data. Super true if you do gas instrumentation. The folks using gas are getting throttled because they're using it to deal with money — the crypto environment is where gas is getting used, and they have to be very correct and deterministic. Gas is also implemented in different ways in different runtimes. Just measuring off gas in Wasmtime probably won't give you the same result as in WasmEdge. If you did CPU instruction counting from just Wasm instruction calls, maybe those should be the same.
Elizabeth Gilbert 41:56
I'm looking at Wizard — the CLI doesn't say what the different metrics are. Wizard is a specific Wasm engine — a research engine for WebAssembly. Newer ideas are implemented to experiment with them. It has nice capabilities for dynamic instrumentation and metrics around the engine for research reasons. I'm trying to find the different metrics available. Let me look in the source code.
Bailey Hayes 42:46
Also kind of cool to connect you all. Marcin, Elizabeth is also a PhD student.
Marcin Ziółkowski 43:00
Lucky to run into you — I've watched one of your talks. cAdvisor was merged into kubelet, and wasmCloud hosts are similar to kubelet in some ways. Going back to your talk Bailey about the density of services you could run with Wasm and Kubernetes — I'm still needing that killer measurement data to show for example inside Orange, for our guys to be like "yeah sure, let's invest X months into compiling everything we have into Wasm." I'm surprised this hasn't been done yet.
Lucas Fontes 44:13
There's plenty of good stuff here. The angle of looking at WebAssembly and saying "I want to be the cAdvisor of WebAssembly" — the fastest path would be to make the WebAssembly runtime behave like a normal cgroup, then use the same paths you already have in cAdvisor. The challenge: most runtimes are heavily async, meaning multi-threaded, meaning they cannot pin each thread to a specific cgroup and guarantee it will live in just that zone. Someone has documented these challenges. The first thing you have to do is make the runtime able to pin a component to a specific OS thread — that's what that work is solving. Once you do that, adding the thread to a cgroup is trivial.
This could be put together. If we get a transcript of this call and tell Claude the things I said in the past 30 seconds, it would be able to get that put together. It's not really that complex.
The other angle — does WebAssembly even need something like C-advisor? Everything we do in WebAssembly is already starting with limits — they have limits baked in the ABI itself. When we give a component a section of memory, we tell it "here's your memory area, you can grow this up to 1 MB" — if you go above that, it doesn't matter, I'm not giving it to you. A bit of the disconnect: all this information is not being surfaced in OpenTelemetry, in metrics, or other layers, so knowing what's happening inside the runtime is challenging.
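That baked-in limit is easy to see with a plain WebAssembly.Memory (a standalone sketch; 16 pages of 64 KiB is the 1 MB ceiling from the example above):

```typescript
// A Wasm linear memory declared with a maximum simply cannot grow past it:
// the engine refuses, no external resource controller involved.
const mem = new WebAssembly.Memory({ initial: 1, maximum: 16 }); // 64 KiB pages
mem.grow(15); // fine — now at the 16-page (1 MiB) maximum

let denied = false;
try {
  mem.grow(1); // would exceed the declared maximum
} catch {
  denied = true; // the engine throws rather than hand over more memory
}
```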
With wasmCloud we're addressing a good chunk of those challenges. We're not yet addressing "this request used three megs of memory." Getting to where we can say every single HTTP request takes this many CPU cycles and this much memory consumption — we have all the bits in place. With WASI P3 it's going to be easier to get these things more mainstream. Thanks for coming to my TED talk.
Marcin Ziółkowski 47:28
For our guys in Orange, that was something we needed. The linear memory model means you don't really need to read memory usage, since you can put memory limits in place — but sustainability and similar-ish performance are things we found in our article. I understand this might not be the most important part, but it's something I've heard very often when asked about WebAssembly.
Lucas Fontes 48:21
One PR we saw pass by this week is totally related to what you're talking about: if you deploy an HTTP server in WebAssembly today to wasmCloud v1 and then 10 people deploy a copy of that same OCI artifact, you're going to have the same component loaded in memory 10 different times — each copy separately parsed and instantiated for its user.
With wasmCloud v2 we keep track of the artifact being loaded in memory. If we have 10 people requesting the exact same OCI image, we only load it into memory once, and all the instances serving HTTP requests come from that single loaded copy. The penalty for homogeneous multi-tenancy is very, very low. So a lot of the tests here would be interesting to rerun against wasmCloud v2 — and we're pretty close there too.
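The sharing described here can be sketched as a cache keyed by artifact digest (illustrative only — not wasmCloud's actual implementation):

```typescript
// Hypothetical sketch: N deployments of the same OCI digest share one
// loaded module; instances are cheap handles onto that single copy.
class ArtifactCache<M> {
  private modules = new Map<string, M>();
  loads = 0; // how many times we actually paid the load/parse cost

  instantiate(digest: string, load: () => M): M {
    let module = this.modules.get(digest);
    if (module === undefined) {
      module = load();
      this.loads++;
      this.modules.set(digest, module);
    }
    return module; // every instance is backed by the same loaded module
  }
}
```

Ten deployments of the same digest then pay the parse/load cost once — the "very low penalty for homogeneous multi-tenancy."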
Marcin Ziółkowski 49:48
I was looking for a way to raise my hand on Zoom. You mentioned — what's the name of that? Instance sharing?
Lucas Fontes 49:59
I'll drop it here in chat in a second.
Frank Schaffa 50:11
One tool — I don't remember the name — ran on eBPF. It would stack up over time all the different calls. You could see everything going on in terms of memory usage or CPU, and it would layer from when you call a function to all the way coming back. There were eBPF tools to do this.
Lucas Fontes 50:53
I think that's where Wasmtime was suggesting to go use perf. Which is okay for developing Wasmtime itself but not really for end users — we're looking for something more like cgroups, OTel, Prometheus metrics, that kind of stuff.
Frank Schaffa 51:21
And anything you can capture from eBPF.
Bailey Hayes 51:26
That'd be more for the network side of the world in this scenario.
Frank Schaffa 51:33
With eBPF you can get all the metrics — not just network. You can get all the events.
Bailey Hayes 51:44
You run into the same process pinning problem — we'd have to solve that to make that useful even for eBPF.
Liam Randall 51:54
Frank, I liked a tool we built when I was still at Capital One — it's sort of abandoned by now, but it did a good job of surfacing the type of metadata you get at the eBPF layer, which is primarily syscalls. Wasm is going to be sitting well above this, so I don't know that you can granularly get the information there. Browse the screenshots — distributed syscall tracing across fleets of Kubernetes servers. Linked in chat: swoll. I like the little swoll gopher. Pretty dope.
Bailey Hayes 52:41
Another note, Marcin — like Lucas said, you're interested at the higher level, so the cgroup or OTel path is probably the best one. There are microbenchmarks in basically all the main WebAssembly runtimes — that's how they granularly measure each of these things. If you search for "microbenchmark" you'll see quite a bit. It helps paint a more granular picture — things get fuzzy once you throw a network into the problem.
Marcin Ziółkowski 53:31
All right, I have notes, I have links. I am satisfied. I'll keep stabbing at this. I might be back in a week or two to see if I get anywhere. Thanks for your time.
Bailey Hayes 53:49
Thank you for coming and asking. Elizabeth was asking for your email, Marcin — DM that to her. You're both on our wasmCloud Slack as another option.
Liam Randall 54:01
I'll cross-connect you guys via email right now.
Frank Schaffa 54:30
From a documentation point of view — when is the latest documentation going to be available for the Kubernetes side? So I can easily go through and not have to debug much?
Bailey Hayes 54:54
I think it's all out. Eric, correct me if I'm wrong. He updated the deployment guide to include the gateway change.
Eric 55:06
Yes and no. Because we're doing example updates and revs right now, anything drawing on an example — including spots in the Dev Guide — will be a little shaky until that process is complete. The installation page, everything there should be completely good. All assets describing concepts, anything giving a sense of the architecture or features, should be up to date. Anything touching an example could be shaky this week as we get those all revved and in some cases moving them between repos.
Frank Schaffa 55:55
The performance tool I was looking for is called flame graph.
Liam Randall 56:10
Roman presented some flame graphs about Wasm before — although I can't remember if it was on the community meeting or internal.
Bailey Hayes 56:26
We get flame graphs out of the microbenchmarks I was talking about last time. If you're using Wasmtime performance testing suites, it's there. I also believe we have it in the Bytecode Alliance Sightglass project — but that's Wasmtime-specific.
Marcin Ziółkowski 56:52
Not sure if I have it, but it doesn't necessarily have to be cross-runtime for me. Not a requirement from a research standpoint.
Bailey Hayes 57:04
Then definitely check out Sightglass — I just dropped a link. That's basically where we put all our benchmarking suite and tooling for both Wasmtime and Cranelift. We're on a monthly release cadence, and we cut about two weeks before the actual release goes out — that's when it undergoes extensive, continuous fuzz testing for two weeks, and it also runs through the benchmarking suite. A lot of the folks who created this also worked on Firefox — their gold standard is the "Are We Fast Yet?" web page. Flame graphs galore.
Bailey Hayes 57:45
Okay, well — thank you everybody. Appreciate you coming. Next week I think we'll have even more things to show because we'll have lots of templates probably by then. See you and have a good week. Bye.