
Transcript: Q2 Roadmap Planning - Componentize the World, WASI P3, Multipath Routing & MCP Sandboxing


wasmCloud Weekly Community Call — Wed, April 8, 2026 · 59m 38s

Speakers: Bailey Hayes, Jeremy Fleitz, Frank Schaffa, Aditya, Yordis Prieto, Liam Randall


Full Transcript

Bailey Hayes 02:49

I like to make sure folks can get to this link. I shared it out of my own Excalidraw Plus. Everybody should have access and you shouldn't have to log in. Please let me know if it works for you or if I need to recreate it in plain Excalidraw.

Jeremy Fleitz 03:21

I was able to connect through incognito, not logged in. Perfect.

Bailey Hayes 03:25

Okay, nice — I'll just go ahead and share this screen.

Bailey Hayes 03:39

If I jump over to this tab, you can see the whole window. Okay, nice. We start at 1:05 Eastern. So we're here.

Bailey Hayes 04:33

Hello and welcome to the April 8 wasmCloud community call. Today we're going to do our Q2 roadmap planning session. If you haven't seen it, we have a discussion item out here that prompts folks to first think about: what's one thing we should prioritize this quarter — aka by the time we get to July, where do we think we should be, or feasibly could be by July? What's a rough edge that you've been hitting repeatedly that we should fix and pave over, and what would make you want to deploy this in production? Post your comments here throughout this week. The maintainers will work on converting these into tractable issues. Feel free to file your own issues, but we're going to collect all this feedback and create some actionable plans.

First let's collaboratively fill out what high-level themes and things we want to take on for this quarter. It's three months — we can definitely get in new features. We can improve a number of parts of the ecosystem. There are some big rocks that change things — like WASI P3 shipping will happen this quarter. I'll add that as a big one we can talk around. We already have a tracking ticket and some experimental support behind a flag.

I'll highlight a couple of the high-level themes — at least how I'm thinking about it. I think there's a lot to do in the realm of the component ecosystem — make X, Y, and Z compile to a component and provide that as an example, give people a really nice ergonomic development experience with WebAssembly. There's a lot happening in the world today with LLM fuzzing and new techniques to secure both open source software, ways to do it for WebAssembly, ways to do it specifically for wasmCloud. I want to leave a bucket open for that.

There are other things we want to add to our operator and platform experience. There's work Jeremy is doing right now that's in flight. Anything you can think of in terms of observability, runtime, operator Helm charts, customized integration with something in the cloud-native ecosystem — that's a good place to call out. And then separate from that, integrations with other ecosystems — building out support for MCP servers. I've done some experiments with that, but I want to leave it open to ideas of what wasmCloud should be doing to better support all of these new types of workflows.

I'm going to give you about five minutes and I'm going to be quiet. If anybody else wants to talk they can — but I'll be working on adding more themes.

Liam Randall 13:58

I just came in at a really quiet part of the meeting.

Jeremy Fleitz 14:01

Now we've all been waiting here for you.

Bailey Hayes 14:06

Hi Liam. I promised everybody I would be quiet, gave everybody an opportunity to add to our whiteboard, and now we're going to talk through the things that have been added. If you want to add things you still can.

Liam Randall 14:27

I think I see mine on the board already. Thank you.

Bailey Hayes 16:30

How are folks feeling? Need more time, or want to move into discussion?

Jeremy Fleitz 16:47

Let's discuss — seems like quite a bit here.

Bailey Hayes 16:51

I'll start in the top-left in the component ecosystem world. There's this big milestone we're about to reach and it folds into a lot of different things we need to break down. There's additional work in terms of what new things can we do — I'll bring that up as we go through a couple others, because I put in an issue I want to propose now rather than later.

As a project it would be great if we worked together on componentizing the world. If we know there's a type of example we want to build that would be universally helpful, let's track it in wasmCloud and link to the upstream changes we had to make to make that real. For example, I could link to my PRs for launchbadge's sqlx and side-rest. There are a lot of these, and what I really like about them is they're great good-first-issues — they're small, concrete, something anybody can pick up, and they generally benefit everyone.

This one — I tossed out and I wonder if it's helpful or if it should be something external. I'm on the fence on whether we should go this route. We have thought about adding a wash compose command which would take your wash config and help you compose components together. We've been toying around with a YAML-style syntax because that's what we have in our config — but I don't want to prescribe that. Should wasmCloud as a project be interested in trying to provide composition tooling, or is that better served in external projects? And is that something the wash CLI should provide?
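
For the sake of discussion, a hypothetical `wash compose` file along the lines being floated might look like this. Everything here is speculative — the command, the schema, and the names are illustrations of the idea, not an implemented feature:

```yaml
# Hypothetical sketch only: `wash compose` and this schema are a proposal
# under discussion, not something that exists today.
components:
  - name: http-gateway
    source: ghcr.io/example/http-gateway:0.1.0   # hypothetical image
  - name: business-logic
    source: ./build/business_logic.wasm
compose:
  # plug business-logic's exported handler into http-gateway's import
  - plug: business-logic
    into: http-gateway
```

The open question raised here is exactly whether a YAML surface like this belongs in the wash CLI or in an external composition tool.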

Another one — "Are we fast yet?" I really want to have a web page for our wasmCloud runtime that anybody can go to and see where it's at and what metrics we're aiming for next.

Another big thing — I've heard reports from lots of people, including messages in wasmCloud Slack — there's a major desire for a single host to serve multiple different backends for a given API. I have ideas, but understanding the problem is really important.

Jeremy, do you want to speak to this one that I think you added?

Jeremy Fleitz 20:31

Yeah — actually this is more — wasmCloud v2 was extremely community-focused. We just need to get vulnerability scans enabled — Trivy, whatever — inside the repo, so we can show what we're building, especially since we switched over to a UBI minimal base. So we can attest and say "yes, we do have a CVE-free type of image."

Bailey Hayes 21:06

You've added some of these too. Do you want to talk to this section?

Jeremy Fleitz 21:11

This is from EU KubeCon and talking with some other companies using wasmCloud — coming up with some additional integration examples, but really more for getting started. Coming with basic local-stack configs that are kind clusters: here's a kind cluster with Envoy configured as the ingress, using v2 and how it uses Services and EndpointSlices for resolving and routing. Same thing with Traefik, same thing with Istio. These are just minor tweaks that are going to drive some Helm updates. The primary goal: wasmCloud is a quick, easy get-started and go. We want easy getting-started examples out for companies so they can say "this one's close to my type of setup right now — I can now use wasmCloud to extend our Kubernetes cluster." Exact same thing with resolving and routing with OTel examples.

Frank Schaffa 22:20

Would that include envoy gateway?

Jeremy Fleitz 22:27

Yes — there's actually a PR I should have ready for code review again today, but the example in there has the Kubernetes Gateway API as the manifest for integrating with the resolving and routing now exposed through service endpoints that are headless — no cluster IP required.

Bailey Hayes 22:59

To answer your question, Frank — yes, and because it's using the Gateway API you can fulfill it with whatever your gateway is.
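
The pattern Jeremy and Bailey describe — a Gateway API route in front of a headless service — looks roughly like this. All names here are hypothetical, not taken from the PR under review:

```yaml
# Sketch: an HTTPRoute sending traffic to a headless Service
# (clusterIP: None) fronting wasmCloud workloads. Any conformant
# gateway (Envoy Gateway, Traefik, Istio) can fulfill the route.
apiVersion: v1
kind: Service
metadata:
  name: hello-workload
spec:
  clusterIP: None          # headless — endpoints resolved directly
  selector:
    app: hello-workload
  ports:
    - port: 8080
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: hello-route
spec:
  parentRefs:
    - name: my-gateway     # whatever gateway implementation you run
  rules:
    - backendRefs:
        - name: hello-workload
          port: 8080
```

Because the manifest targets the portable Gateway API rather than a specific ingress, only the `parentRefs` binding changes between gateway implementations.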

Frank Schaffa 23:16

From the image scans and so forth — what about image signature, or some level of protection for images?

Bailey Hayes 23:27

There's some stuff here already. Are you thinking on the runtime operator — the container artifacts we push — or are you thinking about WebAssembly components?

Frank Schaffa 23:42

For anything that you're going to be running — that they should be signed so you know where they come from.

Bailey Hayes 23:51

Let me show you one thing. If you build with our actions toolkit — if you have a fairly recent run — every time you build with our setup-wash action and then use this action to publish, it comes with a couple of things built into GitHub where you can get signed attestations and provenance for your build and your component artifact, signed based on the OCI artifact itself. You could propagate this — if your enterprise has an internal Sigstore instance, you could propagate that there as well. This is totally built into GitHub now — at the end of the day it's the built-in Sigstore signing and cosign underneath.
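
A sketch of what such a publish job can look like. The `permissions` block and `actions/attest-build-provenance` are standard GitHub-documented pieces; the wasmCloud action name and the artifact path are assumptions based on the call, not verified against the real workflow:

```yaml
# Sketch of a build-and-attest job. The setup-wash action name and
# subject path are assumptions; attest-build-provenance is GitHub's
# documented attestation action (Sigstore keyless signing underneath).
name: publish-component
on:
  push:
    tags: ["v*"]
permissions:
  id-token: write        # required for Sigstore keyless signing
  attestations: write    # required to store the attestation
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: wasmCloud/setup-wash@v1          # assumed name, per the call
      - run: wash build
      - name: Attest build provenance
        uses: actions/attest-build-provenance@v1
        with:
          subject-path: ./build/component.wasm  # assumed artifact path
```

The published artifact can later be verified with the GitHub CLI's `gh attestation verify`, which matches the install-time validation Bailey describes for wash.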

Frank Schaffa 24:56

I think this should probably be more publicized.

Bailey Hayes 25:02

Happy to — looking for help with that. We have it in our docs and we have a blog post. It's hard to get people super excited about things when they just work, and they just work the way everything else just works. I didn't get a ton of traction on the blog post I wrote about providing this. I love that it just works. The other thing I'm going to highlight: I've been sprinkling this everywhere I go. If you're publishing WIT-only interface packages — yesterday I published the latest version of WASI, and publishing that artifact is also signed and attested. You can see attestations for each of these — let's look at wasi-random. This also exists for those artifacts. When you pull from this, you can also verify the signature and attestation when you're actually cloning it down. This trick is something you can do for your own projects as well — all of this is together in a reusable GitHub workflow. I feel kind of lame at how simple and easy it is to make it just work. It's just the subtle YAML right here.

Frank Schaffa 26:39

There's a lot of boxes to go through, but this is great.

Bailey Hayes 26:43

If we don't already have this — I'll verify — we should make sure we have it for our runtime operator and other images. I know I'm doing it for the wash CLI because I actually use that to validate on install. Instead of having to come up with my own "make sure you have the right public key and download from that," I use GitHub to do the validation and install. If you look at the install script for wash, you can see that.

Frank Schaffa 27:33

I actually added two boxes. One is multi-cluster support.

Bailey Hayes 27:38

Let's talk to it. Tell me about it.

Frank Schaffa 27:41

I just think it would be wonderful if we could have the operator running in multiple clusters and somehow they coordinate things. I don't know yet how effective this will be, but for resiliency and for performance I think this would be good.

Jeremy Fleitz 28:06

Kind of thinking — with Argo CD, when you have multi-clusters with Argo CD running, the runtime operator could be per cluster, but maybe we provide the Argo-CD-type pattern of "here's the destination cluster" and the runtime operator doesn't really know there are other ones out there.

Frank Schaffa 28:34

The trick is not how to deploy in multiple, but actually how to make them integrate and know of each other.

Frank Schaffa 28:50

That's probably a tough one. The other one is simpler — the box below — the performance metrics and results. It would be nice to have something we're always checking to see where we are from a performance point of view.

Bailey Hayes 29:08

I would love this in CI/CD personally, and also as something we're just continuously running. I want to look at that before I cut a release. I want the historical data from this, even. I could be lame and commit JSON to the repo, but maybe we'll use something else. I don't want to be overly prescriptive about the solution.

Matt, I know you probably added these two. One thing I want to highlight: we do have two commands that are standalone, non-Kubernetes hosts. The first sets up a development server — a host preloaded with dev plugins that are really handy for working locally. If you're using blob store, we'll give you a file system, that type of thing. The other is wash host, which is literally just a host that's configurable, and you can point whatever scheduler you want at it. I've definitely heard people ask about non-Kubernetes scheduler support. Right now we show folks "you can just stand up the kube API server and point our scheduler — aka the controller — straight at it." Our controller doesn't need a full Kubernetes stack to run. We provide a Docker Compose example today.

Other things I think are interesting in this domain: microcontroller and tiny-device support. We can now produce a build that's tiny, because Wasmtime can go tiny now, and that's a big deal — a big change even from a year ago.

Moving on to integrations and ecosystem. I talked about this one already, but this came from our wasmCloud Slack — when we were asking for feedback for our roadmap session, folks said they would love to see SQLite. They gave a shout to Litestream if possible, and PGlite, SoLap. It would be nice to enumerate those so we can create our high-value target list.

There's been a ton of work upstream recently on making standard Go work — not TinyGo — and having an ergonomic Bytecode-Alliance-supplied SDK and CLI experience, just like we have with componentize-go, componentize-dotnet, componentize-py. It would be great if we modernized our Go repo in that vein. That'll give us a lot of things that should just start working, versus what we were previously dealing with — problems with the reflect library not being fully ported for TinyGo.

I threw this in because I see this get asked a lot — folks are really confused on when they should use the service versus when they should use a host plugin. Building out examples to show the art of the possible with services is part of this, and another part would be just having that in our FAQ.

Bailey Hayes 33:46

Who was this? Aditya, do you want to talk about it?

Aditya 34:00

I don't have anything to add — but it should be covered in the docs proper, not just the FAQs, because there's a lot of confusion about when to use a service capability versus a host plugin.

Bailey Hayes 34:20

A lot of the confusion also stems from not seeing any examples and numbers behind it. What does the scale look like between these two? Why would I do one or the other? What's the performance difference? A lot of these issues are tied together.

Aditya 34:38

The performance trade-off between keeping a TCP stream — someone mentioned it in the chat recently as well.

Bailey Hayes 34:48

Moving on to this section — suggestions, miscellaneous. Our templates having code in them is maybe sending some people astray when they're getting started. Maybe we have too much in our templates and we should turn that down to be more of a scaffold. This came from someone posting in our Slack — they suggested we consider moving things like our service-tcp template into examples and really fully completing that solution end-to-end. Maybe that template as a standalone is not very useful for folks to get started with. We might do both — have a template that's useful for people and an example that's really powerful and explains it all.

I threw in our obligatory — has anyone done sandboxing on top of wasmCloud? Yordis, if you've got a mic you can also jump in here. You've been working with Trogan AI.

Yordis Prieto 36:13

I joined late — my bad. Has anybody done any of the AI sandbox stuff that's popping up everywhere now, on top of wasmCloud?

Bailey Hayes 36:24

Obligatory — look up sandboxing MCP, just Google that.

Liam Randall 36:33

Oh Bailey, just pull it up real fast. Yeah — sandboxmcp.ai. I'm actually working on our phase two of that right now. So I'm heads-down and muted.

Yordis Prieto 36:47

The reason I'm asking is the Trogan AI stuff I'm doing — it's everything on top of that. Full distributed mode, all over the place. Everything is stateful on top of JetStream. So I need something capable of spawning these sandboxes with all the capabilities into it. Ideally I get to manage which — I'm trying to do it with WASI. I'm trying to figure out if I can make the CLIs — Claude Code, all that stuff — work, and somehow hijack the runtime to be WebAssembly underneath, so I can inject my network policies and all that at that layer.

Bailey Hayes 37:32

I'll tell you what — there is no better sandbox than WebAssembly if you can get it to compile to it. That's the big asterisk.

Frank Schaffa 37:43

In terms of MCP, there's a big push now — either MCP or APIs. CLIs probably will be as powerful as MCPs.

Bailey Hayes 38:03

wasi-cli runs straight on wasmCloud. Let me add a section for that — we could show some WASI CLI tools that are being used in the agentic space. I think wasi-http MCP in a workload is a no-brainer — we already have that.

Yordis Prieto 38:32

Ideally you could showcase the networking policies that people are doing — the filtering. For example, I have one — a proxy that takes HTTP/1 and applies a filter that tokenizes the credentials. I don't know if you use VGS, but it's just a filter — it takes HTTP, you have some code that runs, it takes the tokens and replaces them with the actual secret you want, out and in.
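
A minimal sketch (here in Python) of the substitution mechanics of the tokenizing filter described here: on the way out, placeholder tokens are swapped for real secrets; on the way back in, any secret is swapped back to its token so it never reaches the sandboxed side. The token names and the in-memory map are illustrative assumptions, not from any real product:

```python
# Sketch of a credential-tokenizing HTTP body filter: the sandboxed side
# only ever sees opaque tokens; the filter holds the real secrets.
# Token format and the SECRETS map are illustrative assumptions.

SECRETS = {
    "tok_aws_key": "AKIA-REAL-KEY",   # hypothetical stored secrets
    "tok_gh_pat": "ghp_real_pat",
}

def detokenize(body: str) -> str:
    """Outbound direction: replace placeholder tokens with real secrets."""
    for token, secret in SECRETS.items():
        body = body.replace(token, secret)
    return body

def tokenize(body: str) -> str:
    """Inbound direction: replace any real secret with its opaque token."""
    for token, secret in SECRETS.items():
        body = body.replace(secret, token)
    return body
```

A real deployment would scope tokens per credential and keep the map outside the sandbox; this only shows why the proxy sits on both directions of the stream.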

Bailey Hayes 39:11

There are a couple of layers there. The way Claude Code works: it does process-level sandboxing — namespacing for your file system. If you're on Linux, it's using this thing called Bubblewrap, which is your process-level sandboxing. They have their own BPF layer where they're saying "you can't create a socket" — so they can block syscalls at that level. They've been playing this whack-a-mole game of "well, you can't have your Git credentials" — they add a proxy just for that — but then they don't have a proxy for your AWS S3 API key, and so that whack-a-mole has been continuing.

There are things people already have in place — existing controls they've heavily invested in, especially from an enterprise perspective, especially at scale — and that's just network policies inside K8s, which would be awesome to provide. I think they should totally look at that.
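
For reference, the Kubernetes-native egress control Bailey points at is a standard NetworkPolicy. A sketch with entirely hypothetical names and addresses:

```yaml
# Sketch: deny all egress from agent pods except DNS and one approved
# proxy. Namespace, labels, CIDR, and ports are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: agent-egress-allowlist
  namespace: agents
spec:
  podSelector:
    matchLabels:
      app: agent-sandbox
  policyTypes: ["Egress"]
  egress:
    - to:                        # only the approved egress proxy
        - ipBlock:
            cidr: 10.0.42.7/32
      ports:
        - port: 8080
          protocol: TCP
    - ports:                     # DNS so the pod can still resolve names
        - port: 53
          protocol: UDP
```

The appeal over per-credential proxies is that the deny-by-default posture is declared once at the cluster layer rather than whack-a-moled per secret.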

Yordis Prieto 40:22

Would you do it 100% WebAssembly?

Bailey Hayes 40:31

That's a big magic wand, Yordis. I've been waving that one as hard as I can for a couple years.

Yordis Prieto 40:38

I'm gonna make it work that way personally, even if I bleed out — I'm going for that route. I'm ignoring Linux and everything is WebAssembly 100%.

Bailey Hayes 40:52

While just being able to compile the CLIs is cool — what's powerful about wasi-http in regards to being different from other approaches to this problem is that we can actually do service chaining where it never drops out to the network. All the downsides for why people hate MCP — we just don't have it in WebAssembly. Essentially if you've got an agent loaded in your workload deployment, and it has tools it needs, you put that in your workload deployment, then you would talk to those tools over wasi-http, but those are all composed together in that same workload. Then you're not dropping out on the network stack — we do all of this via streams, and wasi-http, everything is built on the stream API. It's extremely efficient, and now you have a structured way of communicating with any kind of tool.
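
In WIT terms, the chaining Bailey describes can be pictured as two worlds whose http interfaces line up. The package name is hypothetical, and the 0.2.x versions shown are the p2-era ones for illustration (P3 interfaces will differ):

```wit
// Sketch: an agent that calls its tools over wasi:http, and a tool
// that serves them. When the two are composed into one workload, the
// agent's outgoing requests are satisfied in-process over streams —
// they never drop out to the network stack.
package demo:agent-tools;

world tool {
  export wasi:http/incoming-handler@0.2.0;
}

world agent {
  import wasi:http/outgoing-handler@0.2.0;
  export wasi:http/incoming-handler@0.2.0;
}
```

The structured-communication claim falls out of this: every tool speaks the same typed http interface, so the agent needs no per-tool protocol.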

Yordis Prieto 42:04

Another one for you, Bailey. I'm trying to make everything on top of WebAssembly so that the agent gets to code for itself. All I get to do is the control plane of "yeah, sure, I'm going to give you networking or file-system, whatever that means, access." But they just ping me to "yeah, let me do this." They get to code their own stuff and compile/bundle.

Bailey Hayes 42:33

A lot of the policy layer for how we expose things to their interfaces is in the workload deployment spec. So you can look in there and see how we give you the file system. It depends on the infrastructure you're trying to run on. If you have Kubernetes, you have some advantages to offload things like blob storage. But you could go full — if you have all the RAM in the world, you could virtualize all those things with WebAssembly, a virtual file system for a lot of this stuff, and then you're sandboxed.

Yordis Prieto 43:27

I'm trying to go with local LLMs. What I'm trying to do is create an entire gigantic mesh of NATS superclusters, and everybody can just join into it and do some workload. I'm betting on Apple to figure out hardware encryption and other stuff for these things — hopefully by the time I make it work, the security aspect is solved.

Bailey Hayes 43:55

That's basically a classic grid-computing problem. So whatever you use for your grid infrastructure is a challenge there. Okay, Aditya — do you want to talk to that one?

Aditya 44:13

Currently our host HTTP handler is tightly coupled to Wasmtime's default send_request for handling outgoing requests. The initial suggestion was to add an outgoing-request trait and just implement your own outgoing handle function. That was done for, let's say, someone who wants to add their own custom TLS certificate logic. I think it was Pavel. But with the addition of p3 we don't know how that's going to look — I haven't taken a look at the merged p3 code yet. I just wanted to bring this more into the circle.

Bailey Hayes 45:09

I think we need this one. You've run into it, Pavel ran into it, and more people will as well. We have an issue or two filed and maybe one or two PRs. If you don't mind linking those all up so we're not forgetting anything — it seems like we tactically solved the gRPC problems but left a couple of other valuable things on the table. We don't have to do that again.

So everybody gets four dots — your dots. Put them on your favorite thing. Aditya, you added three more items. Talk to them.

Aditya 46:07

The multipath routing for allowing more than one component to export the incoming handler. Hear me out — I worked on a POC where we basically had more than two components in a single workload exporting the wasi-incoming-http-handler, and we gave them their own host interfaces and segregated them based on a path. I know this can be done easily if there's path-based routing at the runtime gateway, which I forgot about. So is there any trade-off between doing that at a higher level versus doing it here? Because if we give components the ability to have their own host interfaces, it could allow them to export the same interface but do something else.

Bailey Hayes 47:32

Let's do it. I keep dancing around it because there are more design issues I need to work through. At a basic level — I think this is where we've always wanted to be as a project. From the very beginning we've always been "we're going to solve this with WebAssembly components." The question was when, not if and why.

Why do we have native plugins? There's always going to be something that has to be done outside the sandbox. It might not necessarily be because it has to — it might be that "this isn't a sandbox thing, it has direct access to the hardware, therefore it must be a host plugin." There are always going to be cases for that. But just as many cases — especially with WASI P3 and cooperative threads — the world is wide open now on what can actually compile to WebAssembly and what should be executed within a sandbox. For my book, if it's a client and that's all it's doing, that should be a WebAssembly component, full stop.

Now if it's a client and it's providing what you expect the host to provide and you want to ship that down into the platform — today you have an option. You can compose that in to your workload. So for the problem Aditya was talking about today, you have a workaround — you could do WebAssembly composition, but you have to do that ahead of time before it's landed on the host. We still want to preserve the declarative principle for all our deployments. If it's not immediately resolvable with our basic resolution, that's going to break outside the declarative principle. You need a wac script or something like it to say "this is how it would line up."
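
As a concrete illustration of the ahead-of-time workaround, a wac script along these lines wires a tool's export into an agent's import before anything lands on the host. Package names and the interface are hypothetical, and the syntax is approximate — check it against the wac documentation:

```
// Approximate wac sketch of ahead-of-time composition (names are
// hypothetical; verify syntax against the wac docs).
package demo:composed;

let tool = new demo:tool { ... };

let agent = new demo:agent {
  // satisfy the agent's import with the tool's matching export
  "demo:tools/invoke": tool["demo:tools/invoke"],
  ...
};

export agent...;
```

For the simple one-plug case, something like `wac plug agent.wasm --plug tool.wasm -o composed.wasm` should achieve the same result without a script — but as noted above, this happens before deployment, which is what strains the declarative principle.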

There are also cases where I just want every component, every workload, to have this — I want everybody to be able to talk to NATS and be able to talk to Redis at the same time for the same key-value store. We've talked about being able to have workloads call host interfaces that are actually components. We've talked about wanting to have multiple backend instances for a given host interface. There's no better solution than WebAssembly component instances for being able to still preserve our multi-tenancy requirement and create different instances for different backends.

There are challenges. From a design perspective, if this is supplied by the host and I still want to preserve that declarative requirement, this needs to be part of the host instantiation process. When the host comes up it needs to be able to say "I provide these interfaces" — and if it doesn't work it needs to fail, because we can't schedule any work there.

There are other challenges around reentrancy, which isn't completely solved in p3 even though I really wanted it to be. I've made a couple different versions of this and played around with it. There's more work to do here to get a complete technical design plan.

Bailey Hayes 51:33

Aditya, you call it the sandwich problem. I'm only going to do this in p3 because I love myself. The sandwich problem in p2 with pollables — for folks listening: if you have a callback table you need one callback table to know who all to call back to. And the sandwich problem in p2 is that maybe multiple people have a callback table, and if you're the meat of the sandwich and you have your own callback table, the host isn't going to know to call you to call your callback. In p3 the concept of who calls who in an asynchronous fashion is pushed down into the runtime, and that resolves that problem.

The reentrancy problem is different — when I invoke a component and I'm invoking it from different components and they're all invoking the same API, I also need to have the right call when I'm coming back out, when each of those have resolved asynchronously. The way it should work is that I should be able to be spawning a sub-task for each invocation. There's some complexity with what we have today with wasi-p3. I think it could be in maybe a 3.x release.

One more I want to throw in is named interfaces. I don't know if you've seen some of the work that I've dropped on that — I want this as part of the ABI.

Bailey Hayes 53:41

It's literally the same thing. This is how I would propose we differentiate between "I want to use the NATS backend" or "I want to use the Redis backend." This, coupled with a host component, would give a really nice way to have extensibility of my runtime without having to create my own wasmCloud host. Sorry, that was a big tangent.

Aditya 54:23

I think that was it. The one about the cron job provider — the service being more complex. Victor mentioned it in a GitHub comment. Right now it's basic — just sleep, wake up, and fire. We need something more capable for people looking to build production-ready systems, instead of implementing their own every single time. The overall community would benefit quite a bit if we did this.

Jeremy Fleitz 55:05

That's a great callout. Are you thinking of aligning more with a typical Kubernetes CronJob? Yep, that makes sense.
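
For context, the Kubernetes CronJob semantics Jeremy alludes to suggest the kinds of knobs a richer cron provider could expose. This is purely a hypothetical sketch — none of these fields exist in the provider today:

```yaml
# Hypothetical cron provider config, borrowing CronJob-style semantics.
schedule: "*/5 * * * *"        # standard cron expression
timezone: "America/New_York"
concurrencyPolicy: Forbid      # Allow | Forbid | Replace overlapping runs
startingDeadlineSeconds: 60    # skip a run missed by more than this window
backoffLimit: 3                # retries before a firing is marked failed
```

Concurrency policy and missed-run deadlines are the pieces a bare sleep-wake-fire loop can't express, which is where most homegrown implementations diverge.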

Bailey Hayes 55:19

Cool. We only have five minutes left — which means five minutes to put your votes in. I'll capture this and work with the other wasmCloud maintainers to put together the full project roadmap. I started on it here — we haven't really brought over things yet. I want this to inform that. Please grab your little dot and vote on your thing. Don't vote multiple times — I know you can. This is the honor system.

Bailey Hayes 58:00

Also, if you realize there's nuance lost when you're trying to figure out which one to vote for — I kind of got frozen on one where I wanted "yes" as the answer — that's also really helpful information. If you want to drop comments on our discussion or directly on the roadmap, I would really appreciate the feedback.

Bailey Hayes 58:35

Did everybody get a chance to vote? I'm going to take a screenshot of this and drop it in the wasmCloud Slack so folks can see, and hopefully we'll get a little more feedback throughout the week. My aim is by the end of the week we've got most of this roadmap filled out, capturing what we think we need to get done this quarter. Thanks everybody for coming in and participating and contributing — because we're going to need you to get all this done. We're at the top of the hour. Thank you, and we'll talk more. See y'all.