
July 9, 2025

Agenda

  • DISCUSSION: wasmCloud Q3 2025 Community Roadmap Planning

Meeting Notes

DISCUSSION: wasmCloud Q3 2025 Community Roadmap Planning

In the quarterly roadmap planning session, we discussed issues captured in the wasmCloud roadmap GitHub project.

Support in-process component to component calls

Currently, these calls go over wRPC, which provides consistency but also incurs a performance hit (on the order of microseconds).

Massoud: Requires more intelligence in the host to determine what kind of call this is, which contradicts our other aim of simplifying the host.

Brooks: That’s fair. I don’t think the host should be making a game-time decision when a function is called or a component is run — I think the host should be told specifically, “You’re running these two components and they’re going to talk to each other in process.” That’s how I feel from a host simplification perspective. We could add a transport field to links.
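For illustration, a minimal Rust sketch of what a link carrying an explicit transport field might look like; every name here is hypothetical, not the host's actual link model:

```rust
// Hypothetical sketch only: a link definition extended with an explicit
// transport field, so the host is told up front how two linked components
// should talk instead of deciding at call time. Names are illustrative.
#[derive(Debug)]
enum Transport {
    InProcess, // components are co-located; call directly in process
    Wrpc,      // route the invocation over wRPC (e.g. via NATS)
}

#[derive(Debug)]
struct Link {
    source_id: String,
    target_id: String,
    wit_namespace: String,
    wit_package: String,
    interfaces: Vec<String>,
    // Set explicitly by the operator/scheduler, never inferred by the host.
    transport: Transport,
}

fn main() {
    let link = Link {
        source_id: "http-handler".into(),
        target_id: "business-logic".into(),
        wit_namespace: "example".into(),
        wit_package: "orders".into(),
        interfaces: vec!["processor".into()],
        transport: Transport::InProcess,
    };
    println!("{link:?}");
}
```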

Remove usage of wascap and nkeys

This is a wasmCloud-specific approach to signing, and overall we’ve found that aligning with broader standards like sigstore is preferable. We’re currently not requiring that components be signed to run on wasmCloud, so this is a little vestigial.

Massoud: Using a more generally compliant framework is not a bad idea.

Taylor: The string format of nkeys may be useful for some people, so we should survey folks using wasmCloud, but overall I’m in favor.

Support configuring a host with shared capability providers

Brooks: Currently we use the application framework from OAM. This works well for a couple of isolated applications, but it's not optimal at scale, since it undermines density.

[image from blog]
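For illustration, a rough Rust sketch of the shared model (all names and image references are made up): the provider is configured once at the host level, and applications reference that single instance rather than each embedding their own.

```rust
// Hypothetical sketch: one provider instance shared by reference across
// applications, instead of each OAM application duplicating its own copy.
use std::collections::HashMap;

struct Provider {
    image_ref: String,
}

struct Host {
    // One instance per provider, shared across applications.
    shared_providers: HashMap<String, Provider>,
}

struct Application<'a> {
    name: String,
    // References into the host's shared set rather than owned copies.
    providers: Vec<&'a Provider>,
}

fn main() {
    let mut host = Host { shared_providers: HashMap::new() };
    host.shared_providers.insert(
        "http-server".to_string(),
        Provider { image_ref: "ghcr.io/example/http-server:1.0.0".to_string() },
    );

    let http = &host.shared_providers["http-server"];
    // Two applications, one provider instance: density is preserved at scale.
    let shop = Application { name: "shop".to_string(), providers: vec![http] };
    let blog = Application { name: "blog".to_string(), providers: vec![http] };
    println!("{} and {} share {}", shop.name, blog.name, shop.providers[0].image_ref);
}
```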

Massoud: Where are we going to capture the link configuration when the capability provider is the source?

Taylor: I think we should reflect on the entire “source”/“target” conceptualization.

Transition the capability provider model into support for wRPC servers

Brooks: Capability providers are both wasmCloud-specific and one of our most frictionful elements. I’d propose that we rethink this concept as a “wRPC server” — any binary, container, component, or application that serves a WIT interface via one of the transports available with wRPC (TCP, NATS, QUIC, UDP, etc.)
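As a hedged sketch of that shape, assuming hypothetical types that are not part of wRPC itself: anything that can dispatch the functions of a WIT interface over a chosen transport would qualify as a "wRPC server."

```rust
// Hypothetical sketch of the proposed "wRPC server" concept. None of these
// names come from wRPC; payloads are opaque bytes for the sake of the sketch.
use std::collections::HashMap;

// Transports wRPC supports today, per the discussion above.
enum WrpcTransport { Tcp, Nats, Quic, Udp }

// A served WIT interface: a fully qualified name plus handlers keyed by
// function name.
struct WrpcServer {
    interface: String, // e.g. "wasi:keyvalue/store"
    transport: WrpcTransport,
    handlers: HashMap<String, fn(&[u8]) -> Vec<u8>>,
}

impl WrpcServer {
    fn dispatch(&self, func: &str, payload: &[u8]) -> Option<Vec<u8>> {
        self.handlers.get(func).map(|h| h(payload))
    }
}

fn main() {
    let mut handlers: HashMap<String, fn(&[u8]) -> Vec<u8>> = HashMap::new();
    handlers.insert("get".into(), |key| {
        format!("value-for-{}", String::from_utf8_lossy(key)).into_bytes()
    });
    let server = WrpcServer {
        interface: "wasi:keyvalue/store".into(),
        transport: WrpcTransport::Nats,
        handlers,
    };
    println!("serving {} over NATS", server.interface);
    let reply = server.dispatch("get", b"color");
    assert_eq!(reply.as_deref(), Some("value-for-color".as_bytes()));
}
```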

Simplify interface maintenance

Brooks: A lot of our maintainer budget and time has gone toward developing standard interfaces and maintaining multiple implementations of those interfaces.

The main issue arises when we consider our support of standard interfaces that haven't yet reached phase 3, like the inclusion of wasi:logging, wasi:keyvalue, wasi:blobstore, and wasi:messaging as built-in runtime interfaces.

Pluggable extensions to the host would allow rapid integration and iteration on upstream in-progress interfaces while allowing us to keep a minimal, stable set that we support by default.
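A minimal Rust sketch of the pluggable-extension idea, assuming a hypothetical plugin trait (nothing here is the host's real API): the host ships a small stable core, and in-progress interfaces register themselves as plugins.

```rust
// Hypothetical sketch of a pluggable host extension model: in-progress
// interfaces (e.g. wasi:messaging) are registered as plugins rather than
// compiled into the host as built-ins. All names are illustrative.
use std::collections::HashMap;

trait InterfacePlugin {
    // The WIT interface this plugin provides, e.g. "wasi:messaging/producer".
    fn interface(&self) -> &'static str;
    fn handle(&self, func: &str, payload: &[u8]) -> Vec<u8>;
}

// Toy plugin that echoes the payload back.
struct EchoMessaging;
impl InterfacePlugin for EchoMessaging {
    fn interface(&self) -> &'static str { "wasi:messaging/producer" }
    fn handle(&self, _func: &str, payload: &[u8]) -> Vec<u8> { payload.to_vec() }
}

struct Host {
    plugins: HashMap<&'static str, Box<dyn InterfacePlugin>>,
}

impl Host {
    fn register(&mut self, plugin: Box<dyn InterfacePlugin>) {
        self.plugins.insert(plugin.interface(), plugin);
    }
    fn invoke(&self, interface: &str, func: &str, payload: &[u8]) -> Option<Vec<u8>> {
        self.plugins.get(interface).map(|p| p.handle(func, payload))
    }
}

fn main() {
    let mut host = Host { plugins: HashMap::new() };
    host.register(Box::new(EchoMessaging));
    let out = host.invoke("wasi:messaging/producer", "send", b"hello");
    assert_eq!(out.as_deref(), Some(&b"hello"[..]));
}
```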

Taylor: wasi:config feels like one we shouldn’t get rid of — we need some sort of config solution.

Brooks: That sounds reasonable. I’d love to think of ways that we can make these plugins, too. The intent is definitely not to take away these capabilities but to decouple them from the host. But I think calling out config as a little different makes sense.

Reducing host responsibilities and API surface

Brooks: Today the host manages providers, reconciles state, and a number of other responsibilities.

I would really love to simplify some of these things — what I would like the host to do is start with some set of config (like the providers it should run or the OTEL endpoint it should connect to) and then sit and wait for a scheduler or orchestrator to tell it what to do.

I think the host should wrap our runtime, include a wRPC API server, and then let a supervisor/scheduler (wadm, Kubernetes, whatever) issue instructions to run, receiving those instructions via a control plane API.
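A rough sketch of that slimmed-down host loop, with a made-up instruction set standing in for the real control plane API:

```rust
// Hypothetical sketch of the slimmed-down host described above: start with
// static config, then only react to control-plane instructions from an
// external scheduler (wadm, Kubernetes, ...). All names are illustrative.
enum Instruction {
    RunComponent { id: String, image_ref: String },
    StopComponent { id: String },
    Shutdown,
}

struct HostConfig {
    otel_endpoint: Option<String>,
}

fn run_host(config: HostConfig, inbox: std::sync::mpsc::Receiver<Instruction>) {
    if let Some(ep) = &config.otel_endpoint {
        println!("exporting telemetry to {ep}");
    }
    // The host does nothing on its own: no reconciliation, no provider
    // management. It just executes what the scheduler tells it to.
    for instruction in inbox {
        match instruction {
            Instruction::RunComponent { id, image_ref } => {
                println!("running component {id} from {image_ref}");
            }
            Instruction::StopComponent { id } => println!("stopping {id}"),
            Instruction::Shutdown => break,
        }
    }
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    tx.send(Instruction::RunComponent {
        id: "echo".into(),
        image_ref: "ghcr.io/example/echo:0.1.0".into(),
    })
    .unwrap();
    tx.send(Instruction::Shutdown).unwrap();
    run_host(HostConfig { otel_endpoint: None }, rx);
}
```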

Lucas: I think it sounds good. Much of the existing behavior is vestigial / pre-wadm.

Intentional distributed networking

Brooks: Seamless distributed networking is a magical aspect of wasmCloud. In distributed systems, magic is a little scarier than it is desirable.

If you bring a component to wasmCloud that imports wasi:keyvalue, your component call is automatically transformed into a distributed NATS request via wRPC. So, instead of a nanosecond in-process call to a host function, it’s subject to transport failure, message loss—all kinds of different issues. Introducing wrpc:error support in our interfaces is key evidence of this, as it provides runtime-level error handling of these transport errors.

To be clear, component networking via wRPC is a superpower of wasmCloud’s distributed model, but it should be used intentionally rather than implicitly.

That distributed-by-default nature is also a huge responsibility. It implicitly ties Wasm component invocation semantics to NATS core request/reply and queue group semantics, which is either awesome or frustrating for users.

I would like to lean further into our ability to inspect Wasm components and determine the correct course for an invocation: in combination with in-process component calls, we should avoid going over a distributed network where possible by making proper scheduling decisions, and make a distributed component-to-component call only as an explicit choice, using a specific interface built for that purpose. This will lead to more predictable and reliable workloads.

Taylor: I’m all on board with this. We could have something like a Rust marker trait, but for WIT, that indicates whether you allow a distributed call or not.
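To make the analogy concrete, here's the idea expressed with an actual Rust marker trait; a WIT equivalent would be a new language construct, and nothing like this exists in WIT today:

```rust
// Illustrative Rust analogy for the suggestion above: a marker trait that
// records, at the type level, whether an interface may be invoked over a
// distributed transport. Implemented only by interfaces that opt in.
trait DistributedCallAllowed {}

struct KeyvalueStore;  // in-process only; never opts in
struct OrderProcessor; // explicitly opts in to distributed invocation

impl DistributedCallAllowed for OrderProcessor {}

// Only compiles for interfaces that carry the marker, so a distributed
// call can never happen implicitly.
fn invoke_over_wrpc<T: DistributedCallAllowed>(_iface: &T) {
    println!("routing invocation over the network");
}

fn main() {
    let orders = OrderProcessor;
    let kv = KeyvalueStore;
    invoke_over_wrpc(&orders);
    // invoke_over_wrpc(&kv); // compile error: KeyvalueStore never opted in
    let _ = kv;
}
```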

Call to action: Leave your thoughts!

If you have thoughts on any of these issues, make sure to leave them in the wasmCloud roadmap GitHub project.
