Transcript: Template Refactor, Workload Config & Environment Variables, WASI TLS
Bailey Hayes 00:01
Hey, I just ate the strangest lunch. My international market had a Korean self-heating lunchbox that was, you know, supposed to be like mala, spicy and stuff. Yeah, you put the little pack in it. Have you done that? I felt like I was — I've done it with an MRE, oh yeah, military stuff. I mean, this was good, although super spicy. And I just inhaled it so that I would be done before the call. And now I'm like, my eyes are sweating.
Colin Murphy 00:47
Brooks was the same — Brooks and Taylor once took me out, and they were just like, eating the peppery hot pot.
Bailey Hayes 00:55
Yeah, yeah. That's my favorite thing.
Colin Murphy 00:59
North Carolina thing. Oh man, slash Utah thing, wasmCloud thing.
Bailey Hayes 01:28
Well, we start at 1:05, let me go ahead and share the agenda. Hey, Aditya. Do you have any items you want to go through today? I just tossed up the agenda just 30 minutes ago, but we can adjust.
Aditya 01:52
Let me take a look at the agenda.
Bailey Hayes 01:56
Just dropped a link. I think we'll start with Eric talking about his templates refactor. I wanted to talk about the issue that I've been collaborating with Colin on, and then we could run through the roadmap on active items that we've got. I figured we can just always have that one as a placeholder — let's check the status of things. Also, Vance, it's good to see you. I haven't seen you in a while.
Vance Shipley (SigScale) 02:25
All right? Well, it's one o'clock in the morning, so — oh my goodness.
Bailey Hayes 02:31
It's not always convenient, but here I am. 12 hours off. Have you tried out wasmCloud v2?
Colin Murphy 02:42
Well, I have finally really tried it out. One thing to try it out, it's another to be like, I need to do something with this — and that's what I ran into. And it worked really well. Actually, no complaints. Very nice. Yeah, I guess, except for the issue that we have.
Bailey Hayes 03:01
Well, it's an improvement issue. You know, that's good stuff. Vance, if you haven't seen last week's call — Colin demoed last week with WebGPU work. I thought that was pretty awesome. That's definitely worth a watch if you haven't seen it.
Liam 03:20
Good. Yeah, I'm having a great time with it. I mean, frankly, the ability to just one-shot, or work with Claude and wasmCloud is just an amazing pattern. And I think it's great. So I'm having an awesome time, especially just building in Rust. I feel like things just work right out of the box, and they're blazing fast.
Colin Murphy 03:51
Yeah, well, it is pretty amazing. I mean, without Claude, that WebGPU thing would not have happened. I can tell you that right now. At least, definitely not in a week.
Aditya 04:05
So yeah, just a quick reminder that our wasmCloud X posts are a bit broken — every single live stream post is dated February 25th. I don't know if the automation is broken; I'm just letting you all know.
Bailey Hayes 04:28
Where are you seeing that? Okay, I'm not very good at that, but I did originally set it up, so I should be able to fix it. It's supposed to be automated, with YouTube as the source for this one. I think I'll figure that out. Yeah, working with X is not a happy place. I will say YouTube's correct — I did check that one today.
Bailey Hayes 05:04
Here's the YouTube link for anybody that maybe wants to help moderate questions. All right, let's get started. I'm going to click the live stream button now.
Bailey Hayes 05:26
Hello and welcome to our wasmCloud community call for April 29, 2026. We have a fairly short agenda today. Eric's been working on refactoring our templates for TypeScript and Rust, so I figure we'll start there. I wanted to have a discussion around an issue that I put in yesterday that I'm collaborating with Colin on, and I want to make sure that we think this is the right approach, and then we can get started on it and bring it into the roadmap. And then afterwards, just generally walk through our roadmap and do kind of a status overview. So first, I'll pass it off to Eric.
Eric 06:02
Thank you. So I just want to take a few minutes to walk through some updates, both in our template space and in our language support documentation. These changes really started because we've got a really nice, fleshed-out TypeScript language guide now. We've got a lot of really solid TypeScript templates that we can start from for a component project. But our TypeScript guide has gotten way out in front of our other languages, so we want to bring the Rust language guide into parity with it. And there are also some new details to add about things like when you want to use wstd versus some other approach.
So we started building out our Rust templates to try to bring them into parity, and in the process of doing that, realized that we had a bit of a naming issue. Because in wasmCloud, we have the service model as one of the elements of a workload. But we were also calling some of our templates "services," and these were not actually wasmCloud services. We were using that in a slightly different way. So Bailey and I talked about it a little bit, and we came to the conclusion that "handler" was probably the better terminology here for where we were using "service" before.
So for our stateless components — kind of what you think of as a standard component in our template files repos — you're going to see we're using the handler terminology now. And we've got a PR up introducing several new handlers to the Rust repo that lives in our monorepo at wasmCloud/wasmCloud. So you'll see those landing pretty soon, and that's going to give you most of the same building blocks that we've got here in the TypeScript repo.
Other than that, pretty soon you will see interface guides landing in the Rust language support section, walking through how to add these different interfaces, how to add these different pieces of functionality to a Rust component as well. So when each of these PRs lands, we should be in a pretty good place with our Rust language support documentation, and we can start thinking about other language ecosystems and also seeing if there are pieces that we don't have in both Rust and TypeScript that folks might like.
I'm curious to just kind of open the floor here. Are there any interface guides that folks would like to see? Are there any major voices that are not into the handler terminology, because that is something we're implementing right now — it's open for discussion. Just curious if anyone has any asks or feedback on our template approach right now.
Bailey Hayes 08:56
And in the chat, I posted a link to where we got the name "handler" as well. In WASI, for wasi-http, the interface is called handler — basically what we're implementing now. It also has a service world you could target to say, "I'm a thing that has this handler." So service could work, and I could still argue for it easily, but I figured disambiguating is probably the move here.
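For reference, the wasi-http interface Bailey mentions looks roughly like this in WIT. This is an approximate sketch from memory of the p3 (0.3.x draft) proposal — the exact names and signatures may differ from the current spec text:

```wit
// Approximate shape of wasi:http's handler interface (p3 draft).
// A component exporting this is a "handler"; a world that wires a
// handler together with its imports is a "service".
interface handler {
  use types.{request, response, error-code};

  // One function: take a request, return a response (or an error).
  handle: func(request: request) -> result<response, error-code>;
}
```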
Also, another big callout — we're getting a lot of PRs and outside community contributions. If folks could give me a hand reviewing some of these, it would be greatly appreciated. Even if you aren't a doc maintainer yet, people coming in and saying "I've looked at it, I tried it out, and this looks great to me" helps me decide how detailed I need to be in bringing it through and getting it approved. We've got a bit of a backlog, so know that contributing by reviewing, even if you're not yet a maintainer, is very welcome and appreciated.
And also on that note, I want to highlight that we have new maintainers. I should have done this last week because I got it all in last week after we did a governance docs rev, bringing in the governance docs that we provide for the CNCF TOC as part of us being an incubating project. We have a MAINTAINERS.md where we keep track of who's in what maintainer group.
And I want to congratulate Aditya for becoming basically a core maintainer now — he's in the wash maintainers list. As well as Pavel, who we've also moved up the contribution ladder as a wash maintainer. Jeremy Fleitz, who's been our key maintainer for the runtime operator, is a Go maintainer. So folks who are coming in and making contributions, know that we've got a contribution ladder, and when you want to move up that ladder, talk to us. We really want to keep growing the project. We're getting a lot of people building on top of it right now, and outside contributors. So more maintainers is better. Thank you, and congrats, y'all.
Any questions for Eric on any of the other kind of docs you want to see before we move on to the next topic?
Bailey Hayes 11:42
All right, I will bring up the GitHub issue that I filed last night. So this is something that we wanted to add as part of v2, but we eliminated it from the scope because we did have a way to work around this by doing a host interface with wasi-config. So you could basically pass in configuration values when you're doing wash dev with it. But it wasn't ideal, and we knew that. But some of these things can be a little tricky.
And so this is something that Colin ran into while he was developing. As he was working on building out the WebGPU demo, he ran into wanting to just pass environment variables — completely reasonable — in his wash dev loop. Now, when you think about it, that can be tricky. You really want to think through the security considerations there, and so we want to make sure we do the right thing.
Now, there's quite a lot of prior art in the space. I captured some of the big ones that are very similar to us. But basically at a high level, the idea inside our wash config YAML — today all we have is basically build and dev. The build step is to feed us the information we need to get a component, and then the dev step is to give us a developer loop where we can spin up a host that's special-purpose for local development, have the right-sized plugins by default for it, but also make it customizable.
Now this proposal says there are basically other things we'd want to pass to describe what we want to run when we're developing this component. I definitely waffled a little on whether I should fold this up underneath dev or make it its own section. Ultimately, what I'm proposing here is to have it in its own section called "workload," and within that section give a simple way for us to pull in config and secrets — from a file, from a literal, or from environment variables — while still keeping it very much in line with what a workload deployment will ultimately be. But I do want to make it really clear that while that is a goal — that I can do kind of a one-to-one translation — this is not specifically Kubernetes. The goal here is very much to keep this contained to a really nice configuration for a developer. You shouldn't have to be deeply familiar with Kubernetes to look at this and know what it means. So that's a very key design goal as well.
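To make the shape of the proposal concrete, here is a hypothetical sketch of what such a `workload` section could look like. All field names here are illustrative assumptions, not the schema from the actual issue:

```yaml
# Hypothetical shape for the proposed "workload" section of the wash
# config YAML — field names are illustrative, not the final schema.
workload:
  config:
    LOG_LEVEL: debug                 # literal value
    REGION:
      fromFile: ./config/region.txt  # file inside the project root
  secrets:
    API_KEY:
      fromEnv: DEMO_API_KEY          # resolved from the developer's shell;
                                     # typed as a secret, never written to
                                     # disk or logs by the tooling
```

The point of typing `secrets` separately from `config` is exactly what comes up later in the discussion: the tool can treat secret values differently (no logging, in-memory only) rather than relying on developers to handle them carefully.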
I have a couple open questions here that I welcome feedback on. I appreciate Aditya already kind of jumping in. I want to make sure that we go ahead and just agree on it, because I think this is something that people need right now. And if we get agreement on this and alignment, then I would say, let's go ahead and add it to the wasmCloud roadmap and get it rolling. So now, questions, concerns — I'm opening the floor.
Colin Murphy 16:25
Yeah, I think that would solve my problem, at least.
Bailey Hayes 16:30
Are you disappointed that I made it so big? Because I think you thought you were going to have a quick little diff, and then I'm like, yo Colin, check this out.
Colin Murphy 16:37
No, no, it's fine. Yeah. I mean, I kind of knew it was thorny. I didn't quite understand how thorny it was or how much there was to it. But let's just say, without Claude, I might have given up. Because I was working in C++, right? Not the usual path. So I had to have that whole host configuration section: a new WIT, a new bindings-generated header file, and then calling that function. Nothing crazy hard, but it would have taken me some time to work through what I had to write if not for AI. You can imagine that many people would run into this very same path. So I think it's important to address.
Bailey Hayes 17:45
Yeah, and in the near term, for sure. Yeah, Frank?
Frank Schaffa 17:51
So I was wondering about secrets. I think even in dev we should be very careful with those things, because you never know when they're going to move up the chain. So I don't know if there's a better method to say: okay, nothing that could compromise us should be in there.
Bailey Hayes 18:17
I mean, honestly, that's why I punted to begin with — why we scoped it out originally — because I want to do the right thing there. I want to highlight — you've probably all heard about the Vercel incident that happened very recently. It happened literally because you could get at environment variables that were passed through by their approach. So I think there are some clear guardrails we can put in that will keep us safe.
There are other approaches I did consider. Some people who run full GitOps shops use SOPS, from Mozilla, and they basically encrypt the secrets and commit them. That's an interesting approach: the file is always there, but people can't decrypt it — you're protecting that API key. If somebody is a full GitOps shop, they might be into that, and maybe we can consider putting that behind a flag. I just felt like it could also bring in too much scope and be asking for trouble.
But letting people put environment variables like a .env file is its own kind of risk. Now this is a super common pattern, right? Everybody has .env and other approaches that a lot of people are familiar with. But when you go and look at the swath of types of CVEs that have happened in the space, committing your .env file is one of the most common ways to let secrets get out.
But I think there are some guardrails we can add that are pretty nice. Aditya called this out: making sure that we're not referencing things outside the project root scope. The second one is making sure the file is in .gitignore. So anytime we're told to load a file, we can check whether it's actually excluded from the Git repo and, if it isn't, say "hey, you might not want to commit this." So there are certainly things we can add. And the reality is that the .env approach is still the expectation. The SOPS approach is still more emerging; people are going to prefer .env, and we're going to have to take quite a bit of care to do that correctly.
Frank Schaffa 20:54
So, Bailey, you mentioned guardrails. How are those guardrails enforced?
Bailey Hayes 21:01
Yeah, it would be for the most part enforced directly in our wash CLI. Adding a check to say that this basically needs to be in .gitignore if we're loading a .env file for a secret. Adding a check to say that it's part of the project root, that we're not loading files or symlinks off on the disk. That's really also just to protect us from all the path traversal attacks nowadays — that's a whole new vector that we have to consider.
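A minimal sketch of the project-root guardrail described here, in Rust since wash is a Rust CLI. This is illustrative only — not wash's actual implementation — and a real check would also canonicalize paths to catch symlinks and verify .gitignore coverage:

```rust
use std::path::{Component, Path, PathBuf};

/// Reject config/secret file references that escape the project root.
/// Illustrative sketch only: normalizes the path lexically, then checks
/// containment. A production check would also canonicalize to resolve
/// symlinks before comparing.
fn resolves_inside_root(root: &Path, reference: &Path) -> bool {
    // Join relative references onto the project root.
    let joined = if reference.is_absolute() {
        reference.to_path_buf()
    } else {
        root.join(reference)
    };
    // Lexically normalize: drop `.`, resolve `..` by popping.
    let mut normalized = PathBuf::new();
    for comp in joined.components() {
        match comp {
            Component::ParentDir => {
                normalized.pop();
            }
            Component::CurDir => {}
            other => normalized.push(other),
        }
    }
    normalized.starts_with(root)
}

fn main() {
    let root = Path::new("/home/dev/myproj");
    // In-root references are fine.
    assert!(resolves_inside_root(root, Path::new(".env")));
    assert!(resolves_inside_root(root, Path::new("config/dev.env")));
    // Path-traversal and absolute escapes are rejected.
    assert!(!resolves_inside_root(root, Path::new("../other/.env")));
    assert!(!resolves_inside_root(root, Path::new("/etc/passwd")));
    println!("all guardrail checks passed");
}
```

The same shape extends naturally to the other checks Bailey mentions: warn when a referenced file is tracked by Git, and keep loaded secret values strictly in memory.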
The other side of it is that we've added the ability to basically bring in these as environment variables, so somebody could use direnv or their own shell and have it exported as an environment variable. And then we will take care within our wash CLI that we never write this to disk. All of this for us is in memory, and we load it. So probably for at least my shop, what I plan to do is to do all of these as references to environment variables that are loaded external to this process.
Frank Schaffa 22:19
Yeah, because even if we have this section for secrets and so forth, nothing prevents somebody from using a different name and still putting that kind of information in. That's why I think guardrails that ask "do you have secrets in there?" will be easily bypassed.
Bailey Hayes 22:42
Well, one of the things that helps us is that we do recognize that this type is of type secret. Now, if they go and put it in the config section as an environment variable, yes, they would just be able to do that. And that's part of the reason why I actually brought secrets into scope for this — specifically to prevent people from YOLO-ing dev, treating things that are secrets as regular environment variables.
So what strengthens us and what makes us better than a lot of these other projects is that we literally are typing it. We're saying, "oh no, this is of type secret" — we're never going to write it to a log or anything else. It is typed as a secret, and we have to treat it as such. Config and environment variables, you treat them that way too. I honestly would probably treat it all as carefully as I could. But yeah, I think there are more guardrails here than some of the more layman tools that exist on the market today.
Frank Schaffa 23:39
Yeah, it would probably be interesting to have at least a link to some page that says, "okay, this is best practice, avoid doing this, that, and so forth," because then it becomes the application developer's responsibility.
Bailey Hayes 24:01
Yeah, yeah, let's go ahead and just call that out.
Eric 24:20
And I'll add that we have a workload security page in PR that I think would be a good candidate for including some of this.
Bailey Hayes 24:32
Cool. Any other comments on this one? All right, well, I think I'm going to leave it open maybe for the rest of today, and then I'll turn it to "ready for work." Just want to give it some time, make sure we get some feedback before we just dive in. But I think hopefully I've got enough information here — basically anybody could pick it up. And I'm happy to code review that.
So let's jump over to our wasmCloud roadmap. We're actively working on quite a lot of things right now. Last week we talked about microbenchmarking. What I've been working on is getting us infrastructure for that, so that we can reliably reproduce what I'm getting locally on Bailey's old gaming machine and Bailey's MacBook. Let's do something a little better than that.
I tried out a tool called Bencher, but the open source tier only allows five minutes, which is not enough time to run my benches. So right now I'm playing around with setting us up on Hetzner and getting us a bare-metal box that we can run this type of work on. I've also now added a Valgrind-based bench, so that we can get more deterministic CPU instruction counts, which I'd want to include in our comprehensive benchmark suite. So again, my area of focus is still very much on the microbenchmark side of this world.
We want to add k6 benchmarks — that would be actually hitting us from a network perspective and bringing in Kubernetes and our controller into scope as well. Nobody's working on that right now, but that would be a cool thing if somebody wanted to work on that.
We're still tracking some of our install work. Last week we talked about the proposal to do automated releases every two weeks on Tuesdays. We did some maintenance around that: we now have winget, the Windows Package Manager, official, so the next time we trigger a release, it should automatically publish straight there. And Dan updated us to build against glibc instead of musl on Linux, so that you can get all the WebGPU goodness people depend on. That's not in the Homebrew version yet, but earlier this week we made the change so that when we publish our next version, in theory, it will all just work and link up the right Linux brew package, which would be glibc.
And then I think it's probably worth — we've got the HTTP client plugin. Aditya, do you mind taking this one?
Aditya 28:05
Yeah, so this is based on a comment that I left — I mentioned it somewhere in chat. Currently, the test binaries get built even when the wash runtime crate is used downstream, and we want to prevent that. My idea was: why not add a ghost crate — non-published, of course — keep it on the dev side and reference it as a dev dependency, so that we don't have to depend on an xtask workflow or a just file. That way a plain cargo build never runs its build.rs; the dev dependency only gets picked up on cargo test.
And I think d-man actually created a pull request for this. But we also had another version which uses xtask, I believe. I just wanted to get opinions on which — I think it's the first and the third. So which one of these do we go for?
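As a sketch of the "ghost crate" layout Aditya describes — crate names and paths here are hypothetical, not the actual wasmCloud repo structure:

```toml
# test-fixtures/Cargo.toml — an unpublished sidecar ("ghost") crate
# whose build.rs compiles the test binaries.
[package]
name = "wash-test-fixtures"   # hypothetical name
version = "0.0.0"
publish = false               # never released to crates.io

# --- separately, in the main crate's Cargo.toml ---
# Referencing the ghost crate only as a dev-dependency means plain
# `cargo build` (and downstream consumers of the runtime crate) never
# compile it or run its build.rs; `cargo test` does.
[dev-dependencies]
wash-test-fixtures = { path = "../test-fixtures" }
```

The trade-off versus an xtask is that this keeps `cargo test` self-sufficient (no extra command to know about), at the cost of a somewhat unusual workspace member.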
Bailey Hayes 29:31
Opinions, folks? I will be straight up — I did take a first blush at this on trying to do it with basically setting cargo config flags, and that just totally falls apart because it's not always clear when it actually needs to be built if you're running cargo bench or all these other things. So I think that approach doesn't work.
So now we're discussing two potential ideas. One would be having it be an external task, but then that means that when people just run cargo test, it might not work — they actually have to know about doing some other thing. Now, I am getting dangerous in my Rust, but I don't feel like I could authoritatively tell you what the idiomatic approach is for this. So I also really want feedback from outside folks on what we think is the right approach here.
It is definitely biting people, especially since we've had three or four people take a swing. Must be important. So I do definitely want us to get a solution quickly. And also, d-man, thanks for being awesome. I actually don't know your real name, but d-man's a pretty sweet handle. And thank you for being awesome and trying a couple different approaches, putting things up so we can decide.
Aditya 30:58
Because the ghost crate approach — having a sidecar crate — is definitely a bit unusual. It's just something that popped into my mind. The appeal is that a dev dependency doesn't get compiled at all downstream or in a plain cargo build, which is why I proposed it. But I'm a bit wary.
Bailey Hayes 31:23
And there is other prior art that I'm aware of. Obviously, we could take a look at exactly what Wasmtime is doing here, where it also just has a ton of test programs that it builds and does it all via build.rs. And I think this basically is the same as us on our current approach. I think I literally told Claude to look at this and kind of follow it. But I want to do the right thing, so please give us feedback. And I'm not going to chime in on it, if you don't mind, Aditya, for like maybe another day-ish. Hopefully other people can.
Aditya 32:07
No, that's fine. I get it.
Bailey Hayes 32:12
Yeah — you had already made a ghost crate, right? I mean, you had made a version of this that was working fairly well when you refactored this last week, I want to say, right?
Aditya 32:30
I think it was the consolidation of everything in the target directory, the parent target directory.
Bailey Hayes 32:37
Yeah, that was nice, and that worked really well. We've had a lot come in. This one — moving to a single virtual workspace — I thought was a great improvement. It's similar, right? Definitely in the same vein. So if folks are chiming in, probably take a look at some of this as prior art as well.
Aditya 33:10
All right, I had something else. It's about WASI TLS, the p3 implementation. Are we including that in our Q2 roadmap, or is that a bit far ahead? I've already thought of picking that up.
Bailey Hayes 33:28
I knew you were into it, so I went ahead and created the issue. So I saw it when I did the Wasmtime 44 upgrade — and you had mentioned it before that they had already added it as something that is already there. Dave Bakker has been working on that on the Wasmtime side. And I think if we put up all the signal flares — experimental, might change, all the things — I'm totally game to bring it in.
I would like to treat it kind of like how we treat WebGPU, where it's very much an opt-in feature, because I expect it to potentially change. And it's not even at phase two yet, which is super early for a WASI proposal — especially since it touches a security context. So I really want to signal-flare all the things around it. But I think it is generally ready for work. If we give it all those caveats, then getting it in, exercising it, and providing implementation feedback for the standard is the best thing we can do as community stewards. What do folks think?
Well, I'm not hearing any dissent, so Aditya, I'll go ahead and add this. Sounds good.
Colin Murphy 34:59
So what would WASI TLS mean for Kubernetes?
Bailey Hayes 35:08
Good question. What would WASI TLS mean for Kubernetes?
Colin Murphy 35:12
I mean, because in Kubernetes, you know, the TLS is terminated at the Ingress, or Istio, or whatever. So if this is like, you're doing your own TLS, would you still use TLS termination? So my question is like, when would you use WASI TLS versus TLS termination?
Bailey Hayes 35:38
Yeah. The goal of WASI TLS is — so, for instance, people have always had kind of a weird time doing Python inside wasmCloud, because they want to use requests, and requests negotiates TLS at its own layer. WASI TLS gives us TLS termination in the host, so that we can support requests and other HTTP libraries that do their own TLS negotiation. So it's complementary to what you're thinking of: in Kubernetes you'd still terminate at the Ingress, but this is about what the guest component can do at the application layer.
Colin Murphy 36:23
That's great. Because I know for a fact that that kind of stuff is like a huge request from people trying to get it to work.
Bailey Hayes 36:36
Yeah, for the Python ecosystem in particular.
Colin Murphy 36:42
Oh great, for the Python ecosystem. Okay, so this is primarily like a Python thing? Or any language where TLS is negotiated by the library?
Bailey Hayes 36:49
Any language ecosystem. But Python and .NET are the ones where you see it more, because a lot of their HTTP client libraries expect to negotiate TLS at the application layer rather than relying on the host to do it. WASI TLS is the component saying, "host, please do TLS for me."
Colin Murphy 37:07
What about Rust? It seems like we'd want that for Rust as well.
Frank Schaffa 37:29
I think the main issue is in Rust, it would be reqwest, right? Or like, any of the HTTP client libraries that do TLS under the hood, like hyper-tls.
Colin Murphy 37:54
Yeah, so the question is, does reqwest compile to WASI now?
Bailey Hayes 38:13
Well, the challenge with reqwest specifically is that it makes a bunch of blocking system calls, whereas the wasi-http adapter approach is all async and non-blocking. So reqwest has a particular challenge there. Other libraries, like the ureq crate, might be better candidates.
Colin Murphy 39:06
So what Rust crates can we now support with WASI TLS?
Bailey Hayes 39:16
Good question. Anybody know? I think right now he's focused on the host implementation; I don't think he's actually taken a swing at any of the guest implementations — because again, it's phase one, so nobody would depend on it right away. He did do it end to end for .NET with WASI TLS, because the .NET team needed it, and there's a pretty comprehensive implementation there — but not the one you care about, Colin.
Colin Murphy 40:02
Well, also, if we're going to say "put your stuff in Rust and then into a Wasm component" — I think reqwest works at the socket layer, blocking and making a bunch of system calls, whereas other libraries don't. So maybe we can get those working.
Bailey Hayes 40:25
I mean, if you look at this API, y'all — you'll see what I mean. It just basically complements an existing socket connection, and it is the world's smallest API surface.
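To give a feel for how small that surface is, here is an approximate sketch of the phase-1 wasi-tls client side, from memory of the proposal. Every name and signature here should be treated as an assumption and checked against the current spec text:

```wit
// Approximate sketch of the wasi-tls client surface (phase 1).
// The core idea: wrap the byte streams of an already-established
// socket connection, and let the host perform the TLS handshake
// and encryption, handing plaintext streams back to the guest.
interface types {
  use wasi:io/streams@0.2.0.{input-stream, output-stream};

  resource client-handshake {
    // Takes the server name plus the raw streams of an existing
    // transport connection (e.g. from wasi-sockets).
    constructor(server-name: string, input: input-stream, output: output-stream);
  }

  resource client-connection;
}
```

This is why Bailey describes it as complementing an existing socket connection: the proposal adds no new networking of its own, only the TLS layering on top of streams the guest already holds.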
Colin Murphy 40:40
Well, I think that's a challenge. It's always been a thing, right? You want to get these HTTP libraries that everybody uses to work. And I think for some of them, you have to do it at the socket layer.
Bailey Hayes 40:57
Well, we're going to need WASI crypto. I think ultimately that's the thing that needs to exist to really enable more of the ecosystem coming online. For what it's worth, I did get a chance to talk to a research group at Université de Montréal, and they have a PhD student there that is interested in actually driving forward and doing more research on WASI crypto in particular, and getting that wrapped with WASI p3.
So I'm going to stay on top of that as best I can. They've been doing great work out of that group for doing hardware-level interfaces. I don't know if you've been following that — they proposed SPI, USB, I²C, etc. So they've got the chops to build out different types of WASI proposals and implementations.
Colin Murphy 41:48
I mean, you definitely would want to be using the hardware-accelerated stuff or like ECC and stuff like that. Yeah, it's a pretty big effort.
Bailey Hayes 42:01
Yeah. Other things that are on the roadmap that are kind of in flight — I just flipped it over this morning because Dan Phillips, new contributor to wasmCloud and new member at Cosmonic, has been taking a look at doing the implementation and design for host components. So just so folks are aware that that's underway.
Frank Schaffa 42:35
For lack of things to do — it would be nice to run regression tests on the wasmCloud docs, because the Kubernetes stuff is still pointing to 2.03. I tried it, just for the sake of it, and it's not working. I didn't debug it, but just FYI: when I did the Helm part, it failed. It shouldn't fail — everything should be open and so forth — but it was saying I was not authorized or authenticated to access something.
Eric 43:29
Yeah, we're right now bringing that up to 2.04. But it's a good thing to bring up. Thanks.
Bailey Hayes 43:42
Eric, do you have any idea off the top of your head what that might have been — why there would be a "not authorized"? I'm not sure we've made any changes in that space.
Eric 43:56
Not off the top of my head. I'll have to follow up.
Jeremy Fleitz 44:01
I think on 2.03 there was a bug — not with the default values, I don't think, but around the Ingress. If you had a flag enabled underneath the Ingress, it could create invalid YAML around the role binding. That was fixed in 2.04. So if we just update the docs to 2.04 and follow the 2.04 steps, that should work.
Bailey Hayes 44:29
Okay, yeah. That's a cool idea, though — using the docs to drive the tests, right? Cargo has that with cargo examples. One other thing worth calling out in that domain: I did put up this PR. Anybody willing to review it — it is big. For a regression that Eric had found, I kind of went all out on doing a full testing pyramid for this flow, just because it was a race condition that was really, really hard to capture.
So I did it — I added basically different types of regression coverage for different layers, and the final one ultimately being a full KinD cluster spinning up with a messaging round trip with TLS enabled. All of that refactor was actually to also make sure that before we ever publish an image to Canary even, we're doing this full validation end to end. But it isn't going through the docs — those are decoupled today.
Aditya 45:41
It's about the host component plugins. Essentially, we're going to take all the native plugins that currently exist for the host and get them to compile to WASI p3 so they support the sandbox model. Won't all the plugins we have today need p3 support built into them? And moreover, if we're going to implement raw WASI p3 plugins, will we need a helper crate with all the correct abstractions and encapsulated types, so that those host components can be built in a reusable manner? Because that's going to take quite a bit of development effort.
Bailey Hayes 46:39
Yeah. I mean, I consider this very much a large t-shirt-sized feature. I actually think it's out of scope to try to convert all of our existing native plugins. We're always going to have to have support for native plugins — WebGPU is a great example where it's taking advantage of hardware-level APIs. That's just never something that we're going to put in a component.
And I think when people are building out their own hosts, there are advantages to being able to have super user access to things, right? You can control and create your own threads and YOLO. So I think that always has to exist.
One of the reasons why we wanted to get started on this early is that a lot of people want to just do effectively clients. A lot of the major requests that we're getting for a lot of these plugins is just "make clients." And part of the reason we were talking about this was also Jeremy's been looking into expanding out clients to other ways to talk to other services, like Vault directly to get secrets, same thing — it's a client.
And not only that, but yeah, we really do want that sandbox. So I think if people are interested in converting them after we get some of this infrastructure, maybe. But I'm actually saying native plugins still continue to exist, including the ones that we already have. And I agree with you — there's going to be a lot of SDK tooling to make your own host component plugin building good.
Aditya 48:17
And also, if we are going to make clients, does it make sense to be making the WIT interfaces for them? If a component can just do the call in itself, does it make sense to offload it to a client component? Because, like the one I've linked in chat — I'm also building the wasmCloud NATS interface. Does it make sense for me to continue making that, or just depend on the client?
Bailey Hayes 48:55
So this question comes up. It's good that we're hashing it out. For the same reason that people actually argue, "do you really need a wasmCloud secrets interface when I could just do this on an environment variable?" And the fact is, yeah, you can just do it on an environment variable. You lose types, but you also lose the ability to extend it and enrich the context.
So if you want types, and you want to build SDKs on top of those rich types, then you probably do want to build your own WIT interface, especially if you're building an ecosystem around it and you — the platform admin — are tightly controlling how these things work. And once you've created the WIT interface, you can also do composition with that interface. So if you're interested in doing some kind of chaining of that API, then that's another good candidate.
However, it comes with downsides, because it's more complex. It may even require you to provide your own glue. While bindings with WebAssembly interface types are pretty good, and we strive really hard to make them feel idiomatic across all the different languages, one of the things we say is that it gives you SDKs for free. But sometimes that's not quite right: I've found in practice that most people generate their SDKs and then add a little bit of niceness glue on top. So there's probably a cost there to consider.
I think my general answer is that NATS is kind of special in this scenario, because you're talking about trying to provide an API that's probably going to have some connection pooling. It's probably going to want to do some stateful operations, some client-side cache, server-side cache — when you start talking about all the affordances that you want to add for JetStream, there's a lot of opportunity to enrich that context. And that's where a WIT API would be really nice to have.
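A typed API along those lines might look something like the WIT sketch below. This is purely illustrative: the package name, operations, and error variants are hypothetical, not an existing wasmCloud interface, but they show where rich types give the host room to add pooling, caching, and JetStream affordances behind a stable surface:

```wit
// Hypothetical WIT sketch for a NATS-style messaging interface.
package example:nats@0.1.0;

interface client {
  record publish-options {
    reply-to: option<string>,
  }

  variant error {
    connection-lost,
    timeout,
    other(string),
  }

  // The host can back these with a pooled, cached connection.
  publish: func(subject: string, body: list<u8>, opts: option<publish-options>) -> result<_, error>;
  request: func(subject: string, body: list<u8>, timeout-ms: u32) -> result<list<u8>, error>;
}

world messaging {
  import client;
}
```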
And if you're just making a client, and the client is HTTP, my opinion is probably not WIT. My opinion is probably just do wasi-http, because we've got streaming built in, backpressure built in, load balancing built in. They probably already have a really nice HTTP SDK for whatever language, provided upstream by somebody else. And because over the past couple of years we've been upstreaming wasi-http into all these language SDKs, things that people are already familiarar with and already building with just work. So I wouldn't want to change that on the guests that are using it.
So if you're doing HTTP, I probably would just do a wasi-http host component. If you're doing basically not that — something that's doing its own socket-level protocol like NATS — then you might want to choose to enrich it with a WIT interface.
Aditya 51:50
Really appreciate it. I'll just keep working on it. Thank you.
Bailey Hayes 51:58
But it's a good call. And I honestly think people are going to choose different things, right? Both can exist. That's kind of the point. And whatever starts working really well for people — that's the direction I think we should evolve.
Bailey Hayes 52:18
Well, I think that's it for the items that I wanted to call out and kind of our roadmap status. Any other topics folks want to cover before we end the call?
Bailey Hayes 52:36
All right. Well, thanks again for the awesome discussion, y'all, and see you next week.