Transcript: wasmCloud v2 Pre-Launch, StarlingMonkey, componentize-js Rewrite & JCO P3 Streams
wasmCloud Weekly Community Call — Wed, March 18, 2026 · 34m
Speakers: Jeremy Fleitz, Victor Adossi, Bailey Hayes, Liam Randall
Full Transcript
Jeremy Fleitz 00:17
Hi Victor. I'm curious how many people show up. Obviously we'll still carry it on, since a lot of people watch it from the live stream or YouTube.
Victor Adossi 00:30
Yeah, the JS meeting also doesn't have anyone on — and a lot of people are at Wasm I/O.
Bailey Hayes 01:08
Hi guys, I'm alive. I passed out for a while, but I'm back. I flew on a plane and didn't sleep — everyone around me was sleeping — and then I slept hard.
Jeremy Fleitz 01:30
That's a good thing — you've been burning the midnight oil for quite some time. I get on a plane, and no matter what seat, I'm out. My wife really gets jealous of me.
Jeremy Fleitz 04:17
Welcome, everybody, to the March 18 wasmCloud Wednesday. Hope everybody's having a great week so far. We have a bit of a short agenda today, largely due to Wasm I/O going on and KubeCon coming up. I had a little bit of an echo here — I had started the playback from the live stream, which was kind of odd. Three main agenda topics: the wasmCloud v2 update, Wasm I/O, and the JS ecosystem.
Liam Randall 05:00
Jeremy, you can just close the window.
Jeremy Fleitz 05:02
I'm gonna hand it over to Victor, and he's gonna be talking about the JS ecosystem. With the conferences going on, Liam is joining us at a nice comfortable roughly 35,000 feet over the ocean — give or take 5,000, since 33,000-34,000 feet is typical for airlines. We also have Bailey with us today from Wasm I/O Barcelona. To start off: the wasmCloud v2 update is still looking great, and we are going to be officially releasing it. Bailey will be cutting the release shortly from Wasm I/O, and once that's done we'll be updating the documentation and promoting it, along with the wasmCloud TypeScript repos.
Liam Randall 05:51
I'm so sorry — can you hear me? Can you close the YouTube window in the background or whatever other window it is, because we can actually hear you twice on the live stream.
Jeremy Fleitz 06:01
Oh my god — that's what it was. Okay, thank you, because that was really annoying. I thought it was something else — yes, thank you. So, the v2 update: yes, that'll be cut very shortly while Bailey is at Wasm I/O. Documentation and everything else will be promoted at that time, so be on the lookout for that. The next thing is Wasm I/O, which is obviously going on in Barcelona today and tomorrow. Bailey, anything you want to add on how it's going?
Bailey Hayes 06:56
I've been running through various types of verification, and I've got a couple more little documentation things I want to land right before we cut it; then I'll be online and available to help. Since it's already after six here, I'm not going to be cutting it today, but I am aiming to cut it in the next two days.
Jeremy Fleitz 07:17
Good deal. After that, starting Monday, we have CNCF WasmCon in Amsterdam, which takes place the day before KubeCon EU. If anybody is going to either of those, please stop by our booth — we'd love to connect. Finally, we're going to hand it over to Victor to talk about updates in the JS ecosystem.
Victor Adossi 07:48
Thanks, Jeremy. A lot of this is already in the community meeting notes, but I'm going to go through it and give everyone a chance to ask questions. Starting from the bottom of the stack — the JS engine — and working up: we've got a new SpiderMonkey release, with a lot of really good work from Tomasz and Till, who have been updating the engine to a newer version of SpiderMonkey.
The version of SpiderMonkey inside StarlingMonkey has been updated — the project here is, of course, StarlingMonkey, and 0.30 recently came out. (The release date shown is incorrect, I think — it was released two days ago, not on the fifth.) A lot of upstream changes have made it in. One of the things Tomasz was able to get done was re-enabling Weval, the optimization layer that had been disabled at the StarlingMonkey level after some changes late last year. Having that back is nice: in exchange for a slightly bigger binary, it can increase performance quite a bit.
Heading up the stack a little — away from the JavaScript engine and into componentize-js land. Joel is working on what is effectively a rewrite of componentize-js; the link is in the notes. It's basically a rewrite in Rust, but in the process it's also switching from the methodology we used before — a bit of C++ splicing bindings into StarlingMonkey's Wasm output — to wit-dylib instead.
For people who don't know what wit-dylib is: it's an abstraction that lets you implement a single header file and end up with a fully working, free-standing guest component that wraps, say, an interpreted language. You can actually use it for compiled languages as well, but the usual case is interpreted languages, since compiled languages can have a direct binding that's more efficient.
If you look at this header file in your language of choice — assuming your language has proper affordances for FFI with a C ABI — and you can implement it, wit-dylib can take your Wasm shared object (when you build it, you just build a Wasm binary, but in a shared-object style) and turn it into a P3 component. It has implementations for all the classes you'll see — streams and futures, all the P3 features.
componentize-js is moving from the old methodology to this new wit-dylib methodology, and a lot of the work is already done. Joel's been working hard on this — he's got a bunch of stuff working, along with examples. Look at the CLI example; the HTTP example is pretty intense. It's slightly ugly-looking right now — some types have yet to be hidden behind a more ergonomic interface — but it mostly looks like what you'd have for a wasi-cli run. Where you would normally use console.log, he's using the lower-level WIT interfaces to write to standard out. That will change in the future once we have a console.log shim, but this is what it looks like to write to standard out in P3 from the raw WIT interfaces: you've got a stream of u8s, you write via the stream, and you have the transmitting and receiving ends of the stream.
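To make the pattern described here concrete, here is a hypothetical sketch of writing bytes to standard out through a P3-style u8 stream with separate transmitting and receiving ends. The real componentize-js/WIT bindings differ; `streamPair`, `tx`, and `rx` are illustrative names only, not actual API.

```javascript
// Illustrative only: a toy u8 stream with a transmitting end (tx) and a
// receiving end (rx), standing in for the real P3 stream bindings.
function streamPair() {
  const buffered = [];
  return {
    // transmitting end: the guest writes chunks of u8s here
    tx: { write: (bytes) => buffered.push(...bytes) },
    // receiving end: whoever holds this end reads the bytes back out
    rx: { read: () => Uint8Array.from(buffered) },
  };
}

const { tx, rx } = streamPair();
// With no console.log shim, raw P3 means encoding your text to u8s yourself.
tx.write(new TextEncoder().encode("hello from P3\n"));
const out = new TextDecoder().decode(rx.read());
```

In the real interface the receiving end would be handed to the host's stdout rather than read back in the guest, but the encode-then-write shape is the same.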
P3 — async behavior on the guest side — is coming to the JS ecosystem quite quickly; it's partially already here. There's some more work to be done, but it's quite usable at this point. If you've been waiting to play around with it, this would be a great time.
Another interesting project is componentize-qjs, something Tomasz has been working on. It's similar to componentize-js, but whereas componentize-js depends on StarlingMonkey, which depends on SpiderMonkey, componentize-qjs depends on a Wasm-ready build of QuickJS-NG ("new generation"). For people who don't know what QuickJS is: it's a really small JavaScript engine written by Fabrice Bellard, who also wrote QEMU among a bunch of other prolific projects — I guess you could call him a programmer, but that seems like underselling it. QuickJS-NG is a fork of upstream QuickJS with more people maintaining it, and there's a Rust wrapping of the engine.
There are a lot of JavaScript engines out there, and QuickJS is surprisingly well positioned — it passes a lot of tests and is reasonably fast. Of course, it's very difficult to beat V8 and larger engines like SpiderMonkey on both speed and robustness, but QuickJS strikes a good balance, and the Rust binding makes it easier to use. That's one of the reasons Tomasz went off and did this work. One of the great things is that componentize-qjs also uses wit-dylib, so we basically get async — he's been working through essentially implementing that big header file for QuickJS, and he's got quite a bit of it done. This really sets us up for a world where componentize-js is multi-engine, or multi-runtime: in the future you'll be able to choose between StarlingMonkey and QuickJS as the engine, or another JS runtime if one turns out to be beneficial for any reason. For example, one of the nice reasons to use QuickJS is that it's much smaller — less robust and slower in some senses than StarlingMonkey, but much lighter.
Victor Adossi 17:53
Moving up the stack yet again, the next interesting bit is the JCO work. As of about an hour ago, a few JCO releases have been made today. Stream support is currently landing in JCO transpile. There are tests, and you can get an idea of what it looks like to use a stream. If you have a stream of u32s or s32s — unsigned or signed 32-bit integers — here's a function, getStreamU32. This is actually an export of a Rust component that lives in JCO as well. We call this export — it's an async export, so we await it — and then we use a checkStreamValues test helper, which awaits stream.next repeatedly.
Victor Adossi 20:06
So on the host JS side, we're passing these three values in a list through to the Rust component. Here's the Rust component — this is what it looks like to send those values: you spawn an async task and then use the write end of a stream to write all the values out. Going back to the test on the host side, the caller side: we generate some values and call the component's async export with them; the component just takes those values and returns a stream of the same values. We use checkStreamValues to call stream.next repeatedly and await each value. Here's an example that isn't hidden behind the helper — this test does the same thing with 32-bit and 64-bit floating point. The logic is the same: we pull a value from the stream, then check that the values match. The reason this is a "close to" comparison rather than exact equality is IEEE-754 floating-point precision.
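A sketch of the host-side test shape described above, with the component export mocked out. In the real JCO test, getStreamU32 is an async export of a Rust component; here a plain function wraps the values in a minimal next()-style stream so the calling pattern is visible. The names mirror the talk, but the implementation is illustrative only.

```javascript
// Stand-in for the Rust component's export: returns a minimal
// async-iterator-style stream (an object with next()) over the values.
function mockGetStreamU32(values) {
  let i = 0;
  return {
    async next() {
      return i < values.length
        ? { done: false, value: values[i++] }
        : { done: true, value: undefined };
    },
  };
}

// Awaits stream.next() repeatedly and checks each value, like the
// checkStreamValues helper mentioned in the talk.
async function checkStreamValues(stream, expected) {
  for (const want of expected) {
    const { done, value } = await stream.next();
    if (done || value !== want) {
      throw new Error(`expected ${want}, got ${done ? "end of stream" : value}`);
    }
  }
  return true;
}
```

In the real test the call site looks roughly like: await the async export to get the stream, then `await checkStreamValues(stream, values)`.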
One thing maybe interesting to note: while we do intend to eventually use the ReadableStream and WritableStream that are generally available on most JS platforms, right now we've gone with a minimal async-iterator interface. There are ongoing discussions — some people like ReadableStream and WritableStream, and some dislike them because they can be somewhat complex. Likely we're going to shoot for supporting both: a minimal object with next, or with write and writeAll, versus taking on the full complexity of ReadableStream/WritableStream. We also want streams coming out of a component to be able to appear as ReadableStream/WritableStream so they fit in with the rest of the JS platform, which is pretty important as well.
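As a sketch of what "supporting both" could look like, here is how a minimal next()-style stream might be adapted to the standard WHATWG ReadableStream (a global in Node 18+ and browsers). `toReadableStream` and `fromArray` are hypothetical helpers for illustration, not current JCO API.

```javascript
// Adapt a minimal object-with-next() stream into a standard ReadableStream.
function toReadableStream(minimalStream) {
  return new ReadableStream({
    // pull() is called whenever the consumer wants another chunk.
    async pull(controller) {
      const { done, value } = await minimalStream.next();
      if (done) controller.close();
      else controller.enqueue(value);
    },
  });
}

// A minimal next()-style stream over a fixed array of values, shaped like
// the async-iterator interface JCO currently exposes.
function fromArray(values) {
  let i = 0;
  return {
    async next() {
      return i < values.length
        ? { done: false, value: values[i++] }
        : { done: true, value: undefined };
    },
  };
}
```

Consumers could then use the standard reader API — `toReadableStream(fromArray([1, 2, 3])).getReader()` — and the stream would look like any other platform stream.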
What we have left: going through the rest of these tests. There are more tests on lifting and lowering that I want to get right before we can fully ship everything. And of course, futures. Futures are like streams, but they send exactly one value — they're not quite the same as a stream with one value, but conceptually similar. Streams will get done, futures will get done, and then — since the P3 implementation is already there — we just hook those together, and people will be free to start using P3 in JCO-based hosts.
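The stream-versus-future distinction can be sketched in plain JavaScript: a future is essentially a one-shot channel, so it can be modeled as a promise whose write end may only be used once. This is illustrative only, not JCO API.

```javascript
// Illustrative sketch: a P3-style future as a one-shot channel. The
// receiving end (rx) is a promise for the single value; the transmitting
// end (tx) may be written exactly once — unlike a stream, which can
// deliver many values over time.
function futurePair() {
  let deliver;
  const rx = new Promise((resolve) => { deliver = resolve; });
  let written = false;
  const tx = {
    write(value) {
      if (written) throw new Error("a future can only be written once");
      written = true;
      deliver(value);
    },
  };
  return { tx, rx };
}
```

Usage: `const { tx, rx } = futurePair(); tx.write(42);` and then `await rx` yields the value on the receiving side.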
Speaking of P3 and JCO-based hosts — or, well, not quite JCO-based hosts, but JS-based hosts — one of the members of the community did some pretty impressive work and vibe-coded an implementation of a WebAssembly JS host. It's essentially the same work JCO transpile does, except completely written in JavaScript — and written TypeScript-first, which has been a goal of JCO for a while, which is fantastic. The code is vibe-coded but very high quality. So, for those skeptical of large language models: Jelle — I think I'm pronouncing his name right — has done fantastic work here, and importantly it passes many tests. The test pass was not complete last I checked, but it passes a lot of the upstream tests, and after looking at the code I feel very confident in the implementation.
This codebase implements a fully-JavaScript parser and visitor — something that walks through the WebAssembly binary, does the calls, reads the intrinsics and the canonical ops and functions that make P3 work, generates the code it needs, and does essentially the same work jco bindgen does today. It's a great codebase to look at if you want to understand P3 and see an implementation that is, for the most part, very easy to read. For example, if you look at the stream code, you get an idea of what it takes to write to the host in a P3 stream context. A lot of the spec functions are implemented very similarly to how the spec is written, and of course there are types everywhere, which makes things a lot easier to read than what jco bindgen — the Rust-based version — currently looks like. It's well documented; some comments are slightly off, but you'll also recognize a lot if you've looked at the JCO implementation — this is based on that, as well as on the upstream Wasmtime code. When the LLM wrote this, all of that was in its context, so a lot of similar patterns are there.
One of the good things about this going forward: it's going to enable JCO to be multi-runtime as well. I'd love to integrate this as a backend for JCO so we can choose between the Rust implementation and this TypeScript-first JS host implementation.
Jeremy Fleitz 28:49
That last comment — you said you can choose between either a Rust-based or a TypeScript-based host. Thoughts on which one would be more performant?
Victor Adossi 29:04
When I say Rust-based versus TypeScript-based, it's really about the generation of the glue code. Theoretically Rust will be faster, but the project is js-component-bindgen, and js-component-bindgen is actually compiled to WebAssembly before it's run in JCO to produce the bindings. There's a bit of a bootstrapping thing going on there, because you need to compile js-component-bindgen before you can use it as part of JCO transpile. Theoretically, if it were just Rust — if we were doing N-API or napi-rs and wiring things together that way — we could be more sure it would be faster. But there's this translation layer to Wasm, so it's actually going to Wasm first, which is close to native speed but not exactly native speed. And of course V8 and other JavaScript engines are highly optimized, so they could be very close in speed. In practice, none of that matters much, because this is basically a compile-time concern — unless you're doing some really hot code-build-reload loop, the speed of either shouldn't be a huge deal.
Jeremy Fleitz 31:07
That makes perfect sense. Totally get it.
Victor Adossi 31:12
I can't run this demo exactly, because I don't have JSPI turned on in my Firefox setup here, but there is a demo of a Go component actually loading in the browser, which is really cool. There's a link to it in here. Let me see real quick — yes, that did run just fine.
Victor Adossi 31:47
I'll add a link to this demo here.
Victor Adossi 32:02
This is a demo from Jelle as well — Go running in the browser, using this JavaScript-based host to run the Go component. He has a fork of Go that implements P3, and you can build and run the Go code on the left in the browser; the output of the component when it runs is shown on the right, which is pretty impressive. It's fast — my computer is a little older, so it takes about four seconds, but I've seen times as fast as one second, and a lot of it is cacheable as well. One of the cool things is that you get the component Wasm right out — that's a Go Wasm binary. You could take it and run it in Wasmtime or anywhere else you can run Wasm.
Jeremy Fleitz 33:31
Anything else? If not — happy wasmCloud Wednesday. We'll be hosting this meeting next Wednesday from Amsterdam. We will see you all then. Have a great day.