290 points by zahlekhan 2 days ago | 199 comments
rented_mule 2 days ago |
blundergoat 2 days ago |
simonbw 1 day ago |
You still get some latency from the event loop, because postMessage is queued as a macrotask, which is probably on the order of 10μs. But that's the price you pay if you want to run some code in a non-blocking way.
nine_k 2 days ago |
So this holds even for L = M. The speedup is not in the language, but in the rewriting and rethinking.
evmar 2 days ago |
spankalee 2 days ago |
This new company chose a very confusing name that has been used by the Open UI W3C Community Group for over 5 years.
Open UI is the standards group responsible for HTML having popovers, customizable select, invoker commands, and accordions. They're doing great work.
moomin 1 day ago |
Looks inside
“The old implementation had some really inappropriate choices.”
Every time.
diablevv 1 day ago |
For a parser specifically, you're probably spending a lot of time creating and discarding small AST nodes. That's exactly the kind of workload where V8's generational GC shines and where WASM's manual memory management becomes a liability rather than an asset.
The interesting question is whether this scales. A parser that runs on small inputs in a browser is a very different beast from one processing multi-megabyte files in a tight loop. At some point the WASM version probably wins - the question is whether that workload actually exists in your product.
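A toy sketch (not the article's actual parser) of the allocation pattern being described: a pass that builds and immediately discards many short-lived AST nodes, which is the workload generational collectors like V8's are tuned for, since most nodes die young and are reclaimed cheaply in the nursery.

```typescript
// Hypothetical AST shape for illustration only.
interface AstNode {
  kind: string;
  children: AstNode[];
}

function buildThrowawayAst(width: number, depth: number): AstNode {
  const node: AstNode = { kind: depth === 0 ? "leaf" : "expr", children: [] };
  if (depth > 0) {
    for (let i = 0; i < width; i++) {
      node.children.push(buildThrowawayAst(width, depth - 1));
    }
  }
  return node;
}

// Build and immediately drop many small trees; almost every node is
// garbage by the next iteration, which is the young-generation fast path.
let childCount = 0;
for (let i = 0; i < 1000; i++) {
  const root = buildThrowawayAst(3, 4);
  childCount += root.children.length; // touch the tree so it isn't optimized away
}
console.log(childCount); // 3000
```

In a manual-memory WASM build, every one of those nodes is an explicit allocate/free (or arena bookkeeping) instead of a cheap nursery sweep.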
joaohaas 2 days ago |
That final summary benchmark means nothing. It lists a 'baseline' value for the 'Full-stream total' of the Rust implementation, then says `serde-wasm-bindgen` is '+9-29% slower', but it never gives us that baseline value, because clearly the only benchmark run against the Rust codebase was the per-call one.
Then it mentions: "End result: 2.2-4.6x faster per call and 2.6-3.3x lower total streaming cost."
But the "2.6-3.3x" is, by their own definition, a comparison against the naive TS implementation.
I really think the guy just prompted Claude to "get this shit fast and then publish a blog post".
pjmlp 1 day ago |
Additionally, even after those options are exhausted, often only key parts need a rewrite, not the whole thing.
However, I wonder how many care about actually learning algorithms, data structures, and mechanical sympathy in the age of Electron apps.
Quite often it feels like a rewrite is chosen because actually applying those skills is the CS stuff many think isn't worth learning.
vmsp 2 days ago |
jeremyjh 2 days ago |
> converts internal AST into the public OutputNode format consumed by the React renderer
Why not just have the LLM emit the JSON for OutputNode? Why is a custom "language" and parser needed at all? And yes, there is a cost to marshaling data, so you should avoid it where possible, and do it in large chunks when it's not avoidable. This is not an unknown phenomenon.
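A sketch of the "do it in large chunks" point: instead of crossing a parser boundary per token, buffer the streamed text and do one JSON.parse per complete payload. `OutputNode` here is a stand-in type, not the article's actual interface.

```typescript
// Hypothetical stand-in for the renderer's node format.
interface OutputNode {
  type: string;
  children?: OutputNode[];
  text?: string;
}

const chunks: string[] = [];

function onStreamChunk(chunk: string): void {
  chunks.push(chunk); // cheap: no parsing, no boundary crossing per chunk
}

function onStreamEnd(): OutputNode {
  // one parse over the whole buffer instead of many small ones
  return JSON.parse(chunks.join("")) as OutputNode;
}

// Simulated LLM stream that emits OutputNode JSON directly.
onStreamChunk('{"type":"doc","children":[');
onStreamChunk('{"type":"text","text":"hello"}]}');
const root = onStreamEnd();
console.log(root.type, root.children?.length); // doc 1
```

For incremental rendering you'd parse at coarser boundaries (e.g. per top-level node) rather than only at stream end, but the batching idea is the same.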
athrowaway3z 1 day ago |
gavinray 1 day ago |
AFAIK, you can create a shared memory block between WASM <-> JS:
https://developer.mozilla.org/en-US/docs/WebAssembly/Referen...
Then you'd only need to parse the SharedArrayBuffer at the end, on the JS side.
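A minimal sketch of that setup (works in Node; in a browser, SharedArrayBuffer requires a cross-origin-isolated context): a WebAssembly.Memory created with `shared: true` is backed by a SharedArrayBuffer, so JS and a WASM instance can view the same bytes without copying. Instantiating an actual WASM module is omitted here.

```typescript
// Shared linear memory: 1 page = 64 KiB; `maximum` is required when shared.
const memory = new WebAssembly.Memory({
  initial: 1,
  maximum: 4,
  shared: true,
});

// The backing store is a SharedArrayBuffer, not a plain ArrayBuffer.
const bytes = new Uint8Array(memory.buffer);
bytes[0] = 42; // a WASM module importing this memory would see the same byte

console.log(memory.buffer instanceof SharedArrayBuffer); // true
console.log(bytes.byteLength); // 65536
```

A module compiled against this memory (imported as its linear memory) writes its output into the buffer, and JS reads it out at the end with no copy across the boundary.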
envguard 2 days ago |
horacemorace 1 day ago |
slopinthebag 2 days ago |
nallana 2 days ago |
jesse__ 1 day ago |
ivanjermakov 2 days ago |
Dwedit 1 day ago |
Anyway, JavaScript is no stranger to breaking changes. Compare Chromium 47 to today. Just add actual integers as another breaking change, and WASM becomes almost unnecessary.
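For what it's worth, JS already has one form of "actual integers" in BigInt, though as a separate, non-mixing type rather than a change to Number itself. A quick sketch of the difference:

```typescript
// JS numbers are IEEE-754 doubles, so exact integer precision ends at 2^53.
const asNumber = 2 ** 53;
console.log(asNumber === asNumber + 1); // true: 2^53 + 1 rounds back to 2^53

// BigInt is arbitrary-precision and exact, but can't mix with Number
// in arithmetic, which is why it isn't a drop-in replacement.
const asBigInt = 2n ** 53n;
console.log(asBigInt === asBigInt + 1n); // false: exact arithmetic
```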
mohsen1 1 day ago |
It was able to beat XZ at its own game by a good margin:
sakesun 1 day ago |
bulbar 1 day ago |
I wouldn't mind reading articles that aren't about how Rust is great in theory (and maybe practice).
mwcampbell 1 day ago |
gettingoverit 1 day ago |
In their worst case it was just 5x. We clearly have some progress here.
LunaSea 1 day ago |
dmix 2 days ago |
Claude tells me this is https://www.fumadocs.dev/
nssnsjsjsjs 2 days ago |
owenpalmer 2 days ago |
kennykartman 2 days ago |
caderosche 2 days ago |
marcosdumay 2 days ago |
fHr 1 day ago |
bluelightning2k 1 day ago |
Not sold on the fundamental idea of OpenUI though. XML is a great fit for DSLs and UI snippets.
rpodraza 1 day ago |
szmarczak 2 days ago |
So you're reinventing JSON, but binary? V8's JSON is highly optimized these days [1] and can process gigabytes per second [2]; I doubt it's the bottleneck here.
[1] https://v8.dev/blog/json-stringify [2] https://github.com/simdjson/simdjson
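A rough micro-benchmark sketch of the claim; the absolute numbers are machine- and payload-dependent, so treat this as a shape to measure with, not a performance claim:

```typescript
// Build a few MiB of JSON and time a single parse of it.
const rows = Array.from({ length: 50_000 }, (_, i) => ({
  id: i,
  name: `row-${i}`,
  ok: i % 2 === 0,
}));
const payload = JSON.stringify(rows);

const t0 = performance.now();
const parsed = JSON.parse(payload);
const elapsedMs = performance.now() - t0;

const mib = payload.length / (1024 * 1024);
console.log(`${mib.toFixed(1)} MiB parsed in ${elapsedMs.toFixed(1)} ms`);
console.log(parsed.length); // 50000
```

If JSON.parse on your real payloads is already in the gigabytes-per-second range, a custom binary format has to beat that plus its own decoder's marshaling cost.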
shevy-java 1 day ago |
Rust.
WASM.
TypeScript.
I am slowly beginning to understand why WASM did not really succeed.
measurablefunc 1 day ago |
Yanko_11 1 day ago |
Yanko_11 1 day ago |
ryguz 1 day ago |
abitabovebytes 1 day ago |
ata-sesli 1 day ago |
arthurjean 1 day ago |
dualblocksgame 2 days ago |
wangnaihe 1 day ago |
patapim 2 days ago |
derodero24 1 day ago |
aimarketintel 2 days ago |
leontloveless 1 day ago |
SCLeo 2 days ago |
DaleBiagio 2 days ago |
ConanRus 2 days ago |
slowhadoken 2 days ago |
neuropacabra 2 days ago |
The port had been done in a weekend just to see if we could use Python in production. The C++ code had taken a few months to write. The port was pretty direct, function for function. It was even line for line where language and library differences didn't offer an easier way.
A couple of us worked together for a day to find the reason for the speedup. Just looking at the code didn't give us any clues, so we started profiling both versions. We found out that the port had accidentally fixed a previously unknown bug in some code that built and compared cache keys. After identifying the small misbehaving function, we had to study the C++ code pretty hard to even understand what the problem was. I don't remember the exact nature of the bug, but I do remember thinking that particular type of bug would be hard to express in Python, and that's exactly why it was accidentally fixed.
We immediately started moving the rest of our back end to Python. Most things were slower, but not by much because most of our back end was i/o bound. We soon found out that we could make algorithmic improvements so much more quickly, so a lot of the slowest things got a lot faster than they had ever been. And, most importantly, we (the software developers) got quite a bit faster.