67 points by goodoldneon 5 days ago | 30 comments
socketcluster about 16 hours ago |
baublet about 16 hours ago |
E.g., this certainly helps when the event loop is blocked, but so could FFI calls to another language for the CPU-bound work. I’d only reach for a new Node thread if those didn’t pan out, because there’s usually a LOT that goes into spinning up a new Node process in a container (isolating the data, making sure any bundlers and transpilers are working, making sure the worker doesn’t pull in all the app code, etc.).
Sidecar processes aren’t free, either. Now your processes are contending for the same pool of resources and can’t share anything, which IME means a higher likelihood of memory issues, especially if nothing limits how many workers your app can spawn.
Still, good article! Love seeing the ways people tackle CPU bound work loads in an otherwise I/O bound Node app.
kketch about 15 hours ago |
Having a separate isolate in each thread spawned with worker_threads, at a minimal footprint of ~10MB, does not seem like a high price to pay. It's not like you're going to spawn hundreds of them anyway, is it? You will very likely spawn as many threads as your CPU cores can handle concurrently, or fewer. You typically don't run hundreds of OS threads; you use a thread pool and cap the concurrency by setting a maximum number of threads to spawn.
This is also how goroutines work under the hood: they are "green threads", an abstraction that operates on top of a much smaller OS thread pool.
Worker threads have constraints but most of them are intentional, and in many cases desirable.
I’d also add that SharedArrayBuffer doesn’t limit you to “shared counters or coordination primitives”. It’s just raw memory; you could store structured data in it using your own memory layout. There are libraries out there that implement higher-level data structures this way already.
esprehn about 15 hours ago |
Unfortunately, the JS standards folks have so far refused to make this situation better. E.g., it should just be `new Worker(module { ... })`:
https://github.com/tc39/proposal-module-declarations
chrisweekly about 16 hours ago |
1. https://docs.platformatic.dev/docs/overview/architecture-ove...
Jcampuzano2 about 16 hours ago |
Almost none of the bundlers treat workers consistently (if they consider them at all), and all require you to work around them in strange ways.
It feels like there is a lot workers could help with in the web world, especially for complex UIs and moving computation off the main thread, but they are just so clunky to use that almost nobody bothers.
The ironic part is that if bundlers, transpilers, compilers, etc. weren't used at all, workers would probably see much more widespread use.
vilequeef about 17 hours ago |
And you can make it thread-like, if you prefer, by creating a “load balancer” setup from the start to keep the processes CPU-bound.
require('os').cpus().length
Spawn a process for each CPU, bind the data you need, and it can feel like multithreading from your perspective. More here: https://github.com/bennyschmidt/simple-node-multiprocess
algolint about 13 hours ago |
You can just set up a separate child process for that. The main event loop, which handles connections, should just coordinate and delegate work to other programs and processes. It can await their completion asynchronously; that way the event loop is not blocked.
I recall people have been able to get up to around a million (idle) WebSocket connections handled by a single process.
I was able to comfortably get 20k concurrent sockets per process each churning out 1 outbound message every 3 to 5 seconds (randomized to spread out the load).
It is a good thing that Node.js forces developers to think about this, because most other engines that try to hide this complexity tend to impose a significant hidden cost on the server in the form of context switching... With Node.js there is no such cost: your process can basically have a whole CPU core to itself, and it can orchestrate other processes in a maximally efficient way if you write your code correctly, which Node.js makes very easy to do. Spawning child processes and communicating with them in Node.js is a breeze.