This lets you call Node binaries from Rust code using a pool of workers. This is useful for projects that are mostly written in Rust
but need to leverage Node packages for some tasks, such as using TypeScript's API or performing SSR with a JS framework.
Using a pool of workers is essential to avoid the cost of booting Node binaries on every call. Medium to large Node binaries can take a second or more to boot depending on their imports, and a pool of long-lived processes saves you that time on subsequent runs. If your program needs to interact with a Node binary multiple times, a reusable and long-lived process will save you a LOT of time.
This solution differs from calling Rust within Node, as you would with solutions like napi-rs. If most of your code is written in Rust, the additional overhead of creating and maintaining Node addons isn't worth it.
The pool spawns a long-lived Node process and communicates with it using stdin/stdout, which makes this solution cross-platform.
To communicate with it, node-workers provides a bridge that must be used when creating the Node binary you want to interact with:
// worker.js
const { bridge } = require('rust-node-workers');
bridge({
  ping: (payload) => {
    console.log(`pong at ${new Date()}`);
    return payload * 2;
  }
});
Then this worker can be tasked from your Rust code:
use node_workers::WorkerPool;
let mut pool = WorkerPool::setup("worker", 4); // 4 max workers
let result = pool.perform::<u32, _>("ping", vec![100]).unwrap();
println!("result: {:?}", result);
The npm package needs to be installed to set up the bridge:
yarn add rust-node-workers
In your Rust project:
[dependencies]
node-workers = "0.8.1"
This crate exposes a WorkerPool you can instantiate with the maximum size of the pool. When a task needs to be performed, a new worker is created if needed, up to that maximum.
let pool = WorkerPool::setup("examples/worker", 4); // 4 max workers
Then, you can call tasks from your worker using run_worker or perform.
run_worker performs a task on a worker in a new thread. Using get_result on the thread will wait for the worker to finish and deserialize the result if there is any.
let mut pool = WorkerPool::setup("examples/worker", 2);
pool.run_worker("fib", 80u32); // on a separate thread
let thread = pool.run_worker("fib2", 40u32);
// join the thread's handle
let result = thread.get_result::<u32>().unwrap();
println!("run_worker result: {:?}", result);
perform takes an array of data to process and runs a worker for each of its values.
let files = /* vector of TypeScript files */;
// execute the command "getInterfaces" on every file
// each executed worker will return an array of interfaces (Vec<Interface>)
let interfaces = pool
  .perform::<Vec<Interface>, _>("getInterfaces", files)
  .unwrap();

// it may be beneficial to send multiple files to each worker instead of just one
let file_chunks = files.chunks(30);
let interfaces = pool
  .perform::<Vec<Interface>, _>("getInterfacesBulk", file_chunks)
  .unwrap();
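For reference, the Interface type used above only has to match the JSON returned by the worker. A minimal sketch, assuming results are deserialized with serde and using hypothetical field names and file paths:
use serde::Deserialize;

// hypothetical shape: it simply mirrors what the worker returns
#[derive(Debug, Deserialize)]
struct Interface {
  name: String,
  properties: Vec<String>,
}

// hypothetical list of files to hand to the workers
let files: Vec<String> = vec!["src/a.ts".into(), "src/b.ts".into()];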
You can use EmptyPayload for tasks that don't need any payload.
pool.run_worker("ping", EmptyPayload::new());
For additional usage, check out the documentation as well as the examples in the repo.
Building:
yarn build
Running examples:
cargo run --example
Publishing a new version:
- git tag vx.y.z
- Adjust package.json and crate version
- git cliff --output CHANGELOG.md
- git commit -m "chore: release" && git push --follow-tags
- npm publish --access=public