Hacker News

The benchmark shows Node.js taking 1.5s(!) to start vs. LLRT taking under 100ms.

But the benchmark appears to be a very small program that imports the AWS SDK.

Later on in the readme, it's mentioned that LLRT embeds the AWS SDK into the binary.

So... How much of what the benchmark is showing here is just the fact that the embedded SDK loads much faster than application code?

I mean, it's certainly a valid optimization for Lambdas that are primarily making AWS API calls. But it'd also be interesting to know how much faster LLRT is at loading arbitrary code. As well as interesting to know if the embedding mechanism could be made directly available to applications to make their own code load faster.

(Disclosure: I work on Cloudflare Workers, a competitor. But, honestly interested in this.)



I am also a bit surprised they chose a benchmark that includes network latency. Also, the 1.5s Node.js cold start seems quite high and is not what I would expect at all, especially when looking at https://maxday.github.io/lambda-perf/

The SDKs are "bundled" in all lambda runtimes but normally not into the binary, what additional performance would that bring?


> The SDKs are "bundled" in all lambda runtimes but normally not into the binary, what additional performance would that bring?

I haven't looked at LLRT's internals, but if I were them, and I were bundling some JavaScript code into a binary and seeking to really optimize startup time, I would probably pre-parse the JavaScript text to produce QuickJS bytecode (or whatever data structures QuickJS actually interprets at runtime; no modern interpreter is actually processing raw text as it goes). In the best case, embedding something like that into the binary could mean that startup processing of the embedded code is O(1) (just like how startup time for a native-code binary is independent of its size, as long as it doesn't have global constructors).


From all that I have read, file size is a major factor in cold starts, since your zip needs to be fetched internally first; so there is some app size past which inflating the files by pre-parsing would make things slower, and only beyond a certain size would it make sense. Edit: Sorry, you were talking about the runtime, not the app bundle; just realized this now. If it's in the runtime it's probably free, since the runtime will presumably already be available on all Lambda hosts. Lambda already has an even more aggressive optimization than pre-parsing, called SnapStart: it takes a RAM snapshot after init and restores that on subsequent cold starts. But I think it's only available in some regions and for some runtimes with very slow cold starts, like Java.


Even faster would be just having FFI, so you could skip Rust <> C <> JS conversion every time you call a function.

This is what Hermes is currently doing.


True but probably very tedious to achieve full compatibility with the existing JS API that way, which seems like it was a big goal here.


Not really. I have built a fetch-like API for Hermes on top of libcurl using generated FFI bindings (they have a bindgen tool in the repo). All I had to do was write the JS wrapper that calls the underlying C library and returns a Promise.

It feels wild, people have been calling UI-libs from JS as well: https://twitter.com/tmikov/status/1720103356738474060

I can imagine someone making a libuv binding and then just recreating Node.js with the FFI.


I mean that the AWS SDK is a large API surface, they'd potentially have to re-implement all of it in Rust and then keep up with future changes, otherwise you have to tell people it's not 100% compatible.

I'm told this library is multiple megabytes of JavaScript. (That's why Node takes so long to load it...)


This benchmark is really an esoteric performance comparison, because the AWS SDK isn't monolithic anymore and you can include only the packages you need (assuming you aren't using a minifier and tree shaker to reduce the parsed code anyway). It was a monolithic library up through v2, and they're on v3 now. For most use cases, you only need to load a small subset of the AWS SDK. I have certainly written Node lambdas with cold starts of only a few hundred ms, with multiple AWS SDK libraries bundled in (Secrets Manager, DynamoDB, S3, Parameter Store…)


AWS generates all their SDKs from something like Smithy. Not to say that that doesn’t have its own problems, but given that they have an existing Rust SDK, it’s not an insoluble problem.



