Hacker News

This seems ideal for places where you use Lambda as glue, like routing requests or making authorization policy decisions. But for applications that do a lot of computation, keep in mind V8 jitless is about 3x faster than QuickJS, and with JIT it's about 30x faster than QuickJS. I'm curious to see where the break-even point is for various workloads comparing the two, and also numbers versus Bun, which starts quicker than Node but has comparable top-end JIT performance.


One good thing Lambda provides is a contractual guarantee: if you want 100,000 instances spun up, you can put it in writing and hold AWS to it

but then you realize you can achieve the same thing at a 98% discount with a traditional "monolith" setup: a load balancer soaking up all the requests

Serverless is a major failure


This is only true for certain workloads. Specifically, workloads that you can put behind a load balancer.

One huge advantage of FaaS infrastructure like Lambda is when your workload doesn't need to process >1 rps. Lambda, for example, has tight and unique integration with foundational AWS services like S3, SNS, SQS, and EventBridge.

Lambda is awesome when you need small scale reliability. I shouldn't have to run a VM 24x7 if I don't need to!

Running thousands of EC2 instances at scale? Want to handle instance health notifications?

My favorite use of Lambda is S3 object event notifications; it makes it very easy to handle a variety of odd jobs by decoupling storage and compute and managing each independently. S3 Events are exactly the right use case for LLRT.
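For reference, a minimal sketch of such a handler. The event shape follows S3's standard notification format; the log line is a stand-in for whatever odd job you'd actually run per object:

```javascript
// Sketch of a Lambda handler for S3 object-created events
// (in a real Lambda you'd export this function as `handler`).
const handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // S3 event keys are URL-encoded, with "+" standing in for spaces
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    // Stand-in for the real work: thumbnail, index, forward to SQS, etc.
    console.log(`object created: s3://${bucket}/${key}`);
  }
};
```

The same handler shape works on Node, Bun, or LLRT; the decoupling means the compute side can be swapped without touching the storage side.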


Yeah, that's what I was alluding to with "Lambda as glue"; for something like S3 object event notifications or IAM policy checks, this kind of lightweight runtime seems ideal.


Serverless has two things that I really like about it:

1. Scale to zero, which allows me to do things like automatically deploy a full environment for each branch in a project, with a full infra, frontend, backend, database, etc.

2. Good integration with IaC tools, so that I can define my infra and my runners in a single language/tool, be it terraform/cdk or something else. Most "monolithic" setups split configuring the infra and what runs in it into two tools/steps (please let me know of ones that don't!).

But if I actually run an application for a long time with somewhat consistent load, there are always cheaper, more performant, and more flexible solutions than a serverless setup.


That's an "it depends" situation. If you're using direct service integration instead of Lambda, it can be cheaper. And if you use VMs only as the front-end to push batches of requests to serverless compute, you get the best of both worlds and the cheapest by a long shot. Not everything can fit that model, but the things that do are magically inexpensive.

I’m actually working on a product to make that architecture more approachable (and cheaper yet). I’d be happy to hear from folks running network services on VMs and wishing there was a better way.


Do you mean that you can get actual scale-to-zero using that approach? I think the cost of the VM would not scale down like that, right?


That’s true, the front-facing VMs always need to be there (hence the product I’m working on — so at least you don’t need to run them). The real work is done in serverless, in batches. Basically I think the world needs something like API Gateway for UDP and custom TCP, but that targets serverless backends rather than VMs.
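The batching idea described above could be sketched roughly like this. `invokeBackend`, `maxSize`, and `maxWaitMs` are hypothetical names; a real front-end would call something like Lambda's Invoke API instead of a plain function:

```javascript
// Sketch of a front-end VM batching incoming requests into fewer,
// larger serverless invocations. `invokeBackend` is a hypothetical
// stand-in for e.g. a Lambda Invoke call.
function makeBatcher(invokeBackend, { maxSize = 25, maxWaitMs = 50 } = {}) {
  let batch = [];
  let timer = null;
  const flush = () => {
    if (batch.length === 0) return;
    const toSend = batch;
    batch = [];
    clearTimeout(timer);
    timer = null;
    // One serverless invocation per batch, not per request
    invokeBackend(toSend);
  };
  return (request) => {
    batch.push(request);
    if (batch.length >= maxSize) flush(); // size-triggered flush
    else if (!timer) timer = setTimeout(flush, maxWaitMs); // latency cap
  };
}
```

The trade-off is a small added latency bound (`maxWaitMs`) in exchange for amortizing the per-invocation overhead and cost across many requests.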


We've found it really nice for worker queues for specific workloads that fire infrequently-ish.


> Serverless is a major failure

Sure, if you try to shove everything into that mold it is, but there are absolutely times when it's the best tool for the job. I run my entire personal company on top of it, and it saves time, money, and energy.


How does it save money, can you give us an example? Thanks.


Sure: I pay less than a dollar most months, and in my busy months I'll pay a max of ~$20 for my backend.

It's event software, like for a food festival. Most months there is almost zero traffic; for 1-3 months there is lower traffic as pre-sales start, with random spikes when the event organizers advertise; then for 1-5 days there is high traffic, peaking on a single day or two during the event.

Because the load is incredibly spiky it’s nice to let Lambda scale automatically for me. Trying to scale a server, even bare metal, manually would be a pain in the ass. Even as I run this software for multiple events it’s cheaper than $5/mo and a $5 instance wouldn’t be able to handle the load at peak.

If my Lambda bill ever got higher than an average of $20/mo I might consider a bare metal or VM server, but honestly I prefer not thinking about it. And the profit dwarfs the server costs; it's not even close. The Lambda costs, even if they were 50x higher, would still be minor.


The Cloud(tm) experience


I'm guessing for HTTP the difference in performance is negligible. QuickJS is like 50x smaller than V8 so the gains in startup time and memory usage will have a deeper impact for this use case.

https://bellard.org/quickjs/bench.html


Faster at what? Runtime performance or startup performance? I bet QuickJS would be the fastest to start up from zero (no JIT = no warmup).



