
You can argue about Hetzner's uptime, but you can't argue about Hetzner's pricing, which is hands down the best there is. I'd rather go with Hetzner and cobble together some failover than pay AWS extortion.


For the price of AWS you could run Hetzner plus a second provider for resiliency, and still come out with a large saving.

Your margin is my opportunity indeed.


I switched to netcup for even cheaper private VPSes for personal, noncritical hosting. I'd heard of netcup being less reliable, but so far it's been 4+ months of uptime and no problems. Europe region.

Hetzner has the better web interface and supposedly better uptime, but I've had no problems with either. The web interface isn't necessary at all anyway when you're only using SSH and paying directly.


I am on Hetzner with a primary + backup server and on Netcup (Vienna) with a secondary. For DNS I am using ClouDNS.

I think I am more distributed than most of the AWS folks and it is still way cheaper.


I used netcup for 3 years straight for some self hosting and never noticed an outage. I was even tracking it with smokeping so if the box disappeared I would see it but all of the down time was mine when I rebooted for updates. I don't know how they do it but I found them rock solid.


I've been running my self-hosting stuff on Netcup for 5+ years and I don't remember any outages. There probably were some, but they were not significant enough for me to remember.


netcup is fine unless you have to deal with their support, which is nonexistent. Never had any uptime issues in the two years I've been using them, but friends had issues. Somewhat hit or miss I suppose.


Exactly. Hetzner is the equivalent of the original Raspberry Pi. It might not have all the fancy features, but it delivers, and at a price that essentially unblocks you and allows you to do things you wouldn't be able to do otherwise.


They've been working pretty hard on those extra features. Their load balancing across locations is pretty decent for example.


> I'd rather go with Hetzner and cobble up together some failover than pay AWS extortion.

Comments like this are so exaggerated that they risk moving the goodwill needle back to where it was before. Hetzner offers no service that is similar to DynamoDB, IAM or Lambda. If you are going to praise Hetzner as a valid alternative during a DynamoDB outage caused by DNS configuration, you would need to argue that a) Hetzner is a better option regarding DNS outages, and b) Hetzner is a preferable option for those who use serverless offerings.

I say this as a long-time Hetzner user. Hetzner is indeed cheaper, but don't pretend that Hetzner lets you click your way into a highly-available NoSQL data store. You need a non-trivial amount of your own work to develop, deploy, and maintain such a service.


> but don't pretend that Hetzner lets you click your way into a highly-available NoSQL data store.

The idea you can click your way to a highly available, production configured anything in AWS - especially involving Dynamo, IAM and Lambda - is something I've only heard from people who've done AWS quickstarts but never run anything at scale in AWS.

Of course nobody else offers AWS products, but people use AWS for their solutions to compute problems and it can be easy to forget virtually all other providers offer solutions to all the same problems.


>The idea you can click your way to a highly available, production configured anything in AWS - especially involving Dynamo, IAM and Lambda

With some services I'd agree with you, but DynamoDB and Lambda are easily two of their 'simplest' to configure and understand services, and two of the ones that scale the easiest. IAM roles can be decently complicated, but that's really up to the user. If it's just 'let the Lambda talk to the table' it's simple enough.
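For what it's worth, the "let the Lambda talk to the table" case really is just a small IAM policy document attached to the function's role. A minimal sketch (the table ARN and action list are illustrative placeholders, not a recommendation):

```python
import json

# Hypothetical table ARN; substitute your own region/account/table.
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/my-table"

def lambda_dynamodb_policy(table_arn):
    """Build a least-privilege IAM policy document letting a Lambda
    function read and write a single DynamoDB table."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "dynamodb:GetItem",
                    "dynamodb:PutItem",
                    "dynamodb:UpdateItem",
                    "dynamodb:Query",
                ],
                "Resource": table_arn,
            }
        ],
    }

print(json.dumps(lambda_dynamodb_policy(TABLE_ARN), indent=2))
```

Attach that to the function's execution role and the Lambda can talk to that one table and nothing else.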

S3/SQS/Lambda/DynamoDB are the services that I'd consider the 'barebones' of the cloud. If you don't have all those, you're not a cloud provider, you're just another server vendor.


> Lambda are easily two of their 'simplest'

Not if you want to build something production ready. Even a simple thing like say static IP ingress for the Lambda is very complicated. The only AWS way you can do this is Global Accelerator -> Application Load Balancer -> VPC Endpoint -> API Gateway -> Lambda!

There are so many limits on everything that it is very hard to run production workloads without painful time wasted re-architecting around them, and the support teams are close to useless for raising any limits.

Just in the last few months, I have hit limits on CloudFormation stack size, ALB rules, API gateway custom domains, Parameter Store size limits and on and on.

That is not even touching on the laughably basic tooling both SAM and CDK provide for local development if you want to work with Lambda.

Sure, Firecracker is great, the cold starts are not bad, and there isn't anybody even close in the cloud. Azure Functions is unspeakably horrible, Cloud Run is just meh. Most open-source stacks are either super complex like Knative or find it quite hard to get the same cold start performance.

We are stuck with AWS Lambda with nothing better, yes, but oh so many times I have come close to just giving up and migrating to Knative despite the complexity and performance hit.


>Not if you want to build something production ready.

>>Gives a specific edge case about static IPs and doing a serverless API backed by lambda.

The most naive solution you'd use on any non-cloud vendor (just a proxy with a static IP that routes traffic wherever it needs to go) would also work on AWS.

So if you think AWS's solution sucks, why not just go with that? What you described doesn't even sound complicated when you think of the networking magic behind the scenes that will take place if you ever do scale to 1 million TPS.


> Production ready

Don't know what you think it should mean, but for me that means:

1. Declarative IaC in either CF/Terraform

2. Fully automated recovery which can achieve RTO/RPO objectives

3. Ability to do blue/green and percentage-based or other rollouts

Sure, I can write Ansible scripts, have custom EC2 images run HAProxy and multiple nginx load balancers in HA as you suggest, or host all that on EKS or a dozen other "easier" solutions.

At that point why bother with Lambda? What is the point of being cloud native and serverless if you have to literally put a few VMs/pods in front to handle all traffic? Might as well host the app runtime too.

> doesn't even sound complicated

Because you need a full-time AWS architect who keeps up with release notes, documentation and training and constantly works to scale your application (every single component has a dozen quotas/limits and you will hit them), it is complicated.

If you spend a few million a year on AWS, then spending 300k on an engineer to just do AWS is perhaps feasible.

If you spend a few hundred thousand on AWS as part of a mix of workloads, it is not easy or simple.

The engineering of AWS, impressive as it may be, does not carry over to the products being offered. There is a reason why Pulumi, SST and AWS SAM itself exist.

Sadly SAM is so limited I had to rewrite everything in CDK within a couple of months. CDK is better, but now I am finding that I have to monkey-patch around limits in CDK with SDK code; while possible, the SDK code will not generate CloudFormation templates.


> Don't know what you think it should mean, but for me that means

I think your inexperience is showing, if that's what you mean by "production-ready". You're making a storm in a teacup over features that you automatically pick up if you go through an intro tutorial, and "production-ready" typically means way more than a basic run-of-the-mill CI/CD pipeline.

As is so often the case, the most vocal online criticism comes from those who have the least knowledge and experience of the topic they are railing against, and their complaints mainly boil down to criticising their own inexperience and ignorance. There are plenty of things to criticize AWS for, such as cost and vendor lock-in, but being unable and unwilling to learn how to use basic services is not it.


> Even a simple thing like say static IP ingress for the Lambda is very complicated.

Explain exactly what scenario you believe requires you to put a lambda behind a static IP.

In the meantime, I recommend you learn how to invoke a lambda, because a static IP is something that is extremely hard to justify.


Try telling that to customers who can only do outbound API calls to whitelisted IP addresses

When you are working with enterprise customers or integration partners, it doesn't even have to be regulated sectors like finance or healthcare; these are basic asks you cannot get away from.

People want to be able to whitelist your egress and ingress IPs or pin certificates. It is not up to me to weigh in on the efficacy of these rules.

I don't make the rules of the infosec world, I just follow them.


> Try telling that to customers who can only do outbound API calls to whitelisted IP addresses

Alright, if that's what you're going with then you can just follow an AWS tutorial:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-v...

Provision an Elastic IP to get your static address, set up a NAT gateway to handle traffic, and plug the Lambda into the NAT gateway.

Do you think this qualifies as very complicated?


This architecture[1] requires the setup of 2 NAT gateways (one in each AZ), a routing table, an Internet Gateway, 2 Elastic IPs and also the VPC. Since, as before, we cannot use Function URLs for Lambda, we will still need the API Gateway to make HTTP calls.

The only part we are swapping out is `GA -> ALB -> VPC` for `IG -> Router -> NAT -> VPC`.

Is it any simpler? Doesn't seem like it to me.

Going the NAT route also means you need intermediate networking skills to handle a routing table (albeit a simple one); half of today's developers have never used iptables or chained rules.

---

I am surprised at the amount of pushback on a simple point which should be painfully obvious.

AWS (Azure/GCP are no different) has become overly complex, with no first-class support for higher-order abstractions, and framework efforts like SAM or even CDK seem to have gotten very little love in the last 4-5 years.

Just because they offer and sell all these components independently doesn't mean they should not invest in and provide higher-order abstractions for people with neither the bandwidth nor the luxury to be a full-time "Cloud Architect".

There is a reason why today Vercel, Render, Railway and others are popular despite mostly sitting on top of AWS.

On Vercel the same feature[2] would be quite simple. They use the exact solution you suggest on top of the AWS NAT gateway, but the difference is that I don't have to know about or manage it; the large professional engineering team with networking experience at Vercel does.

There is no reason AWS could not have built Vercel-like features on top of their offerings, or do so now.

At some point small to midsize developers will avoid direct AWS, either choosing to set up Hetzner/OVH bare machines, or with a bit more budget colo with Oxide[3], or more likely just sticking to Vercel- and Railway-type platforms.

I don't know how that will impact AWS; we will all still use them. However, a ton of small customers paying close to rack rate is definitely much, much higher margin than what Vercel is paying AWS for the same workload.

--

[1] https://docs.aws.amazon.com/prescriptive-guidance/latest/pat...

[2] https://vercel.com/docs/connectivity/static-ips

[3] Would be rare, obviously only if they have the skills and experience to do so.


> With some services I'd agree with you, but DynamoDB and Lambda are easily two of their 'simplest' to configure and understand services, and two of the ones that scale the easiest. IAM roles can be decently complicated, but that's really up to the user. If it's just 'let the Lambda talk to the table' it's simple enough.

We agree, but also, I feel like you're missing my point: "let the Lambda talk to the table" is what quickstarts produce. To make a Lambda talk to a table at scale in production, you'll want to set up your alerting and monitoring to notify you when you're getting close to your service limits.

If you're not hitting service limits/quotas, you're not even close to running at scale.
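What "notify me near the limit" means in practice can be as simple as comparing observed usage against the quota with a safety margin; a toy sketch (the threshold is illustrative, not a recommendation):

```python
def near_limit(usage, quota, threshold=0.8):
    """Return True once usage crosses the given fraction of a service
    quota, i.e. the point where you want an alert rather than a throttle."""
    return usage >= quota * threshold

# e.g. Lambda's default regional concurrency quota is 1,000.
print(near_limit(usage=850, quota=1000))   # True: time to request a raise
print(near_limit(usage=200, quota=1000))   # False: plenty of headroom
```

In practice you would feed this from CloudWatch metrics and page before the throttling starts, not after.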


> The idea you can click your way to a highly available, production configured anything in AWS - especially involving Dynamo, IAM and Lambda - is something I've only heard from people who've done AWS quickstarts but never run anything at scale in AWS.

I'll bite. Explain exactly what work you think you need to do to get your pick of service running on Hetzner with equivalent fault-tolerance to, say, a DynamoDB Global Table created with the defaults.


Are you Netflix? Because if not, there's a 99% probability you don't need any of those AWS services and just have a severe case of shiny-object syndrome in your organisation.

Plenty of heavy-traffic, high-redundancy applications exist without the need for AWS's (or any other cloud provider's) overpriced "bespoke" systems.


To be honest, I don't trust myself to run an HA PostgreSQL setup with correct backups without spending exorbitant effort investigating everything (weeks/months); do you? I'm not even sure what effort that would take. I can't remember the last time I worked with an unmanaged DB in prod where I did not have a dedicated DBA/sysadmin, and I've been doing this for 15 years now. AFAIK Hetzner offers no managed database solution. I know they offer a load balancer, so there's that at least.

At some point in the scaling journey bare metal might be the right choice, but I get the feeling a lot of people here trivialize it.


If you're not Netflix then just sudo yum install postgresql and pg_dump every day, upload to S3. Has worked for me for 20 years at various companies, side projects, startups...
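That routine is simple enough to sketch; a toy version that just builds the two commands (database and bucket names are placeholders, and `pg_dump` plus the AWS CLI are assumed to be installed on the box):

```python
import datetime
import shlex

def backup_commands(db="myapp", bucket="my-backup-bucket", now=None):
    """Build the two commands behind the classic nightly routine:
    dump the database, then ship the compressed dump to S3."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    dump = f"{db}-{now:%Y%m%d}.sql.gz"
    return [
        f"pg_dump {shlex.quote(db)} | gzip > {dump}",
        f"aws s3 cp {dump} s3://{bucket}/{dump}",
    ]

# Run the pair from cron once a day; restoring is the reverse pipe:
#   aws s3 cp s3://my-backup-bucket/myapp-20240102.sql.gz - | gunzip | psql myapp
for cmd in backup_commands():
    print(cmd)
```

It buys you point-in-time-of-last-night recovery, not high availability; the replies below are right that those are different problems.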


> If you're not Netflix then just sudo yum install postgresql and pg_dump every day, upload to S3.

Database services such as DynamoDB support a few backup strategies out of the box, including continuous backups. You just need to flip a switch and never bother about it again.

> Has worked for me for 20 years at various companies, side projects, startups …

That's perfectly fine. There are still developers who don't even use version control at all. Some old habits die hard, even when the whole world moved on.


What happens when the server goes down? How do you update it?


you stand up another db server and load the last good dump into it i suppose


If it requires weeks/months to sort out setting that up and backups, then you need a new ops person, as that's insane.

If you're doing it yourself, learn Ansible, you'll do it once and be set forever.

You do not need "managed" database services. A managed database is no different from apt install postgresql followed by a scheduled backup.

It genuinely is trivial; people seem to have this impression there's some sort of unique special sauce going on at AWS when there really isn't.


That doesn’t give you high availability; it doesn’t give you monitoring and alerting; it doesn’t give you hardware failure detection and replacement; it doesn’t solve access control or networking…

Managed databases are a lot more than apt install postgresql.


> If you're doing it yourself, learn Ansible, you'll do it once and be set forever.

> You do not need "managed" database services. A managed database is no different from apt install postgresql followed by a scheduled backup.

Genuinely no disrespect, but these statements really make it seem like you have limited experience building an HA scalable system. And no, you don't need to be Netflix or Amazon to build software at scale, or require high availability.


Backups with wal-g and recurring pg_dump are indeed trivial. (Modulo an S3 outage lasting so long that your WAL files fill up the disk and you corrupt the entire database.)

It's the HA part, especially with a high-volume DB that's challenging.


But that's the thing: if I have an ops guy who can cover this then sure, it makes sense, but who does at an early stage? As a semi-competent dev I can set up a Terraform infra and be relatively safe with RDS. I could maybe figure out how to do it on my own in some time, but I don't know what I don't know, and I don't want to spend a weekend debugging a production DB outage because I messed up the replication setup or something. Maybe I'm getting old but I just don't have the energy to deal with that :)


From your comment, you don't even have the faintest idea of what the problem domain is. No wonder you think you know better.


> Are you Netflix? Because if not, there's a 99% probability you don't need any of those AWS services and just have a severe case of shiny-object syndrome in your organisation.

I think you don't even understand the issue you are commenting on. It's irrelevant whether you are Netflix or some guy playing with a tutorial. One of the key traits of serverless offerings is how they eliminate the need to manage and maintain a service, or even worry about whether you have enough computational resources. You click a button to provision everything, you configure your clients to consume that service, and you are done.

If you stop to think about the amount of work you need to invest just to arrive at a point where you can actually point a client at a service, you'll see the value of serverless offerings.

Ironically, it's the likes of Netflix who can put together a case against using serverless offerings. They can afford to have their own teams managing their own platform services at the service levels they are willing to pay for. For everyone else, unless you are in the business of managing and tuning databases, or you are heavily motivated to save pocket change on a cloud provider bill, the decision process is neither that clear nor favours running your own services.


> Plenty of heavy-traffic, high-redundancy applications exist without the need for AWS's (or any other cloud provider's) overpriced "bespoke" systems.

And almost all of them need a database, a load balancer, maybe some sort of cache. AWS has got you covered.

Maybe some of them need some async periodic reporting tasks. Or to store massive files or datasets and do analysis on them. Or transcode video. Or transform images. Or run another type of database for a third party piece of software. Or run a queue for something. Or capture logs or metrics.

And on and on and on. AWS has got you covered.

This is Excel all over again. "Excel is too complex and has too many features, nobody needs more than 20% of Excel. It's just that everyone needs a different 20%".


You're right, AWS does have you covered. But that doesn't mean that's the only way of doing it. Load balancing is insanely easy to do yourself, databases even easier. Caching, ditto.

I think a few people who claim to be in devops could do with learning the basics of how things like Ansible can help them, as there's a fair few people who seem to be under the impression AWS is the only and the best option, which unless you're FAANG really is rarely the case.


You can spin up a redundant database setup with backups and monitoring and automatic failover in 10 minutes (the time it takes in AWS)? And maintain it? If you've done this a few times before and have it highly automated, sure. But let's not pretend it's "even easier" than "insanely easy".

Load balancing is trivial unless you get into global multicast LBs, but AWS have you covered there too.


You could never run a site like hacker news on a single box somewhere with a backup box a couple of states away.

(/s, obviously)


And have the two fail at the same time because similarly old hardware with similarly old and used disks fails at roughly the same time :)


> You're right, AWS does have you covered. But that doesn't mean that's the only way of doing it. Load balancing is insanely easy to do yourself, databases even easier. Caching, ditto.

I think you don't understand the scenario you are commenting on. I'll explain why.

It's irrelevant that you believe you can imagine another way to do something, and that you believe it's "insanely easy" to do it yourself. What matters is that others can make that assessment themselves, and what you are failing to understand is that when they do, their conclusion is that the easiest way by far to deploy and maintain those services is AWS.

And it isn't even close.

You mention load balancing and caching. The likes of AWS allow you to set up a global deployment of those services with a couple of clicks. In AWS it's a basic configuration change. And if you don't want it any more, you can tear everything down with a couple of clicks as well.

Why do you think a third of all the internet runs on AWS? Do you think every single cloud engineer in the world is unable to exercise any form of critical thinking? Do you think there's a conspiracy out there to force AWS to rule the world?


If you need the absolutely stupid scale DynamoDB enables, what is the difference compared to running, for example, FoundationDB on your own on Hetzner?

You will in both cases need specialized people.


> Hetzner offers no service that is similar to DynamoDB, IAM or Lambda.

The key thing you should ask yourself: do you need DynamoDB or Lambda? Like "need need" or "my resume needs Lambda".


> The key thing you should ask yourself: do you need DynamoDB or Lambda? Like "need need" or "my resume needs Lambda".

If you read the message you're replying to, you will notice that I singled out IAM, Lambda, and DynamoDB because those services were affected by the outage.

If Hetzner is pushed as a better or even relevant alternative, you need to be able to explain exactly what you would say to Lambda/IAM/DynamoDB users to convince them that they would do better on Hetzner instead.

Making up conspiracy theories about CVs doesn't cut it. Either you know something about the topic and can actually support this idea, or you're an eternal September admission whose only contribution is noise and memes.

What is it?


Well, Lambda scales down to 0 so I don't have to pay for the expensive EC2 instan... oh, wait!


> click your way into a HA NoSQL data store

Maybe not click, but Scylla’s install script [0] doesn’t seem overly complicated.

0: https://docs.scylladb.com/manual/stable/getting-started/inst...


TBH, in my last 3 years with Hetzner, I never saw any downtime on my servers other than my own routine maintenance for OS updates. Location: Falkenstein.


And I have seen them delete my entire environment including my backups due to them not following their own procedures.

Sure, if you configure offsite backups you can guard against this stuff, but as with anything in life, you get what you pay for.


You really need your backup and failover procedures though; a friend bought a used server and the disk died fairly quickly, leaving him sour.


THE disk?

It's a server! What in the world is your friend doing running a single disk???

At a bare minimum they should have been running a mirror.


He's a younger guy with ambitions but little experience. I think my point was that used servers at Hetzner are still used, so if someone has been running disk-heavy jobs you might want to request new disks or multiple ones and not just pick the cheapest options at the auction.

(Interesting that an anecdote like the above got downvoted)


> (Interesting that an anecdote like the above got downvoted)

experts almost universally judge newbies harshly, as if the newbies should already know all of the mistakes to avoid. Things like this are how you learn what mistakes to avoid.

"hindsight is 20/20" means nothing to a lot of people, unfortunately.


I do have an HA setup and backups for the DB that run periodically to S3.


What is the Hetzner equivalent for those in Windows Server land? I looked around for some VPS/DS providers that specialize in Windows, and they all seem somewhat shady with websites that look like early 2000s e-commerce.


It would be interesting to see AI lawyer arguing a case in court.


Some of the early lawyerly uses of AI have been no bueno.[0] Yet the legal need to produce knowledge from huge amounts of text is such an obvious alignment with LLMs...

[0] https://www.theguardian.com/world/2024/feb/29/canada-lawyer-... https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-... https://www.npr.org/2023/12/30/1222273745/michael-cohen-ai-f...


I think most everyone would agree that using early LLMs (and personally I'd still consider current LLMs to be early) in legal contexts is ill-advised, at best. Circle back to this question in 5 years and I think the response will be very different.


Adjusted for Elon Time. Someone actually built a converter, which is hilarious: https://elontime.io


thanks a lot!


Wouldn't want to be the guy who pushed this particular commit. It's ironic that the company that is supposed to prevent this sort of thing caused the biggest worldwide outage ever. Crowdstrike is finished. Let's hope this will result in at least a small increase in desktop Linux market share.


Just a small reminder that it's never "the guy" and always "the process", or lack thereof.


So it's "the guy" whose job was to make and enforce "the process", got it.


Yeah, but heads will have to roll for this one, the world will be calling for blood, so who better if not "the guy"?


When the world calls for blood against your organization, it's a test of the organization's character: will they throw a scapegoat under the bus (even if there is a directly responsible person) or will they defend their staff, accept fault, and demonstratively improve process?


The answer is yes


The management that enabled the process. And follow the chain to the top; they are paid very well to own the risks.


More importantly, the companies that enabled auto-update from a vendor to production rather than having a validation process. This sort of issue can happen with any vendor; penalising the vendor won't help the next time this happens.


Was there a way to not enable these channel updates? If so, would you still check all the mandatory security measures when being audited?


The way is to not install third-party software with kernel-level access that you can't stop from pulling remote updates.

How does that pass a security audit in the first place?


It’s both. If you’re an engineer and you push out shitty code that takes down 911 systems and ambulances, you f’ed up. Push back against processes that cause harm, or have the potential to cause harm. You are ultimately responsible for your actions. No one else. The excuse of “I was just following orders” has been dead and buried since WW2.

Yeah, ideally management should know better. But management aren’t usually engineers. Even when they are, they don’t deal with the code on a day to day basis. They usually know much less about the actual processes and risks than the engineers on the ground.


If one of the people I manage is not up to the task, the fault is mine: I hired them. I should set up a system of hard-gained trust and automation to avoid, or at least minimize, them fucking up. When fuckups happen, they are my fuckups. Critical systems don't survive on trust alone, obviously. If I don't set up the teams and the systems properly, my bosses will also take the blame for having put me in that position. I'm not advocating for the lower layers to avoid responsibilities, but if a head needs to roll you should look above. That said, people are hardened by fuckups, so there are usually better solutions than rolling heads.


Right. In one sense, what we're talking about is different ideas on how companies / teams work. There's a wonderful book called "Reinventing Organizations" by Laloux that I recommend to basically everyone. In the book, the authors lay out a series of different organisational structures which have been invented and used throughout the ages. The book talks about early tribes where the big man tells everyone what to do (eg mobsters), to rigid hierarchies + fixed roles (the church, schools) to modern corporations with a flexible hierarchy, and some organisation structures beyond that.

The question of "who is ultimately responsible" changes based on how we see the organisation. In organisations where the chief decides everything, it's up to the chief to decide whether to place blame on someone or not. In a modern corporation, people at the bottom of the hierarchy are shielded from the consequences of their actions by the corporation. But there's also a weird form of infantilisation that goes along with that: we don't actually trust people on the ground to take responsibility for the work they do. All responsibility goes up the management hierarchy, along with control, power and pay. It's sort of assumed that people who haven't been promoted are too incompetent to make important choices.

I don't think that's the final form of how high-functioning teams should work. It's noble that you're willing to put your head on the chopping block, but I think it's also really important to give maximal agency to your employees. And that includes making people feel responsible and empowered to fix problems when they see them. You get more out of people by treating them like adults, not children. They learn more, and I think that's usually, in the long run, better for everyone.

I agree that if a company has a bad process, employees shouldn't be fired over it. But I also think if you're an employee in a company with a bad process, you should fight to make the process better. Never let yourself be complicit in a mistake like this.


Thank you for the reading suggestion!


> It’s both. If you’re an engineer and you push out shitty code that takes down 911 systems and ambulances, you f’ed up.

This is wrong. If a company is developing that kind of software, it is the responsibility of the company to provide a certain level of QA before they release it. And no, it's not that "engineers are pushing out shitty code", but that the shitty company allows shitty code to be deployed on customers' machines.


Yeah, agreed. Who cares whose semicolon it was?

What matters is how this was deployed without any testing.


> The excuse of “I was just following orders” has been dead and buried since WW2.

Only for the losers.


Many major companies have post-mortem reviews for this kind of thing. Most of the big failures we see are a mix of people being rushed, detection processes failing, and a miscommunication/misunderstanding of the effects of a small change.

One analogy is rounding: one rounding makes no difference to a transaction, but multiple systems rounding in the same direction can have a large-scale impact. It's not always rounding money; it can be error handling. A stops at the error, B goes on, and it turns out they're not in sync.
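The rounding point is easy to demonstrate; a toy example where two systems each round "correctly" but in different directions:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

def total(amounts, rounding):
    """Sum a list of amounts, rounding each to whole cents first,
    the way each independent system would book the transaction."""
    cent = Decimal("0.01")
    return sum(a.quantize(cent, rounding=rounding) for a in amounts)

amounts = [Decimal("9.995")] * 1_000_000   # a million identical transactions
up = total(amounts, ROUND_HALF_UP)         # one system rounds half up
down = total(amounts, ROUND_DOWN)          # the other truncates
print(up - down)   # a cent of disagreement per transaction: 10000.00
```

Each system is internally consistent; the $10,000 gap only appears when you reconcile them.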

Which guy is it? The person who pressed the button? The manager who gave that person more than one task that day? The people who didn't sufficiently test the detection process? The people who wrote the specs without sufficient understanding of the full impact? The person who decided to lay off the people who knew the impact three months ago?


> Crowdstrike is finished

Unlikely, just as Solarwinds wasn't finished when they distributed malware that got government agencies hacked. You underestimate the slow turning radius of giant company purchasing departments.


As I posted elsewhere, the SolarWinds stock has never recovered from its high before the hack. And it is on a downward trend.


Crowdstrike is finished? Ha!

SolarWinds got the US government hacked by the Russians and they still exist.


Interestingly SolarWinds is headquartered in Austin and CrowdStrike recently moved there too.


Why is that interesting?


Implying a geographic coincidence


The SolarWinds stock has never recovered to its pre-hack high. And it is on a downward trend.


Enterprise Linux systems also employ CrowdStrike or similar "security" products as a mandatory part of their IT deployments. Often (always?) this is because companies want certification for their secure processes, in order to sell to governments or large corporations that require them.


Crowdstrike in their official statement said "Linux and MacOS not affected". Are there any reports stating otherwise?


Not affected because the bug is in the Windows-specific code, not because it works so much differently on Linux.


Why the fuck didn't MSFT just do blue/green canarying? No update should be rolled out to a billion devices at once until it's baked in a million devices for a bit, and that only after baking in 10,000 devices for a bit.


CrowdStrike is not MSFT. This also affected Linux installations with CrowdStrike installed, from what I've read.


Source? I have not seen anything about that, and CS themselves say it's Windows only.


Crowdstrike broke the update for Windows only this time. Look around, though: they shipped a bad update on Linux earlier this year (although that one only broke some of the Linux installs).


Thanks, sorry, I commented before getting my facts in order. Comment still stands as applied to CrowdStrike.


> Crowdstrike is finished

Boeing is still there... we'll see


Yes indeed. That's kind of how Chernobyl happened.


> Crowdstrike is finished.

We thought about Microsoft the same way, some 15 years ago. /s


Les Paul.


But what would it take? Another crash or two? Boeing is like one of those big supertankers that take forever to change course - even if the iceberg was a kilometre away, there's nothing they could do but ram into it.


This is the only viable future for space travel: in-orbit assembled spaceships with nuclear thermal propulsion. Travel to Mars with conventional chemical propulsion just takes way too long.


I can really see this happening with molten salt reactors finally getting traction. China has already built a demonstrator and is now building a full-scale MSR, and now the US is finally building a demonstrator as well.

https://www.thecooldown.com/green-business/us-nuclear-test-r...


No, molten salt reactors are not the right technology for nuclear propulsion. The idea of a nuclear thermal rocket engine is to heat up very light molecules to very high temperatures, and so to achieve higher exhaust velocities than chemical rockets. If you plug a higher exhaust velocity in the rocket equation, you end up needing less fuel mass for the same cargo mass. In practice, the best nuclear thermal rockets achieve a lower temperature than chemical rockets, but they can dedicate it to heat only hydrogen (H2), rather than the combustion products in chemical rockets (such as H2O or CO2), so overall the exhaust velocity can be approximately twice as high.

Still, temperature is quite important, you want the core of the reactor to run as hot as possible. You are limited by the fact that you don't want the core to disintegrate. The NERVA project [1] achieved temperatures in excess of 2200 K.

Molten salt reactors are designed to reach about 1000 K. That gives up most of the benefit of using a nuclear reactor. You would still beat chemical rockets, but only by 25%, not by a factor of 2. Why would you do that? If you build on the NERVA project and use TRISO fuel (which was not available at the time) you can end up with a specific impulse of more than 1000 s, which is 2.2 times higher than what the best chemical rockets can deliver, and 2.85 times higher than SpaceX Starship.

[1] https://en.wikipedia.org/wiki/NERVA#Reactor_and_engine_test_...
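To make the payoff concrete, here's a rough back-of-the-envelope sketch in Python of how propellant fraction falls as Isp rises, per the rocket equation. The ~6 km/s delta-v is just an illustrative round number for a Mars transfer, not a figure from the comment above:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v, isp):
    """Fraction of initial mass that must be propellant (Tsiolkovsky rocket equation)."""
    return 1 - math.exp(-delta_v / (isp * G0))

dv = 6000.0  # m/s, illustrative Mars-transfer delta-v
chem = propellant_fraction(dv, 450)   # good chemical engine (hydrolox)
ntr = propellant_fraction(dv, 1000)   # solid-core NTR with TRISO fuel, per the comment

print(f"chemical (Isp 450 s):  {chem:.0%} of launch mass is propellant")
print(f"nuclear  (Isp 1000 s): {ntr:.0%} of launch mass is propellant")
```

Roughly 74% propellant for the chemical stage versus roughly 46% for the nuclear one at the same delta-v - that difference compounds fast once you stack maneuvers.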


There are non-nuclear alternatives, particularly inward of the asteroid belt.

PV in space can be made very thin. The absorption length for photons in CdTe, for example, is just 0.1 microns. Without having to be mechanically robust against wind and rain, great gossamer PV arrays could have very high power/mass ratios. These could drive plasma engines with high Isp.


None of that has anything to do with reducing travel times to Mars unless your entire payload is on the order of a couple of pounds.

That's like replying to someone saying it takes too long to drive from New York to Seattle by saying that we could build an efficient 1,000-mile-per-gallon car that travels at 0.01 miles per hour. How efficient the vehicle is isn't the slightest bit useful for solving their complaint.

A high thrust to weight ratio when the weight is a couple of pounds isn't useful. What's useful is having a huge amount of thrust that's large enough to shove multiple tons of mass at high accelerations.


Yeah, no, that's nonsense. There's nothing preventing anyone from scaling up such systems. Remember, construction in space was already stipulated.


How large would such a construction need to be to accelerate 100 tons at 1g? Maybe someone could do the math for us. I assume it's on the order of dozens/hundreds of miles long per dimension and would be completely infeasible compared to just using an engine with high thrust to begin with.

[Edit]

Here's some rough math.

From wiki, assume a typical ion engine can produce 150mN of thrust from 4,000 W of power input.

Using a space station solar panel as an example of solar collection in space, each space station solar panel is 420 square meters in size and produces 31,000 W of power.

One space station solar panel would then provide (31,000 W / 4,000 W) * 150 mN = 1,162.5 mN, or about 1.16 N of force.

The force required to accelerate 100 tons at 1g is 996,402 Newtons.

To generate that much force, you would then need 996,402 N / 1.1625 N ≈ 857,000 space station solar panels worth of power.

As one space station solar panel is 420 square meters, that requires 857,000 * 420 ≈ 360,000,000 square meters of solar panels.

Assuming square construction, each side would need to be about 19,000 meters, or roughly 12 miles long.

I assure you, just using high thrust engines makes infinitely more sense than building a pv-based ship scaled up so far that its dimensions are roughly 12 miles long on each edge. At least for any time soon ..
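For anyone who wants to check the arithmetic, here's the same estimate as a small Python sketch, using the same assumed figures as above (150 mN per 4,000 W for a typical ion engine, 31 kW and 420 m^2 per ISS-style array):

```python
# Rough feasibility check: solar-electric thrust at 1 g for a 100-ton ship.
thrust_per_watt = 0.150 / 4000          # N/W, typical ion engine (from the comment)
panel_power = 31_000.0                  # W, one ISS solar array wing
panel_area = 420.0                      # m^2, one ISS solar array wing

thrust_per_panel = panel_power * thrust_per_watt   # ~1.16 N per panel
force_needed = 996_402.0                # N, 100 tons at 1 g (figure from the comment)

panels = force_needed / thrust_per_panel
side_m = (panels * panel_area) ** 0.5   # side of a square array with that total area
print(f"{panels:,.0f} panels; square side ≈ {side_m / 1000:.0f} km")
```

Around 857,000 panels and a square roughly 19 km on a side - enormous, though not astronomically so.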


Why would you need to accelerate anything at 1 g? That's a ridiculously high acceleration for getting to Mars. What matters more is the total delta-V, and if it can deliver it in time short compared to the transit time to Mars.

High Isp solar electric systems would not exploit the Oberth effect (likely they would start in high Earth orbit) so they don't have a high acceleration need from that.

If you want to accelerate to 15 km/s in 1 week, that's 2.5 milligees.
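The 2.5-milligee figure checks out - a one-liner's worth of Python, using only the numbers in this comment:

```python
dv = 15_000.0            # m/s, target delta-v
t = 7 * 86_400           # one week, in seconds
accel = dv / t           # required constant acceleration
print(f"{accel:.4f} m/s^2 ≈ {accel / 9.81 * 1000:.1f} milligee")  # 0.0248 m/s^2 ≈ 2.5 milligee
```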


Accelerating/decelerating at 1G for the entire journey would be the perfect scenario. Not only would that be the shortest travel time, it would also maintain gravity inside the ship the whole time. If this is not the ultimate goal being worked towards, then we may as well just give up now. Nuclear is where it's at - it has the best power-to-weight ratio of any generation method known to man.


It's about as realistic as propelling the vehicle with unicorn farts. In particular, the kinds of nuclear propulsion being discussed in this thread could not do it. Solid core nuclear thermal rockets using hydrogen have an Isp of about 1000, so they could accelerate a vehicle at 1 gee for less than an hour.

The power/weight ratio of nuclear rockets actually sucks, compared to chemical rockets. Conveying heat through a solid/fluid interface is awkward and slow compared to just making it in situ by combustion.
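The "less than an hour" claim follows directly from the rocket equation: at a constant 1 gee, burn time works out to Isp * ln(mass ratio). A small Python sketch, with the mass ratios picked purely for illustration:

```python
import math

isp = 1000.0  # s, solid-core nuclear thermal rocket on hydrogen (from the comment)

# At constant 1-gee acceleration, thrust = m*g0, so dm/dt = -m*g0/ve with
# ve = Isp*g0; integrating gives burn time t = Isp * ln(m0/mf).
for mass_ratio in (5, 10, 20):
    t = isp * math.log(mass_ratio)
    print(f"mass ratio {mass_ratio:2d}: 1-gee burn lasts {t / 60:.0f} min")
```

Even at a mass ratio of 20 (95% propellant, which is already extreme), the burn ends after about 50 minutes.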


NASA: “Here are some idea around Nuclear propulsion in space”

HN: > It's about as realistic as propelling the vehicle with unicorn farts.


The idea I was responding to there was not NASA's.


Another realistic and cost-effective scenario would be Von Braun Wheel or O'Neill Cylinder stations in this orbit: https://en.wikipedia.org/wiki/Mars_cycler


For crewed missions, wouldn't the low thrust lead to very long transit times anyway, in a Mars mission scenario?


That depends on the power/mass ratio of the system.

Lower acceleration systems can also be used to preposition chemical fuels for use by crewed vehicles.


> Without having to be mechanically robust against wind and rain,

What about micro meteorids?


Very sparse, and in a properly designed PV cell a hole wouldn't matter.


TIL, always thought it would. I stand corrected!


I think nuclear electric propulsion is probably a viable option as well.


Indeed. Probably a combination of both nuclear-thermal and nuclear-electric (ion drive, as it's otherwise called). The nuclear thermal would provide the initial boost, then the ion drive could do continuous acceleration for the first half of the trip and deceleration for the second half. That would get a ship to Mars in a fraction of the time.


I live in one of those upscale glass tower estates in central London. There is fancy valet underground parking for some 500 cars. There are 2 slow chargers in total that don't even work 50% of the time. Owning an EV is a fucking joke for most people and I don't see that changing any time soon. Rebuilding the entire world's power grid is gonna take forever.


His death, while tragic, is unfortunately not surprising. Apparently he had had massive addiction issues for a long time. In 2019 he was in a coma from a ruptured colon caused by opioid abuse and the doctors gave him a 2% chance of making it, so he was living on borrowed time.


I'd say that if a company doesn't want to invest time in interviewing a candidate in person and instead uses automated tests, then the candidate has every right to respond in the same manner by using AI to beat those automated tests.

