dazzawazza's comments

I'm really enjoying some of the innovation in the BSD space at the moment.


The BSD space has always been ahead in some ways. They can move forward more freely.


(All?) the BSDs ship the kernel and userland as a single release. They don't have to worry about breaking some program that someone compiled 5 years ago.


They still try not to break things, because you might be running a new kernel with an old userland (this is part of the typical upgrade process), or you may have third-party programs that were compiled some time ago. I'm only familiar with FreeBSD; statically linked programs are usually fine because old syscalls are typically maintained for a long time, and dynamically linked programs will tend to be OK if you install the compat libraries.
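
A handy sanity check during an upgrade is freebsd-version(1), which shows when the kernel and userland are out of step (the output below is illustrative):

    # -k: installed kernel, -r: running kernel, -u: installed userland
    $ freebsd-version -kru
    14.1-RELEASE-p3
    14.1-RELEASE-p3
    14.1-RELEASE-p5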

There have been errors and exceptions, of course.

I think the real benefit is they don't have to worry about people trying to run new userland with old kernels; that's explicitly not supported and stuff in base usually doesn't worry too much about it. So if netstat needs a new kernel interface to be faster, the netstat binary in the new release may not work with old kernels, c'est la vie.


BSDs do not break backwards compatibility (at least from what I know from FreeBSD and NetBSD). You _can_ disable backwards compatibility via kernel-level options and by not installing certain distribution sets -- but the default is to remain backwards compatible.
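
FreeBSD, for instance, keeps old syscall ABIs behind kernel options that the stock GENERIC config enables by default; roughly (the exact set varies by release):

    # from a GENERIC-style kernel config; the exact list varies by release
    options COMPAT_FREEBSD32   # run 32-bit binaries on a 64-bit kernel
    options COMPAT_FREEBSD11   # ABI compatibility with FreeBSD 11 binaries
    options COMPAT_FREEBSD12   # ABI compatibility with FreeBSD 12 binaries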

What shipping kernel+userland together helps with is implementing features end-to-end: if you have to change how a certain aspect of the network configuration happens, you can do the kernel changes and the userland changes in unison. Which explains why you have a unique set of network tools, great documentation, a simple build system that can upgrade the machine...


I recently migrated one of my FreeBSD servers to Hetzner and it was a breeze. The only wrinkle was that, until you've completed a billing cycle, you can't host an email server as the required ports are blocked.

For me this was fine, and I understand why they do it, but it wasn't clear to me at the start.


Note that if your credit card expires, Hetzner will just turn off networking to your stuff until you fix it. No warnings given, and you'll find out when your alerting/customers/staff contact you to let you know something is wrong.

Guess how I found out... :(


You can pre-charge your account to give yourself a buffer in case your payment method doesn't work for whatever reason, although it requires a bank transfer.


While I guess that's useful, when my CC expired other places sent reminders/warnings, which is the standard business approach.

It was only Hetzner which didn't, and instead they turned off networking to all of our stuff (dedicated servers, some VMs, etc) with no warning. Then their support team screwed us around for a while as well.

I'm about as unimpressed with them as it's possible to get. :(


The standard business approach is to update card details before the card expires, instead of relying on service providers sending warnings when payments are already failing.


Life gets in the way, some things fall through the cracks. A good business will send warnings about expiring cards.


Sure. In this particular case it was "expired" early due to some random place guessing the number and the bank rightfully taking precautions.

I updated all of the places I remembered, but missed Hetzner and a few others. Only Hetzner didn't have their shit together enough to gracefully notify us. Or account support staff who were at all interested in assisting.


It's curious that you use expired to mean "payments were rejected before the expiration was reached".


Sure. Wrote that late at night when overly tired. There's nothing nefarious going on here.

I'm still not exactly sure of the correct terminology for the situation. I'd noticed two suspicious transactions on the credit card, rang up my bank about it, and we agreed they'd better generate a new credit card and kill the existing one.

I then contacted all of the places that I knew of to update them with the new credit card details. I missed Hetzner and (from rough memory) two others. Only Hetzner wasn't able to handle it correctly.


There are multiple warning levels and you should get email notifications. I happened to overlook those as well and also only noticed it when they turned off networking. However, that was two weeks after the invoice was due, and it got unblocked seconds after the payment went through.


I assumed I'd missed warnings as well, but when I actually checked (after fixing the issue, because priorities) there were indeed no warning emails/sms/etc at all sent.

Literally, no kind of notification, warnings, anything at all. Due to this, and their support team being incredibly unhelpful during the outage, they're now on my personal blacklist for literally everything.

So instead of strongly recommending them, which I used to do, we've migrated 95% of everything off Hetzner and I'm hanging out for it to be 100%. And I warn others away from them at every opportunity. Like here. :)

We will not be returning to Hetzner. Ever.


I was using a pre-charged account while waiting for a new bank account and credit card. I wasn't even hosting any VPS for a month or so, but Hetzner closed my account with no explanation, and I never got my money back. F*ck them, thieves.


Well that is indeed crazy, can totally understand your sentiment.


You can ask and explain to them what kind of traffic you'll have. I showed them the project I was migrating, and they opened the ports for me right from the start.



yep, it's the permanent nature of the recording put into the public sphere that is the game changer for me.

I accept I am visible in public to all who share a space but I do not accept that the ephemeral nature of my existence in that space should be violated.


I agree with you but some things are missing from the BP experience.

I've implemented many VPLs in video games and I've used Blueprints extensively. I've probably made all the classic "mistakes" designing VPLs, many of which are mentioned in this article. I don't think I am very good at designing VPLs despite having done it on and off for 30 years.

I think BPs are the best example of a VPL out there at the moment. Certainly in video games. However it still falls short of the ideals of VPLs.

Essentially BPs trick people into being programmers. They still have to understand the "ways" of programming (for loops, if/then, etc). With a little context switching and training they would probably be more productive with a text-based interface. So the abstraction BPs provide is very limited.

BPs are a general programming tool used for materials, gameplay logic, animation trees, etc. Because of this there are few, if any, high-level abstractions that relieve the user of the burden of programming. Don't get me wrong, this is hard, very hard, so I am not calling anyone out. It requires sitting down with a non-technical person and really understanding how they think and what they need. Turning that into something that isn't node + wire is hard. The fact that the industry has created technical artists to fill the void says to me that BPs are failing to a certain extent (and TAs can just use text-based programming, and do in many studios).

Overall I agree that the field of VPLs is stuck at a local minimum and the 10x productivity improvement for non-programmers is still elusive.


Isn't this desire nonsensical? A visual programming language will indeed still involve programming. You can't get away from the core principles of logic, event flow, etc., and that was never the point, imo. As I said, VPLs provide advantages other than that.

If you're looking for a GUI configuration tool instead of a VPL... those exist too. They take the form of declarative data editors.

StateTree editors, Animation Blueprints, and discrete state machines are some possible examples of visual editors that change app functionality while stripping away core programming concepts for simplicity. There are plenty of these floating around, but I feel like they "don't count" while simultaneously being exactly what is asked for.


There's a correspondence between where node-and-wires works best and where the problem statement prefers something like a circuit diagram because it's synchronization-heavy. Games need a lot of this, and so do audio apps. The problem is that as you do more compute on the workload and need more "plain old synchronous algorithms" to process it, that holds true relatively less often, so it's a solution disproportionate to what production tools actually need.

I did a survey of the popular VPL environments last year and found Blender's implementation notable for what it didn't do. It exposes node-and-wire UI elements and the scripting for them in Python; all further details are delegated to each subsystem. So shaders, video, etc. have no relationship within their programming model, just UI similarity.

Add to that the question of why programming languages always go for a plaintext representation: it's because of the tooling ecosystem. We have a huge investment in text editing and text processing, and in typing on standardized keyboard layouts, on speech-to-text and vice versa, and lately, on text generation language models. We have made that stuff really cheap and fast for everyone, in the same way that when Blender did node-and-wire, they elected to make the paradigm as a whole cheaper to reuse in a customized way, versus investing heavily in one particular implementation and trying to extend it to many scenarios.

I think part of the issue is that we design VPLs without taking any interest in draftsmanship - the traditional core skillset for working with "visual language". Instead we use mouse pointers and touchpads and treat it as a data entry problem that has to be structured right away instead of allowing sloppy syntax.


UE BPs do have a few abstractions available-- macros, interfaces, classes, data-only Blueprints. [1]

I think the greatest challenge of VPLs is how they don't have an inherent linear structure, so reading and understanding them for modification requires a nonlinear scan of a node soup. This puts them in the class of tools that struggle to make large programs readable, like Bash or Forth.

https://dev.epicgames.com/documentation/en-us/unreal-engine/...
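
For what it's worth, a common way to use those abstractions is to draw the line in C++ and hand designers a curated node; a rough sketch (the class and all names here are made up for illustration):

    // Sketch: expose one designer-safe entry point to Blueprints.
    #include "GameFramework/Actor.h"
    #include "MyPickup.generated.h"

    UCLASS(Blueprintable)
    class AMyPickup : public AActor
    {
        GENERATED_BODY()
    public:
        // Appears as a single node in the BP graph; the messy
        // implementation details stay in C++.
        UFUNCTION(BlueprintCallable, Category = "Pickup")
        void GrantToPlayer(class APawn* Collector);
    };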


Maybe the true abstraction layer is the BP market on Fab.


I think you are correct. I work in game dev. Almost all code is in C/C++ (with some in Python and C#).

LLMs are nothing more than rubber ducking in game dev. The code they generate is often useful as a starting point or to lighten the mood because it's so bad you get a laugh. Beyond that it's broadly useless.

I put this down to the relatively small number of people who work in game dev, resulting in a relatively small number of blogs from which to "learn" game dev.

Game Dev is a conservative industry with a lot of magic sauce hidden inside companies for VERY good reasons.


yep, in London it's pretty normal to be driving at 30 and be overtaken by a teenager on a scooter with no insurance, no protective equipment and not a care in the world. They can kill themselves for all I care, but they can easily kill a pedestrian or cyclist, which doesn't seem reasonable to me.


killing themselves is not the worst thing that can happen to them, and it's not that likely. Other possibilities involve long years of suffering in bed or in a wheelchair.

But e-bikes are heavier and will also likely make other people suffer.


> People should use the VCS that's appropriate for their project rather than insist on git everywhere.

A lot of people don't seem to realise this. I work in game dev, and SVN or Perforce is far, far better than Git for source control in this space.

In AA game dev a checkout (not the complete history, not the source art files) can easily get to 300GB of binary data. This is really pushing Subversion to its limits.

In AAA gamedev you are looking at a full checkout of the latest assets (not the complete history, not the source art files) of at least 1TB and 2TB is becoming more and more common. The whole repo can easily come in at 100 TB. At this scale Perforce is really the only game in town (and they know this and charge through the nose for it).

In the movie industry you can multiply AAA gamedev by ~10.

Git has no hope of working at this scale as much as I'd like it to.
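
For context, part of how Perforce copes at that scale is that a client workspace only syncs what its view maps, so nobody actually pulls the whole repo; a rough clientspec sketch (paths and names made up):

    # Sync only what this workspace needs; exclusions use a leading "-".
    Client: anna-gameplay
    View:
        //depot/game/main/src/...                  //anna-gameplay/src/...
        //depot/game/main/content/characters/...   //anna-gameplay/content/characters/...
        -//depot/game/main/content/cinematics/...  //anna-gameplay/content/cinematics/...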


Perforce gets the job done, but it's a major reason why build tooling is worse in games.

GitHub/GitLab is miles ahead of anything you can get with Perforce. People aren't pushing for git just because of its UX; they're pushing git so they can use the ecosystem.


What do you mean “build tooling is worse in games”? I’ve worked in games for 20 years and used both Perforce and Git extensively across small, medium, and large projects.

There’s nothing technically worse about perforce with regard to CI/CD, if that’s what you’re talking about, except that of course there are more options for git where you just click a button and enter your credentials and away you go. But that’s more a function of the fact that there aren’t as many options for perforce hosting and companies are more likely to host it internally themselves.

If companies are using perforce but aren’t running any CI/CD that’s because they’re lazy/small/don’t see the value.


You said it yourself, there are less options and that's worse. Swarm is a bad joke. Nothing is as integrated as gitlab/GitHub.


Right. Sure I agree with that. You have to do a lot more yourself. I just don't think it's that big a deal though but that's probably just me. Someone running Perforce has probably set that up themselves, and so they vaguely know what they're doing. So if they care about CI/CD they probably have the ability to set it up themselves. Personally I've used CruiseControl.NET, Jenkins, Buildbot, and custom in-house software and the first three support Perforce out of the box. I also don't mind the classic Swarm UI (I don't like the new UI) although I admit I do prefer the GitHub and GitLab UI!
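
In practice the CI glue tends to be a few lines of p4 CLI in whatever runner you use; a minimal sketch (the client name, depot paths, and build.sh are all hypothetical):

    #!/bin/sh
    set -e
    # Find the latest submitted change under the project path...
    CHANGE=$(p4 changes -m1 //depot/game/main/... | awk '{print $2}')
    # ...sync the build workspace to exactly that change...
    p4 -c ci-build-client sync //depot/game/main/...@"$CHANGE"
    # ...and hand the changelist number to the build for stamping.
    ./build.sh --changelist "$CHANGE"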


There is the old adage that every bit of friction increases the chance something won't be done.

Swarm feels like a minimal effort to me and just isn't as frictionless as GitHub. I'd rather have Swarm than not, but it's not great.


I've been thinking of using a git filter to split the huge asset files (which are internally just collections of assets bundled into 200MB-1GB files) into smaller ones. That way, when an artist modifies one sub-asset in a huge file, only the small change is recorded in history. There is an example filter that does this with zip files.

The above should work. But does git support multiple filters for a file? For example, first the asset-split filter above, and then storing the files in LFS, which is another filter.
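
As far as I know, gitattributes only applies a single filter driver per path (a second filter= attribute just overrides the first), but you can get the same effect by piping inside one driver. A sketch, where asset-split is the hypothetical splitter tool:

    # .gitattributes -- one driver per path
    *.pak filter=split-lfs

    # .git/config -- chain the (hypothetical) splitter into git-lfs
    [filter "split-lfs"]
        clean  = asset-split clean %f | git lfs clean -- %f
        smudge = git lfs smudge -- %f | asset-split smudge %f
        required = true

Note the smudge side runs in reverse order: LFS restores the real content first, then the splitter reassembles it.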


I mean, it might work, but you'll still get pull timeouts constantly with LFS. It's expensive to wait two or three days before you can start working on a project. Go away for two weeks and it will be a day before you're pulled up to date.

I hope this "new" system works but I think Perforce is safe for now.


When I was in my 20s I realized I had lost the thirst signal. I never felt thirsty. I guessed this was because I lived a comfortable life and I had lost this signal in the noise of modern life.

So I set about deliberately retraining myself. I stopped drinking everything but water (and beer, because life). I'd exercise (and sweat) and then drink water. I retrained my body/mind to savour the pleasantness of drinking water when dehydrated, and after a year of conscious effort I had more or less recovered the sense of "thirst" and would pre-emptively desire drinking water.

We are pretty simple machines.


Other languages internalise that complexity. C leaves it bare, human-scale and understandable.

The speed at which you can write great C code often far outstrips other languages which are applicable to the problem domain.

All languages are a compromise, there are no silver bullets.

