
This kind of change also sets some different gears turning in my head. At $0.002/build-minute, some of our large software integration tests would cost us around 15-20 cents each. Some of our Ansible integration tests would be 5-10 cents, and we run something like 50-100 of those per day. Some deployments might cost us a cent or two.
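Back-of-envelope on those Ansible runs (a sketch only; I'm taking the midpoints of my own ranges above, nothing here is an exact bill):

    # Rough GHA metered cost for the Ansible integration tests alone.
    runs_per_day = 75        # midpoint of "50 - 100" runs per day
    cost_per_run = 0.075     # midpoint of "5 - 10 cents", in dollars
    daily = runs_per_day * cost_per_run
    print(f"${daily:.2f}/day, ${daily * 30:.2f}/month")
    # => $5.62/day, $168.75/month, which is already in the ballpark
    #    of a dedicated host's monthly rent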

Apples to oranges, naturally, but at those rates our infra Jenkins master would pay for its own hosting within a week of Ansible integration testing, compared to what GHA would cost. Sure, maintenance is a thing, but honestly, flinging Java, Docker and a few other things onto a build agent isn't the time-consuming part of maintaining CI infrastructure.

And I mean sure, everything is kinda janky on Jenkins, but everything falls into a predictable corridor of jank you get used to.





> Sure, maintenance is a thing, but honestly, flinging Java, Docker and a few other things onto a build agent isn't the time-consuming part of maintaining CI infrastructure.

Depending on your workplace, there's a whole extra layer of bureaucracy and compliance involved if you self-host things. I aggressively avoid managing any VMs for that reason alone.


Luckily, at work we are that layer of bureaucracy and compliance. I'm very much pushing the idea that managing a stateful, mutable Linux VM is a complex skill in its own right, and that it incurs toil that is both recurring and hard to automate. The best way to handle that is to put your use case into our config management and let us manage it.

Most modern development workflows should just pick up a host with some container engine and do their work in stateless containers, with external state like package caches mapped in. It's much easier for both sides in the majority of cases.
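A minimal sketch of that pattern, driving Docker from Python (the image name, paths and commands are placeholders, not anything we actually ship):

    import subprocess

    # Throwaway, stateless build container; only the package cache
    # survives between runs, mounted in from the host.
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", "/var/cache/ci/pip:/root/.cache/pip",  # mapped-in state
            "-v", "/srv/checkout:/src",                  # placeholder path
            "-w", "/src",
            "build-image:latest",                        # placeholder image
            "sh", "-c", "pip install -r requirements.txt && pytest",
        ],
        check=True,
    )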


> And I mean sure, everything is kinda janky on Jenkins, but everything falls into a predictable corridor of jank you get used to.

This is kinda where I am. No one really feels like they're selling a premium "just works" product. It's all jank. So why not the jank I chose, at the price I chose?

At the moment I'm self-hosting GitLab runners. It's jank. But it's free.


A while ago I set out to find a replacement for Jenkins: on-prem, with a comparable feature set. What I found out is that Jenkins is the worst, apart from all the others.

> And I mean sure, everything is kinda janky on Jenkins, but everything falls into a predictable corridor of jank you get used to.

Self-hosting Jenkins on an EC2 instance is probably going to result in a _better_ experience at this point. The GitHub Actions cache is barely better than just downloading assets directly, and with Jenkins you can trivially use a local disk for caching.

Or if you're feeling fancy and want more isolation, host a local RustFS installation and use S3 caching in your favorite tools.
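Since RustFS speaks the S3 API, the standard clients work against it; a hedged sketch with boto3 (the endpoint, credentials and bucket name are placeholder values):

    import boto3

    # Any S3-compatible endpoint works here, including a local RustFS.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rustfs.internal:9000",  # placeholder endpoint
        aws_access_key_id="ci",
        aws_secret_access_key="ci-secret",
    )

    def push_cache(archive: str, key: str) -> None:
        # Upload a build-cache archive, e.g. node_modules.tar.zst.
        s3.upload_file(archive, "build-cache", key)

    def pull_cache(key: str, archive: str) -> None:
        # Fetch a previously stored archive before the build starts.
        s3.download_file("build-cache", key, archive)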


Self-hosting on a host whose data actually persists is an even better experience, as it removes a lot of the tedium and workarounds, such as extracting, downloading and uploading caches. Get another host for redundancy and call it a day.

Hardware is getting cheaper and cheaper, but the fear-mongering around running a Linux machine has successfully prevented most businesses from reaping those cost reductions.


I repurposed old M1/M4 Mac Minis at my workplace into GitHub Actions runners. Works like a charm, and it made our workflows simpler and faster. Persisting the working directory between runs was a big performance boost.

> Hardware is getting cheaper and cheaper

Unfortunately not anymore, and not in the foreseeable future unless we see some AI investment corrections.


Complete persistence has its downsides, as you can start getting "path dependency". E.g. a build succeeds only because some images were pre-cached by a previous build.

But having an _option_ to not download everything every time is great. You can add periodic cache flushing, after all.
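Something like this on a timer is usually enough (the cache root and the one-week cutoff are assumptions, tune to taste):

    import shutil, time
    from pathlib import Path

    CACHE_ROOT = Path("/var/cache/ci")  # assumed cache location
    MAX_AGE = 7 * 24 * 3600             # flush anything older than a week

    # Drop stale entries so builds can't silently depend on artifacts
    # cached months ago ("path dependency").
    now = time.time()
    for entry in CACHE_ROOT.iterdir():
        if now - entry.stat().st_mtime > MAX_AGE:
            if entry.is_dir():
                shutil.rmtree(entry)
            else:
                entry.unlink()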


The last place I worked had long-running end-to-end tests that would take 30 minutes on GHA (compared to maybe 5 locally) on every PR. This change is going to make that a very expensive endeavour.

We host a fair bit of Terraform code in repos on GitHub, including the project that bootstraps and manages our GH org's config: permissions, repos, etc.

Hilariously, the official Terraform provider for GitHub is full of N+1 API call patterns (hotspots whose cost scales with every resource you manage), so even generating a plan requires a separate (remote, rate-limited) API call to check things like the branch protection status of every "main" branch, every Actions and PR policy, etc. As of today it takes roughly 30 minutes to do a full plan, which has to run as part of CI to make sure the pushed TF code is valid.
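The shape of the problem, as an illustrative Python sketch (the org name and token are placeholders, and the provider's internals differ, but the call pattern is the point):

    import requests

    ORG, TOKEN = "my-org", "ghp_..."   # placeholders
    headers = {"Authorization": f"Bearer {TOKEN}"}

    # One listing call (first page only; pagination elided) ...
    repos = requests.get(
        f"https://api.github.com/orgs/{ORG}/repos", headers=headers
    ).json()

    for repo in repos:        # ... then N repos ...
        requests.get(         # ... means N more rate-limited calls
            f"https://api.github.com/repos/{ORG}/{repo['name']}"
            "/branches/main/protection",
            headers=headers,
        )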

With this change, we'll be paying once to host our projects and again for the privilege of running our own code on our own machines when we push changes…and the bill will keep growing with our resource count, because the speed of their API sets an artificial lower bound on the runtime of our basic tests.

(To be fair, “slow” and “Terraform” often show up and leave parties at suspiciously similar times, and GitHub is far from the only SaaS vendor whose revenue goes up when their systems get slower.)



