I actually really like the idea of a Digital Services Provider Ombudsman, who you can go to if you feel like you've been wronged by a big tech corp. They have a "way in" that consumers potentially don't, and they have the capacity to levy fines in certain circumstances. I love this! What's preventing this from happening, other than no governmental pressure to make it happen? I might write to my MP...
> Running someone else's patch set of Arch is the easiest way to have a terrible Linux experience. Having a nice interface to lull people into believing that what they are getting is a professional product, and then handing them a fundamentally broken system, where some hobbyists have patched a proper Linux distro so badly that you are not even allowed to ask for help on the Arch forum, is downright devious and represents the worst of the Linux world.
Except, this isn't the experience for the majority of users moving to Cachy, Bazzite, Zorin, whatever. What they're getting is a fresh, usable experience specifically in the "flavor" they care about.
Linux, and especially Arch, has an image problem, and it's the reason, despite how good these base distros might be, that people aren't coming. It takes a clever bit of branding and a marginalisation of all the gatekeeping (just like you're trying to do right now) to let users finally think "actually, maybe this is something I can use".
>Except, this isn't the experience for the majority of users moving to Cachy, Bazzite, Zorin, whatever.
Yes, but it is the experience they will inevitably have once these differences result in their OS being fundamentally broken and there is nobody to help them.
>It takes a clever bit of branding and a marginalisation of all the gatekeeping (just like you're trying to do right now) to let users finally think "actually, maybe this is something I can use".
Hilariously, giving people a fundamentally broken OS, which they chose based on superficial criteria, is the best gatekeeper imaginable. Once the inevitable happens and their distro is totally trashed, they will never use Linux for anything again.
If you want people to have a good long-term experience, give them a well-supported mainstream distro instead of a fundamentally broken Arch patch set.
>"actually, maybe this is something I can use".
Which is exactly the wrong thought. No, the fundamentally broken Arch derivative you are trying to use is much, much harder to use than Fedora.
But you won't get them to understand these points unless you're willing to fix the image problem and then invest in better branding. Telling people they're wrong doesn't sell things.
New users shouldn't need to understand these points. They should simply be advised not to use any of these distros; they don't need to understand the reasons beyond the fact that these are poorly supported projects that will break their OS.
You keep saying it’s fundamentally broken. That appears to be inconsistent with virtually all of the first hand accounts in this thread. You come across as intransigent.
Congrats to the team at Pipedream. We started using the product quite early on and now I would say it's an integral part of some of our systems. I don't know much about Workday but to me the connection between the two orgs, or gauging how they will complement one another, isn't super obvious. It makes me nervous. Tod, please don't let Workday erode what Pipedream has become. Please don't let them enshittify it. Please don't let them make confusing changes to the plans, in an attempt to extract more value from us, your customers. We're people, not spectres with wallets. Pipedream is good at what it does. Focus on being good and everyone will be happy.
In most companies I've been a part of, including multiple >$1B tech companies, the CTO's focus is not on the engineering. That's the job of a VP Engineering or some similar position.
The CTO (who will sometimes have a "CTO office") is there to work alongside engineering, investigating new technologies and ideas beyond what the engineering organization would otherwise have done day to day. They are also an authority on all technology in the company but are not in the engineering "chain of command".
That said, this is not universal; there are organizations where the CTO does lead the engineering organization. I think that's suboptimal, because there is always going to be tension between the day-to-day and the broader scope, and those should be different roles.
In a startup, it is more common for a CTO to lead engineering because there is not yet enough to justify having both a VP Eng and a CTO and perhaps most of the work is around figuring out technologies. But as the company grows it makes sense to separate those functions.
I've seen both. A CTO office that also leads engineering--typically via a direct report to the CTO--and an organization where the CTO is largely an external evangelist (typically with a small staff) while engineering is a separate organization--though hopefully aligned. The view here where CTO is also the head of day-to-day engineering operations and technical vision is more of a small company/startup thing. The two are often separated to at least some degree at larger operations.
This description is accurate to what I have seen and what I do. I'm the CTO of a >$1B tech company, and my role is focused on technology innovation, which includes evaluating and prototyping new tech. In my particular case the role also includes the operation of our technology, because that is very central to our business - and also extremely focused on high reliability.
When I was CTO of my startup I had far more direct engineering development work, but that is typical in the building stage.
As for the core of this post, the one thing I do agree with is the ability of the CTO to actually be technical. I write code all of the time, but not for our products. The goal is to remain both technically proficient but also focus that proficiency on leadership.
There is a big leap between them not being the sole person responsible for technical decisions and them not even necessarily having a seat at the table for technology direction. The former is understandable; the latter is quite surprising.
I'm not sure what I wrote that's contrary to any of that? Maybe I shouldn't have used the word "probably"? There are a lot of people responsible for the technical direction of a large company of which the CTO is important but hardly the only one.
Ugh, there is just something so satisfying about developer cynicism. It gives me that warm, fuzzy feeling.
I basically agree with most of what the author is saying here, and my feeling is that most developers are at least aware that they should resist technical self-pleasure in pursuit of making sure the business/product they're attached to is actually performing. Are there really people out there who still reach for Meta-scale by default? Who start with microservices?
> Are there really people out there who still reach for Meta-scale by default? Who start with microservices?
Anecdotally, the last three greenfield projects I was a part of, the Architects (distinct people in every case) began the project along the lines of "let us define the microservices to handle our domains".
Every one of those projects failed, in my opinion not primarily owing to bad technical decisions - but they surely didn't help either by making things harder to pivot, extend and change.
It kinda started with Clean Code. I remember some old colleagues walking around with the book in their hands and deleting ten-year-old comments in every commit they made: "You see, we don't need that anymore, because the code describes itself". It made a generation (generations?) of software developers think that all the architectural patterns had been found, that we could finally do real engineering and just had to pick the one that fit the problem at hand! Everyone asked about the SOLID principles in interviews, because that's how real engineers design! I think "cargo cult" was being used at the time, too, to describe this phenomenon.
It was (is) bad. The worst part is that the majority of people pushing it haven't even read Clean Code. They've read a blog post by a guy who read a blog post by a guy who skimmed the book.
I don't buy the idea that people mainly reach for microservices for scalability or "pleasure" reasons though.
I personally reach for it to outsource some problems by using off the shelf solutions. I don't want to reinvent the wheel. And if everyone else is doing it in a certain way I want to do it in the same way to try to stand on the shoulders of giants and not reinvent everything.
I needed to build an internal admin console, not super-scalable, just a handful of business users to start. The SQL database it would access was on-premises, but might move to the cloud in future. Authorized users needed single sign-on to their Azure-based active directory accounts for login. I wanted to do tracing of user requests with OpenTelemetry or something like.
At this point in my career, why wouldn't I reach for microservices to supply the endpoints that my frontend calls out to? Microservices are straightforward to implement with NodeJS (or any other language, for that matter). I get very straightforward tracing and Azure SSO support in NodeJS. For my admin console, I figured I would need one backend-for-frontend microservice that the frontend would connect to, plus a domain service for each domain that needed to be represented (with only one domain to start). We picked server technologies and frameworks that could easily port to the cloud.
So two microservices to implement a secure admin console from scratch, is that too many? I guess I lack the imagination to do the project differently. I do enjoy the "API First" approach and the way it lets me engage meaningfully with the business folks to come up with a design before we write any code. I like how it's easy to unit/functional test with microservices, very tidy.
Perhaps what makes a lot/most of microservice development so gross is misguided architectural and deployment goals. Like, having a server/cluster per deployed service is insane. I deploy all of my services monolithically until a service has some unique security or scaling needs that require it to separate from the others.
Similarly, it seems common for microservices teams to keep multiple git repos, one for each service. Why?! Some strange separation-of-concerns/purity ideals. Code reuse, testing, pull requests, and atomic releases suffer needless friction unless everything is kept in a monorepo, as the OP implied.
Also, building microservices in such a way that services must call other services completely misses the point of services - that's just creating a distributed monolith (slow!)
I made a rule on my team that the only services allowed to call other services are aggregation services, like my backend-for-frontend, which can launch downstream calls in parallel and aggregate the results for the caller. This kept the architecture very flat, with the minimum number of network hops and as much parallelism as possible, so it stayed performant. Domain services owned their data sources; no drama with backend data.
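The "only aggregators fan out" rule can be sketched like this (the domain services are hypothetical stubs written as plain async functions; in practice each would be an HTTP call): domain services never call each other, and the aggregator issues its downstream calls in parallel with `Promise.all` so the total latency is the slowest hop, not the sum.

```javascript
// Domain services: each owns its own data source, none calls another service.
async function profileService(userId) {
  return { userId, name: 'admin' }; // stubbed lookup
}

async function ordersService(userId) {
  return [{ orderId: 101, userId }]; // stubbed lookup
}

// Aggregation service (BFF-style): the only layer allowed to fan out.
// Downstream calls are launched in parallel, one network hop deep.
async function adminDashboard(userId) {
  const [profile, orders] = await Promise.all([
    profileService(userId),
    ordersService(userId),
  ]);
  return { profile, orders };
}
```

Keeping fan-out in one layer also makes the call graph trivial to trace: every request is either a leaf lookup or a single parallel aggregation.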
I see a lot of distributed monolith drama and abuse of NoSQL data sources giving microservices a bad reputation.
Without meaning to sound flippant or dismissive, I think you're overthinking it. By the sounds of it, agents aren't offering what you say you need. What they _are_ offering is the boilerplate, the research, the planning, etc. All the stuff that's ancillary. You could quite fairly say that it's in the pursuit of this stuff that details and ideas emerge, and I would agree, but sometimes you don't need ideas. You need solutions that are run-of-the-mill and boring.
I'm well aware that LLMs are more than capable enough to successfully perform straightforward, boring tasks 90% of the time. The problem is that there's a small but significant enough portion of time where I think a problem is simple and straightforward, but it turns out not to be once you get into the weeds, and if I can't trust the tool to tell me if we're in the 90% problem or the 10% problem, then I have to carefully review everything.
I'm used to working with tools, such as SMT solvers, that may fail to perform a task, but they don't lie about their success or failure. Automation that doesn't either succeed or report a failure reliably is not really automation.
Again, I'm not saying that the work done by the LLM is useless, but the tradeoffs it requires make it dramatically different from how both tools and humans usually operate.
Also assumes that the bots coexist on the server. My first thought would just be to connect them like any other client, with compute of their own so the server doesn't even know it's a bot.