Yes, I save an incredible amount of time. I suspect I’m likely 5-10x more productive, though it depends exactly what I’m working on. Most of the issues that you cite can be solved, though it requires you to rewire the programming part of your brain to work with this new paradigm.

To be honest, I don’t really have a problem with chunking my tasks, because I don’t really think about it that way. What I care about is chunks an AI can reasonably validate. Instead of thinking “what’s the biggest chunk I could reasonably ask an AI to solve?” I think “what’s the biggest piece I could ask an AI to do that I can write a script to easily validate once it’s done?” Allowing the AI to validate its own work means you never have to worry about chunking again. (OK, that’s slight hyperbole: validation is most of my concern, and a secondary one is that I try not to let it go for more than 1000 lines.)

For instance, take the example of an AI rewriting an API call to support a new db library you are migrating to. In this case, it’s easy to write a test case for the AI. Just run a bunch of cURLs against the existing endpoint that exercise the existing behavior (surely you already have these because you’re working in a well-tested code base, right? right?!?), and then make a script that verifies the results of those cURLs have not changed. Now, instruct the AI to run that script and not stop until the results are character-for-character identical. That will almost always get you something working.
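
A minimal sketch of what that validation script could look like (the endpoint URLs and snapshot file name are made up for illustration; assumes Node 18+ for built-in fetch):

  // validate.ts -- record the old endpoint's responses, then diff after the migration.
  // Run with MODE=record before the change, then plain afterwards.
  import { readFileSync, writeFileSync, existsSync } from "node:fs";

  const ENDPOINTS = [  // hypothetical routes under test
    "http://localhost:3000/api/users?page=1",
    "http://localhost:3000/api/users/42",
  ];
  const SNAPSHOT = "snapshots.json";

  async function main() {
    const results: Record<string, string> = {};
    for (const url of ENDPOINTS) {
      const res = await fetch(url);
      results[url] = `${res.status}\n${await res.text()}`;
    }
    if (process.env.MODE === "record" || !existsSync(SNAPSHOT)) {
      writeFileSync(SNAPSHOT, JSON.stringify(results, null, 2));
      console.log("Baseline recorded.");
      return;
    }
    const baseline = JSON.parse(readFileSync(SNAPSHOT, "utf8"));
    for (const url of ENDPOINTS) {
      if (results[url] !== baseline[url]) {  // character-for-character comparison
        console.error(`MISMATCH: ${url}`);
        process.exit(1);  // non-zero exit tells the agent to keep iterating
      }
    }
    console.log("All responses identical.");
  }

  main();

The AI's loop then becomes: run the script, fix, repeat until exit code 0.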

Obviously the tactics change based on what you are working on. In frontend code, for example, I use a lot of Playwright. You get the idea.
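
For instance, a frontend check in that spirit might look like this (the route and selectors are invented for the example):

  // smoke.spec.ts -- assert the page still renders the same data after a change.
  import { test, expect } from "@playwright/test";

  test("user list still renders", async ({ page }) => {
    await page.goto("http://localhost:3000/users");  // hypothetical route
    await expect(page.getByRole("heading", { name: "Users" })).toBeVisible();
    await expect(page.getByTestId("user-row")).toHaveCount(20);  // expected page size
  });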

As for code legibility, I tend to solve that by telling the AI to focus particularly on clean interfaces, and being OK with the internals of those interfaces being vibecoded and a little messy, so long as the external interface is crisp and well-tested. This is another very long discussion, and for the non-vibe-code-pilled (sorry) it probably sounds insane; it's easy to lose one's audience on such a polarizing topic, so I'll keep it brief. One key thing to understand about AI is that it makes the cost of writing unit tests and e2e tests drop significantly, and I find this (along with remaining disciplined and keeping interfaces crisp) to be an excellent tool in the fight against the increased code complexity that AI tools bring. So, in short, I deal with legibility by having a few really, really clean interfaces/APIs that are extremely readable, and then testing them like crazy.
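
To make the "crisp interface, messy internals" split concrete, here's the shape of what I mean (the names are invented):

  // The part a human reviews: a small, typed, documented surface.
  export interface Invoice {
    id: string;
    amountCents: number;
    issuedAt: Date;
  }

  export interface InvoiceStore {
    /** Returns the invoice, or null if it doesn't exist. */
    get(id: string): Promise<Invoice | null>;
    /** Persists the invoice and returns its id. Idempotent on id. */
    save(invoice: Invoice): Promise<string>;
  }

  // Whatever class implements InvoiceStore can be vibecoded and messy;
  // the contract above is what gets the heavy test suite
  // (round-trips, null cases, idempotency).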

EDIT

There is a dead comment that I can't respond to that claims I am not a reliable narrator because I have no A/B test. Behold, though: I am the AI-hater's nightmare, because I do have a good A/B test! I have a website that sees a decent amount of traffic (https://chipscompo.com/). Over the last few years, I tried a few times to modernize and redesign it, but these attempts always failed because the website is pretty big (~50k loc) and I could never fit the work into a single week of PTO.

This Thanksgiving, I took another crack at it with Claude Code, and not only did I finish an entire redesign (basically touching every line of frontend code), but I also got in a bunch of other new features, like a forgot-password flow and a suite of moderation tools. I then IaC'd the whole thing with Terraform, something I had only dreamed about doing before AI! Then I bumped React a few major versions, bumped TS about 10 years, etc., all with the help of AI. The new site is live and everyone seems to like it (well, they haven't left yet...).

If anything, this is actually an unfair comparison, because it was more work for the AI than it was for me when I tried a few years ago: my dependencies had become more and more out of date as the years went on. That was a real pain for the AI, but I eventually managed to work through it.


Well, this is (one of) my areas, so here goes. DSLs are a concept, not an implementation. As implemented they can vary from chained procedure calls to actual sublanguages with lexers and parsers (and I tend to consider the latter to be 'proper' DSLs, but that's just my view).

To have a 'proper' DSL I reckon you need two things: an understanding that a thing can and should be broken out into its own sublanguage, and the ability to do so. The first takes a certain kind of nous, or common sense. The latter requires knowing how to construct a parser properly, plus some knowledge of language design.
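
To make the second skill concrete, here's a toy recursive-descent parser for a tiny arithmetic sublanguage (illustrative only; no whitespace handling, error recovery, or AST, which is where the books below come in):

  // Grammar: expr   -> term (('+'|'-') term)*
  //          term   -> factor (('*'|'/') factor)*
  //          factor -> NUMBER | '(' expr ')'
  function parse(src: string): number {
    let pos = 0;
    const peek = () => src[pos];
    const eat = (c: string) => {
      if (src[pos] !== c) throw new Error(`expected '${c}' at ${pos}`);
      pos++;
    };
    function factor(): number {
      if (peek() === "(") { eat("("); const v = expr(); eat(")"); return v; }
      const start = pos;
      while (pos < src.length && /[0-9.]/.test(src[pos])) pos++;
      if (start === pos) throw new Error(`expected number at ${pos}`);
      return parseFloat(src.slice(start, pos));
    }
    function term(): number {
      let v = factor();
      while (peek() === "*" || peek() === "/") {
        const op = src[pos++];
        v = op === "*" ? v * factor() : v / factor();
      }
      return v;
    }
    function expr(): number {
      let v = term();
      while (peek() === "+" || peek() === "-") {
        const op = src[pos++];
        v = op === "+" ? v + term() : v - term();
      }
      return v;
    }
    const result = expr();
    if (pos !== src.length) throw new Error(`trailing input at ${pos}`);
    return result;
  }

  // parse("2*(3+4)") === 14

Note how the grammar in the comment maps one-to-one onto the functions: that correspondence is the whole trick of recursive descent.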

Knowing how to write a parser is not particularly complex, but since the industry's hiring requirements are driven more by knowing 'big data' frameworks than by stuff that is often more useful, well, that's what you get, and that includes people who try to parse XML with regular expressions (check out this classic answer: <https://stackoverflow.com/questions/1732348/regex-match-open...> Edit: if you haven't seen this, check it out, cos it's brilliant).

I think this reflects a fundamental problem in software development: the market doesn't know what's actually needed to solve real business problems.

++++

Edit: some reading material

https://www.amazon.co.uk/Language-Implementation-Patterns-Do...

https://www.amazon.co.uk/Definitive-ANTLR-Reference-Domain-S...

https://www.amazon.co.uk/yacc-Nutshell-Handbook-Doug-Brown/d...

They're all worth investing the time in.

