> Parsing a known HTML structure, transforming a table, running a financial simulation.
Transforming an arbitrary table is still hard, especially a table on a webpage or in a document. Sometimes I even struggle to find the right library. The effort does not seem worth it for a one-off transformation either. An LLM can be a great tool for these tasks.
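For concreteness, here's the kind of one-off I mean, as a rough sketch rather than a recommendation. It assumes the public OpenAI chat-completions endpoint, an OPENAI_API_KEY in the environment, and jq; the model name, file name, and prompt are all illustrative.

    # Build the request JSON with jq so the HTML gets escaped safely,
    # then ask the model for CSV and print just its reply.
    jq -n --arg t "$(cat table.html)" '{
      model: "gpt-4o-mini",
      messages: [{role: "user",
        content: ("Convert this HTML table to CSV. Output only the CSV.\n\n" + $t)}]
    }' | curl -s https://api.openai.com/v1/chat/completions \
           -H "Authorization: Bearer $OPENAI_API_KEY" \
           -H "Content-Type: application/json" \
           -d @- \
       | jq -r '.choices[0].message.content'

You'd still want to eyeball the output, but for a one-off that beats hunting down and learning the right parsing library.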
Maybe we can define what "mainstream" means? Maybe this is too anecdotal, but my personal experience is that most engineers are tinkerers. They love building stuff and are good at it, but they simply are not into math-like rigorous thinking. Heck, it's hard to even motivate them to use basic math like queuing theory and stats to help with their day-to-day work. I highly doubt they would spend time picking up formal verification, even with the help of AI.
I'm not sure if people are playing innocent or are really this naive. AI can generate apps that are similar to what people have built many times before. But it's still a far cry for an AI to generate a brand-new system that has no similar predecessors the AI has seen. Besides, a service as sophisticated as Spotify has at least thousands of design points that may require thousands of pages to lay out in a detailed spec. Yet we expect a person to magically build the app from a few paragraphs of prompts? Of course, it's possible for someone to use AI as a helper to generate the code incrementally, but I'd assume that's not what the author means by "AI-Generated Apps", correct? If that is exactly what the author meant, the real question is why: what's the incentive behind the probably multi-person-year effort to replicate an existing system?
I'd even go one step further: it does not have to be enforceable at all. This has to do with teens' psychology. For whatever reason, kids fight their parents but listen to their schools and government a lot more. Of course there are exceptions, but I'm talking about the trend. The kids in my school district were generally angry at their parents when they couldn't get a smartphone while their peers did. However, when my school district introduced a strict ban on electronic devices in school, the kids quieted down and even bought into the very reasons their parents had been giving: attention is the most precious asset one should cherish. Kids complained that the problem sets from RSM (Russian School of Mathematics) were too hard and unnecessary (they are not, by the standards of any Asian or Eastern European country), yet they stopped complaining when the school teacher ramped up the difficulty of the homework.
So, when the government issues this ban, the kids would listen to their parents a lot more easily.
Absolutely this. We have limits in place for usage of a bunch of this sort of stuff, from not at all to up to an hour, and we'd be constantly tested and pushed on these limits. Constantly. "But my friends are..." is the usual start to it.
Government says you can't chat with just anyone in Roblox, and suddenly it's accepted that this is just what it is. Not only that, but limits and rules on how much and when you can watch YouTube and the like are also suddenly more acceptable.
So far what my kids are saying is that this is broadly true across their peer groups. The exceptions are just that, exceptions. The peer pressure to be in on it all is lessened. And in turn, that means less push-back on boundaries set by us, because it's less of a big deal.
(And I face less of a dilemma of how much to allow to balance out the harm of not being part of the zeitgeist vs. the harm of short form, mega-corporation curated content).
So many people are looking at this from a technical standpoint, asking how watertight or perfect it's going to be.
But there is a large psychological component to this that helps parents, and that is exactly what a number of parents I've spoken to like about it.
It's not just about the current generation either: for the next wave of kids, who will have grown up under these laws, the psychology will have changed.
This also works with other things such as alcohol and (old school) smoking (neither of which has watertight control, but the control is still very effective).
> continuously trains their replacement, and
> generally makes themselves redundant.
This is the advice we get from many business books and leadership writings. However, more often than not I see leaders in a company do the opposite to keep their power. There is a missing link in making the good advice actionable: what can a leader do to keep becoming more valuable while making themselves "redundant"?
I suspect that ClickHouse will go down the same path. They already changed their roadmap a bit two years ago[1], and had good reasons: if the open-source version does too well, it will compete with their cloud business.
> That’s not to say that there aren’t people within those companies who care about employee development, but the system isn’t set up for that to be the companies’ top priority.
There has been a cultural shift too. I don't know when it started, but employees in tech companies, at least, became more and more obsessed with promotions. So-called career development is nothing but a coded phrase for getting promoted. Managers use promotion as a tool to retain talent and to expand their territories. Companies adapted to this culture too. As a result, people development increasingly became lip service.
It's really handy for switching between projects that are on different Java versions, plus tools like IntelliJ pick up on the correct version via the SDKMAN! configuration as well.
OK but again what's the use case for that? Can you not just use a new version of Java for all your projects, at most setting -source/-target in your project configuration? Certainly in the old days it was always backwards compatible, at least enough that you could develop under a current version and maybe just have a CI job to check that your code still worked on old versions.
In a large organization with hundreds of business-critical Java applications you can't bring them all up to one version at once.
It’s quite normal to have multiple versions of the JDK being used by different applications.
It's not only normal, it is completely to be expected. Even if you have only one project, there will come a time when one branch will be used to test the jump from 17 to 24 or something like that, so you'll work on that, but also switch back to the master branch when a colleague needs some help there.
    sdk use java xxx
And done. A new LTS is released? Just sdk install it. Set the one you use most as default, and focus on the project instead of managing JDKs.
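For anyone who hasn't tried it, the whole workflow is only a handful of commands. A quick sketch (the version identifiers below are illustrative; sdk list java shows what's actually available):

    sdk list java                  # browse available distributions/versions
    sdk install java 21.0.5-tem    # install a JDK
    sdk default java 21.0.5-tem    # the one you use most
    sdk use java 17.0.12-tem       # switch only the current shell
    sdk env init                   # write a .sdkmanrc pinning a project's JDK
    sdk env                        # switch to whatever .sdkmanrc says

The per-project .sdkmanrc is also what lets tooling pick up the right version automatically.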
Oh, and very occasionally you'll actually get hit by a bug within one Java major version (like when they removed historic offsets from the timezone database included in the JVM, that was fun). At that point being able to switch between minor versions easily is quite nice.
> there will come a time when one branch will be used to test the jump from 17 to 24 or something like that, so you'll work on that, but also switch back to the master branch when a colleague needs some help there.
But can you not just install 24 on your dev box and use that to work on either branch, maybe with -source/-target arguments? It never used to be a problem to develop with a newer JVM even if it was an older project.
Note: Java compiler versions from 9 onwards have lost the ability to -target 1.5 and earlier.
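For what it's worth, the develop-on-new, target-old route looks like this. A sketch; the file name is illustrative:

    javac --release 17 Main.java            # JDK 9+: checks both the API and the bytecode level
    javac -source 17 -target 17 Main.java   # older spelling; the compiler warns because
                                            #   you're still compiling against the new JDK's APIs

The --release flag is the safer of the two, since -source/-target alone will happily let you compile against APIs that don't exist on the older runtime.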
Sometimes you still need Java 8 to compile for super old programs — think decades old IoT devices too small to handle newer JVMs that you still need to do the occasional minor update for.
But really, sdkman is just nice for being able to quickly install and quickly switch JVMs without worrying about any opinions the OS package manager may have. If I want an old JRE 8, do I need to fuss around with finding the right package repo for my arch etc., or should I just use sdkman and be done with it?
Ideally, yes. In the real world? Nope. The longer you work on some project, the bigger the chance you will run into some edge case determined by the major version of the JDK. It just happens.
Even if you do all developing on the latest LTS, you will want to be able to switch to whatever version is running on the dev or prod application servers to replicate a bug as closely as possible.
By the way, you are ignoring the case I mentioned where a JDK bug happened between one minor version and the next.
> Even if you do all developing on the latest LTS, you will want to be able to switch to whatever version is running on the dev or prod application servers to replicate a bug as closely as possible.
Occasionally, sure. But is it really frequent enough to worry about?
> By the way, you are ignoring the case I mentioned where a JDK bug happened between one minor version and the next.
I am, because I don't see why it's a case you'd worry about. Just install the version without the bug.
I mean sure, I can see some minor advantages to making it easy to change JDK versions. But for how often you want to do that, it really doesn't seem worth the overhead of having another moving part in your setup.
Just install the version without the bug? Have you never developed software in a company?
Sometimes the JDK version you are targeting cannot be changed at that time, for a variety of reasons, most beyond your control.
Sometimes the JDK contains or triggers a bug you need to work around and test in various minor versions.
Sometimes you need to switch to the exact minor version used in production.
Often you need to switch JDKs even within a single project, more often with several projects.
In the years that I've used SDKMan, there have been hundreds of days on which I invoked it more than once to switch JDK versions in a terminal (along with long stretches where whatever I had set it to was fine for weeks on end). All painless, quick, and easy. Why wouldn't anyone involved in developing Java in a corporate setting make life easier on themselves? Those are not 'minor advantages'; those are major time and mental-overhead savers. It's a trivial tool to install and maintain too, with almost no overhead for me to worry about. And if it breaks? Then the last version I configured will just keep working, and I can spend maybe half an hour setting up an alternative. That hasn't happened yet, so for a tool I've been using for a decade or so, that's pretty good.
I've hit JVM bugs in my professional career, sure. I just don't see the scenario where you'd need to be switching back and forth more than occasionally.
If you're running x.0.3 in production, you'd run x.0.3 locally. If there's a JVM bug that your application hits on x.0.3, either it's a showstopper in which case you'll find a way to upgrade/fix production quickly, or it's something you can tolerate, in which case you can tolerate it in local dev too. If you decide it's time to upgrade to x.0.4, you'd upgrade to x.0.4 locally, test a bit, then upgrade to x.0.4 in production. What's the scenario where you need to keep switching between x.0.3 and x.0.4 locally? You upgraded application Y to x.0.4, then discovered that application Z hits a showstopper bug and needs to stay on x.0.3? That's not a situation that you let fester for months, you either fix or work around the bug pretty quickly, or you decide that x.0.4 is too buggy and put everything back to x.0.3, and for that short period sure theoretically you would want to develop application Y under x.0.4 but the risk of developing it under x.0.3 is pretty damn small.
I get the argument that the cost is pretty low. It's just that this is addressing something that doesn't really feel like a problem to me, and something that I don't think we should accept being a problem in the first place. The JVM always used to be something you could upgrade almost fearlessly, and I think there was a lot of value in that.
I've been lucky enough that the large organisations I worked for generally had policies and enforcement to ensure that all applications were kept current. It's more initial effort but it pays dividends.
But even if you don't have that, most people work on at most a handful of apps at a time, and again I would defer checking against a specific version to CI most of the time rather than switching around which JVM I'm using locally, unless your code was very tightly coupled to JVM internals somehow.
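Concretely, the CI check I have in mind is something like the sketch below (version strings and the build command are illustrative; it assumes sdkman is installed on the runner, with sdkman_auto_answer=true set in ~/.sdkman/etc/config so installs don't prompt):

    # Run the test suite against each JDK we still claim to support.
    source "$HOME/.sdkman/bin/sdkman-init.sh"
    for v in 11.0.25-tem 17.0.13-tem; do
        sdk install java "$v"    # skipped if already present
        sdk use java "$v"        # affects only this shell
        ./gradlew test
    done

That keeps the old-version checking out of the local dev loop entirely.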
What is the definition of "soaring"? The charts in the article showed that the percentage of companies adopting AI for automation has increased 3x. At least 40% of companies pay for GenAI, and at least 10% of employees use GenAI daily. Combined with the fact that companies like OpenAI and Anthropic frequently run out of capacity, how is AI use not soaring?
- If Microsoft bundles Copilot into their standard Office product, you become a company that pays for AI even if you didn't opt in
- Accidentally tapping the AI mode in Google search counts as an AI search. DDG doesn't even wait for you to tap and triggers an AI response. Still counts as AI use even if you didn't mean to use it
- OpenAI, Google and Microsoft have been advertising heavily (usage will naturally go up)
- Scammers using GenAI to scam increases AI usage and GenAI is GREAT for scammers
- Using AI after a meeting to get a summary is nice but not enough to make a visible impact on a company's output. Most AI usage falls into this bucket
This tech was sold as civilisation-defining. Not GPT-X, but the GPT that is out now. Tech that was "ready to join the workforce", while the reality is that these tools are not reliable in the sense he implied. They are not "workers" and won't change the output of your average company in any significant way.
Sweet talking investors is easy, but walking the talk is another thing altogether. Your average business has no interest or time in supervising a worker that at random times behaves unpredictably and doesn't learn not to make mistakes when told off.
Those two sets of facts can be true at the same time.
40% of companies and 10% of employees can be using AI daily, but just for a small amount of tasks, and that usage can be leveling off.
At the same time, AI can be so inefficient that servicing this small amount of usage is running providers out of capacity.
This is a bad combination because it points to the economic instability of the current system. There isn't enough value to drive higher usage and/or higher prices and even if there was, the current costs are exponentially higher.
There was a dip in the first chart in the article; it also shows something like 9% of companies using it.
What I wonder is, beyond "using" AI, what value are companies actually seeing? Revenue at both OpenAI and Anthropic is growing rapidly at the moment, but it's not clear whether individual companies are really growing their usage, or whether it's everyone starting to try it out.
Personally, I have used it sparingly at work, as the lack of memory seems to make it quite difficult to use for most of my coding tasks. I see other people spending hours or even days trying to craft sub-agents and prompts, but not delivering much, if any, output above average. Output that looks correct but really isn't causes a number of headaches.
For the VCs, one issue is the constant increase in compute. Currently it looks to me like every new release is only slightly better, while the compute and training costs increase at the same rate. The AI companies need the end users to need their product so much that they can significantly raise prices. I think this is what they want to see in "adoption": demand so high that they can see a future of increasing prices.
I don't want to be all "did you read the article?" since that's against guidelines, but the text of the article (the stuff in between the graphics and ads) is kind of about exactly that.
Adoption was widespread at first but seems to have hit a ceiling and stayed there for a while now. Meanwhile, there's been little evidence of major changes to net productivity or profitability where AI has been piloted. Nobody is pulling away with radical growth/efficiency for having adopted AI, and in fact the entire market of actual goods and services is mostly still just stagnating outside of the speculative investment being poured into AI itself.
Investment isn't just about making a bet on whether a company/industry will go up or down, but about making the right bet about how much it will do so over what period of time. The scale of AI investment over the last few years was a bet that AI adoption would keep growing very, very fast and would revolutionize the productivity and profitability of the firms that integrated it. That's not happening yet, which suggests the bet may have been too big or too fast, leaving a lot of investors in an increasingly uncomfortable position.
I get confused about the word "adoption". By adoption, is it meant that a company tried AI, determined it useful, and continues to use it? Just trying something out is not adoption in my mind. Companies try and abandon things all the time.
It has been my experience that technology has to perform significantly better than people do before it gets massively adopted. Self-driving cars come to mind. Tesla has self-driving that almost works everywhere, but Waymo has self-driving that really works in certain areas. Adoption rates among consumers have been much higher for Waymo (I was surrounded by 4 yesterday), and they are expanding rather rapidly. I have yet to see a self-driving Tesla.
Companies are shoving AI into everything and making it intrusive into everyone's workflow. Thus they can show how "adoption" is increasing!
But adoption and engagement don't equal productive, useful results. In my experience they simply don't, and the bottom is going to fall out of all these adoption metrics when people see the productivity gains aren't real.
The only place I've seen real utility is coding. All other tasks, such as Gemini for document writing, produce something that's about 80% OK and 20% errors and garbage. The work of going back through with a fine-toothed comb to root out the garbage is actually more work and less productive than simply writing the darn thing from scratch.
I fear that the future of AI-driven productivity is going to push a mountain of shoddy work into the mainstream. Imagine if the loan documents for your new car had all the qualities of a spam email. It's going to be a nightmare for the administrative world to untangle what is real from the AI slop.
Honest question: in a city, we expect the government to do its job: catch the bad guys and prosecute them. Why is it not the same when it comes to an online platform? The platforms, Roblox included, do invest millions a year to set up safety teams and build moderation systems. The systems are not perfect, and I doubt they ever will be, just as not all crimes get solved. So why do we expect platforms to do a job much better than the real justice system, if not an outright perfect one?