Is it true that it's bad for learning new skills? My gut tells me it's useful as long as I don't use it to cheat the learning process and I mainly use it for things like follow up questions.
It is. It can be an enormous learning accelerator for new skills, for both adults and genuinely curious kids. The gap between low and high performers will explode. I can tell you that if I'd had LLMs, I would have finished schooling at least 25% faster while learning much more. When I say this on HN, some are quick to point out the fallibility of LLMs, ignoring that the huge majority of human teachers are many times more fallible. Granted, this is a privileged place where many have been taught by what is indeed the global top 0.1% of teachers and professors, so it makes sense that people respond this way. Another source of these responses is simply fear.
In e.g. the US, it's a huge net negative because kids probably aren't taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.
I can't tell you if this is the same inside e.g. China. I'm fairly sure it's not nearly as bad, though, as kids there derive much less benefit from cheating on homework/the learning process: they're more singularly judged on standardized tests where AI is not available.
I don't get this line of thinking. Never in my life have I heard the reasoning "replacing effort is the problem" when talking about children who are able to afford 24/7 brilliant private tutors. Having access to that has always been seen as an enormous privilege.
Having an actual human who is a "brilliant private tutor" is an enormous privilege. A chatbot is not a brilliant private tutor. It is a private tutor, yes, but if it were human it would be guilty of malpractice. It hands out answers but not questions. A tutor's job is to cause the child to learn, to be able to answer similar questions. A standard chatbot's job is to give the child the answer, thus removing the need to learn. Learning can still happen, but only if the child forces it themselves.
That's not to say that a chatbot couldn't emulate a tutor. I don't know how successful it would be, but it seems like a promising idea. In actual practice, that is not how students are using them today. (And I'd bet that if you did have a tutor chatbot, that most students would learn much more about jailbreaking them to divulge answers than they would about the subject matter.)
As for this idea that replacing effort isn't a problem, I suggest you do some research, because that finding is everywhere. Talk to a teacher. Or a psychologist, who would call it "depth of processing" (a primary determinant of how much of something is retained, alongside frequency of exposure). Or just go to a gym and see how many people are getting stronger by paying 24/7 brilliant private weightlifters to do the lifting for them.
Regarding your concerns about tutor emulation, your argument seems to be that students use chatbots as a way to cheat rather than as a tutor.
My pushback is that it's very easy to tell a chatbot to give you hints that lead to the answer, and to get deeper understanding by asking follow-up questions, if that's what you want. Cheating vs. putting in the work has always been a choice students have to make, and I don't think AI is going to change the number of students making each choice (or if it does, it won't be by a huge percentage). The gap in skills between the groups will grow, but there will still be a group of people who became skilled because they valued education and a group that cheated and didn't learn anything.
> A standard chatbot's job is to give the child the answer, thus removing the need to learn.
An LLM's job is not to give the child the answer (implying "the answer to some homework/exam question"); it's to answer the question that was asked. A huge difference. If you ask it to ask a question, it will do so. Over the next 24 hours as of today, December 5th 2025, hundreds of thousands of people will write a prompt that includes exactly that: "ask me questions".
> Learning can still happen, but only if the child forces it themselves.
This is literally what my original comment said, although "forcing" it is a purely negative framing; rather, "learning can still happen, if the child wants it to". See this:
> In e.g. the US, it's a huge net negative because kids aren't probably taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.
I never claimed that replacing effort isn't a problem, either; just that this downside has never been brought up in the context of access to a brilliant human tutor, yet is suddenly an impossible-to-overcome issue when it comes to LLMs.
I learnt the most from bad teachers#, but only when motivated. I was forced to go away and really understand things rather than settle for a sufficient understanding from the teacher. I had to put much more effort in. Teachers don't replace effort, and I see no reason LLMs will change that. What they do, though, is reduce the time spent finding the relevant content, but I expect at some poorly defined cost.
# The truly good teachers were primarily motivation agents, providing enough content, but doing so in a way that meant I fully engaged.
I think what it comes down to, and where many people get confused, is separating the technology itself from how we use it. The technology itself is incredible for learning new skills, but at the same time it incentivizes people not to learn. Just because you have an LLM doesn't mean you can skip the hard parts: doing textbook exercises and thinking hard about what you are learning. It's a bit similar to passively watching YouTube videos. You'd think that having all these amazing university lectures available on YouTube would make people learn much faster, but in reality it makes people lazy, because they believe they can passively sit there, watch a video, do nothing else, and expect that to replace a classroom education. That's not how humans learn. But it's not because YouTube videos or LLMs are bad learning tools; it's because people use them as a mental shortcut where they shouldn't.
I fully agree, but to be fair, these chatbots hack our reward systems. They present a cost/benefit ratio where, for much less effort, we get a much better result than doing it ourselves (assuming this is a skill not yet learned). I think the analogy to calculators is a good one if you're careful about what you're comparing: calculators did indeed make people worse at mental math, yet mental math can be replaced with calculators for most people at no great loss. Chatbots are indeed making people worse at mental... well, everything. Thinking in general. I do not believe that thinking can be replaced with AI for most people at no great loss.
I found it useful for learning to write prose. There's nothing quite like instantaneous feedback when learning. The downside was that I hit the limit of the LLM's capabilities really quickly. They're just not that good at writing prose: overly flowery and often nonsensical.
LLMs were great for getting started though. If you've never tried writing before, then learning a few patterns goes a long way. ("He verbed, verbing a noun.")