> The friction isn’t just about quality—it’s about what the ubiquity of these tools signals.
Unless they are being ironic, using an AI accent in a statement like that, in an article about the backlash to lazy AI use, is an interesting choice.
It could have been human written (I have noticed that people who use these tools all the time start to talk like them), but the "it's not just x — it's y" format is the hallmark of mediocre articles being written/edited by AI.
this. marketing speak appears much more frequently in online text, which is what AI is trained on, than it does in normal everyday human speech, which AI isn't able to capture and train on en masse yet.
It’s not universal - but it’s a compelling rhetorical device /s
It just sounds like slop now that it's everywhere. The pattern invites questions about the authenticity of the writer, and whether they've fallen victim to AI hallucinations and sycophancy. I can quickly become offended when someone asks me to read their ChatGPT output without disclosing it was GPT output.
Now when AI learns how to use parallelism I will be forced to learn a new style of writing to maintain credibility with the reader /s
I hate this. Writing skills used to be a way to show you're paying attention to detail and making an effort. Now everyone thinks I'm cheesing it out with AI.
I also have a tougher time judging the reliability of others, because you can get grammatically perfect, well-organized emails from people who are incompetent. AI has significantly lowered the signal-to-noise ratio for me.
Yeah, but the stuff people seem to obsess over is just bits of neat typography like dashes and rhetorical flourishes that should, or used to, signify good writing and that worked for a reason. The AI just overuses them; it's not that they're bad per se. I suppose it's a treadmill, like anything else that gets too popular. We have to find something new to do the same thing (if possible!). And that sucks.
People can't verbalize good and bad writing. Being able to see it and being able to diagnose it are two different things.
Fact is, AI writing is just bad. It checks all the elementary school writing boxes but fails in the sense that it is overly verbose and subtly but meaningfully incorrect. People see that, can't put the issue into words, and then look for other signs.
Yes, AI is bad in the way that someone who has learned some rules about writing produces bad texts. And when a human writes the same way, it is still bad.
You are correct. There's just a lot of societal pressure to know what good writing is, even amongst people who don't read outside of social media. They don't want to appear stupid, so they say dashes are "AI" because everybody does.
I naturally wrote "it's not just X, it's Y" long before November 2022 ChatGPT.
Probably because I picked up on it from many people.
It's a common rhetorical template of a parallel form where the "X" is re-stating the obvious surface-level thing and then adding the "Y" that's not as obvious.
E.g., examples of regular people using that rhetorical device on HN for 15+ years, outside the context of advertising gadgets:
So AI-slop writes like that because a lot of us humans wrote like that and it copies the style. Today is the first time I've learned that the "It's not X, it's Y" really irritates many readers. Personally, I've always found it helpful when it reveals a "Y" that's non-obvious.
2) In most of those, while they had the two statements, the statements were not in succession.
There are maybe 4 unique examples in the search over the past 15 years, which is why the explosion of the pattern we see today is so telling, and it is most likely due to LLMs.
I was responding in particular to the "you write like a late night kitchen gizmo ad?" question... which would be a speech pattern people hear. In the audio case, it doesn't matter what punctuation symbol separates the "it isn't/it's" pattern, because the comma or em dash would be inaudible.
>There are maybe 4 unique examples in the search over the past 15 years,
No, (1) the Algolia search engine HN uses is not exhaustive and always returns incomplete results, and (2) I couldn't construct a regex to capture all occurrences. It didn't capture the dozens of times I used it before 2022.
More pre-2022 examples that match the "it isn't/it's" pattern that the blog author is complaining about:
The same gp mentioned that it's also common in "ad copy". That's also true of the Navy's famous slogan "It's not just a job. It's an adventure." E.g., this 1981 TV commercial: https://www.youtube.com/watch?v=Tc9g2tagYms
That's a slogan people heard rather than read with an em dash. LLM engines picked up on a common phrasing used for decades.
I understand that there are multiple people in this conversation, but you are attempting to pick and choose points to discuss at the expense of your own internal consistency. If you were responding to "which would be a speech pattern people hear," why did you only quote written examples from the HN search and not provide video or audio clips?
>why did you only quote written examples from the HN search and not provide video or audio clips?
At the risk of stating the obvious, highlighting the HN _texts_ demonstrates in a very literal way the "write like" fragment in gp's question, "You write like a late night kitchen gizmo ad?". The other fragment was the "late night kitchen gizmo ad", which is the audio comparison. The gp was making that comparison between the writing style and the speech style when asking the question. (https://news.ycombinator.com/item?id=46165248)
Providing audio links would not show the "writes like". The gp (and you) already know what the "It isn't/It's" audio pattern sounds like. It's the written text the gp was wondering about.
The point is people really did write text like that (no em dashes required) before ChatGPT existed.
EDIT reply to: >He just said that it is traditionally associated with late-night ads, and that the explosion in use of the phrase (especially with the em-dash)
Actually, the gp (0_____0) I was responding to didn't mention the em dash in either of the 2 comments. Gp used a comma instead of an em dash. Gp only mentioned the comparison to ad copy. The em dash wasn't relevant in the subthread we're in. That's something extra you brought up that's not related to gp's specific question.
EDIT reply to: >Quick HN tip: It is usually better to reply to a post instead of editing the original post.
I agree but the "reply" option was not available. This is a "cool down" mechanism HN uses to discourage flame wars. I don't know if it's 30 minutes or what the value is before the reply link shows up. It was just easier to reply in my post rather than wait an indeterminate time.
>This statement is incorrect, as the original post mentioned, "'it's not just x — it's y' format is the hallmark
Yes, but that's not the ggp (ceroxylon) I was responding to. Instead, I was responding to gp (0_____0)'s question and the 2 times the writing was compared to ad copy, with no mention of em dashes. Sorry for not making that clear.
>Showing fewer than a dozen uses of the phrase
Again, there are thousands of examples but the Algolia search engine will not show all of them.
Quick HN tip: It is usually better to reply to a post instead of editing the original post.
>Actually, the GP (0_____0) I was responding to didn't mention the em dash in either of the two comments. GP used a comma instead of an em dash. GP only mentioned the comparison to ad copy. The em dash wasn't relevant in the subthread we're in. That's something you brought up.
This statement is incorrect, as the original post mentioned, "'it's not just x — it's y' format is the hallmark of mediocre articles being written/edited by AI." (note the quotes in the first post), and the next post said, "It's simply how literate people write."
All of this is beside the point, however, because your statement, "The point is people really did write text like that (no em dashes required) before ChatGPT existed," was never contended in this thread, and I do not think anyone has ever thought that ChatGPT created that phrase, so it just doesn't add to the discussion. Showing fewer than a dozen uses of the phrase (with or without the em dash) in a 15-year period just further proves that it was not a common written turn of phrase before ChatGPT.
>The point is people really did write text like that (no em dashes required) before ChatGPT existed.
OK, I think I can see your point, but at best it is irrelevant. At no point did the original poster imply that ChatGPT created the phrase, or that it wasn’t in spoken or written language before then. He just said that it is traditionally associated with late-night ads, and that the explosion in use of the phrase (especially with the em-dash) is most likely attributable to increased LLM use.
I'd give this the benefit of the doubt because the y section is more complex than I'd expect from AI. If it said "it's about the ubiquity of these tools", I'd agree it feels like AI slop, but "it's about what the ubiquity of these tools signals" has a deeper parse tree than I usually see in that negative parallelism structure.
The em-dash has been standard at jobs I've had over the past 20 years. Not necessarily a fan of the lack of separation on either side of the punctuation, but it's the normal style.
That we commonly used em-dashes as a mark to set off parenthetical information. Yes, you can also use parentheses and they're somewhat interchangeable.
> On the plus side, I guess we can thank AI for bringing back the humble em-dash.
It was always there, and used. It was just typically restricted to pretty formal, polished writing (I should know, I have coworkers who fuss over em and en spaces). I bet if you looked, you'd find regular use of em-dashes in Newsweek articles, going back decades.
The thing LLMs did was inject it into unsophisticated writing. It's really only an LLM tell if it's overused or used in an unexpected context (e.g., an 8th-grader's essay, an email message).
I tend to insert a space before and after on the very rare occasions I use one... However, I'm from the colonies, and I've just learnt my preference is likely due to British influence.
I mostly just use a double hyphen in casual/lazy writing like emails (or HN comments :-)) but use an em-dash in anything more formal. En-dashes just seem pedantic and I don't really use them in general.
I’m quickly becoming convinced that humans adapt to AI content and quickly find it boring. It’s the same effect as walking through the Renaissance section of an art museum or watching 10 action movies. You quickly become accustomed to the flavor of the content and move on. With human-generated content, the process and limitations can be interesting - but there is no such depth to AI content.
This is fine for topics that don’t need to be exciting, like back-office automation, data analysis, programming, etc., but it leads me to believe most content made for human consumption will still need to be human generated.
I’ve ceased using AI for writing assistance beyond spell checking, accuracy checks, and automated review. The core prose has to be human written to not sound like slop.
I've expected this same thing for a long time; it's the exact same phenomenon as TV/movie CGI special effects looking dated - many that were amazing when they were released just look bad now, because we've gotten used to them and can see when something is being faked with the old methods.
The population has been handed a shortcut machine and will give in to taking the path of least resistance in their tasks. It may be ironic but it's not surprising to see it used here.
That also stuck out for me. I was wondering if it was video games using OpenRouter for uptime/inference switching; video games would use a lot of tokens generating dialogue for a few programmers' villages.
As someone who appreciates machine learning, the main dissonance I have is that interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".
This appears everywhere, with every tool trying to autocomplete every sentence and action, creating a very clunky ecosystem where I am constantly pressing 'escape' and 'backspace' to undo some action that is trying to rewrite what I am doing to something I don't want or didn't intend.
It is wasting time and none of the things I want are optimized, their tools feel like they are helping people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.
I broadly agree. They package "copilot" in a way that constantly gets in your way.
The one time I thought it could be useful, in diagnosing why two Azure services seemingly couldn't talk to each other, it was completely useless.
I had more success describing the problem in vague terms to a different LLM than with an AI supposedly plugged into the Azure organisation that could directly query information.
My 2 cents: it's what happens when OKRs are executed without a vision, or when the vision is just "AI everywhere", and, well, that vision sucks.
The goal is AI everywhere, so this means, top-down, everyone will implement it and be rewarded for doing so; there are incentives for each team to do it: money, promotions, budget.
100 teams? 100 AI integrations or more. It's not 10 entry points, as it (maybe) should be.
This means that for a year or more there will be a lot of AI everywhere, impossible to avoid, and usability will sink.
Now, if this was only done by Microsoft, I would not mind. The issue is that this behavior is getting widespread.
You would think they would care about the fact that their brand is being torched but I guess they figure they're too big to need to care.
Their new philosophy is "the user is too stupid to even think for themselves LOL." It's not just their rhetoric; it's every single choice they've made screaming out their new priorities, of which user respect is both last and least.
I had that experience too. Working with Azure is already a nightmare, but the Copilot tool built into Azure is completely useless for troubleshooting. I just pasted log output into Claude and got actual answers. Microsoft’s first-party stuff just seems so half-assed and poorly thought out.
Why is this, I wonder? Aren't the models trained on about the same blob of huggingface web scrapes anyway? Does one tool do a better job of pre-parsing the web data, or pre-parsing the prompts, or enhancing the prompts? Or a better sequence of self-repair in an agent-like conversation? Or maybe more precision in the weights and a more expensive model?
their products are just good enough to allow them to put a checkbox in a feature table to allow it to be sold to someone who will then never have to use it
but not even a penny more will be spent than the absolute bare minimum to allow that
this explains Teams, Azure, and everything else they make you can think of
How do you QA adding a weird prediction tool to, say, Outlook? I have to use Outlook at one of my clients and have switched to writing all emails in VS Code and then pasting them into Outlook, as the “autocomplete” is unbearable… Not sure QA is possible with tools like these…
Part of QA used to be evaluating whether a change was actually helpful in doing the thing it was supposed to be doing.
... why, it's almost like, in eliminating the QA function, we removed the final check that kept developers (read: PMs) from implementing whatever ass-backwards feature occurs to them.
Just in time for 'AI all the things!' directives to come down from on high.
exactly!! though evaluating whether a change was actually helpful in doing the thing it was supposed to be doing is hard when no one knows what it is supposed to be doing :)
I had a WTF moment last week. I was writing SQL, and there was no autocomplete at all. Then a chunk of autocompleted code appeared that looked like an SQL injection attack, with some "drop table" mixed in. The code would not have worked, it was syntactically rubbish, but it still looked spooky; I should have taken a screenshot of it.
This is the most annoying thing, and it's happened to JetBrains' Rider too.
Some stuff that used to work well with smart autocomplete/IntelliSense got worse with AI-based autocomplete, and there isn't always an easy way to switch back to the old heuristic-based stuff.
You can disable it entirely and get dumb autocomplete, or get the "AI powered" rubbish, but they had a very successful heuristic/statistics-based approach that worked well without suggesting outright rubbish.
In .NET we've had IntelliSense for 25 years that would only suggest properties that could exist, and then a while ago I suddenly found that VS Code auto-completed properties that don't exist.
It's maddening! The least they could have done is put in a Roslyn pass to filter out the impossible.
Loosely related: voice control on Android with Gemini is complete rubbish compared to the old Assistant. I used to be able to have texts read out and dictate replies whilst driving. Now it's all nondeterministic, which adds cognitive load on me and is unsafe, in the same way touch screens in cars are worse than tactile controls.
I've been immensely frustrated by no longer being able to set reminders by voice. I got so used to saying "remind me in an hour to do x" and now that's just entirely not an option.
I'm a very forgetful person and easily distracted. This feature was incredibly valuable to me.
I got Gemini Pro (or whatever it's called) for free for a year on my new Pixel phone, but there's an option to keep Assistant, which I'm using.
Gotta love the enshittification: "new and better" being more CPU cycles being burned for a worse experience.
I just have a shortcut to the Gemini webpage on my home screen for when I want to use it. For some reason I can't place the shortcut directly (maybe it's my ancient launcher, which isn't even in the Play Store anymore), so I have to make a Tasker task that opens the webpage when run.
This is my biggest frustration. Why not check with the compiler and generate code that would actually compile? I've had this with Go and .NET in the JetBrains IDE.
Had to turn ML auto-completion off. It was getting in the way.
There is no setting to revert back to the very reliable and high-quality "AI" autocomplete that never recommended class methods that don't exist and reliably figured out the pattern I was writing 20 lines of, without randomly suggesting 100 lines of new code that only disrupts my view of the code I am trying to work on.
I even clicked the "Don't do multiline suggestions" checkbox, because the above was so absurdly anti-productive, but it was ignored.
The most WTF moment for me was that recent Visual Studio versions hooked up the “add missing import” quick fix suggestion to AI. The AI would spin for 5s, then delete the entire file and only leave the new import statement.
I’m sure someone on the VS team got a pat on the back for increasing AI usage but it’s infuriating that they broke a feature that worked perfectly for a decade+ without AI. Luckily there was a switch buried in settings to disable the AI integration.
You can still use the older ML-model (and non-LLM-based!) IntelliCode completion suggestions - it’s buried in the VS Installer as an optional feature, entirely separate from anything branded Copilot.
The last time I asked Gemini to assist me with some SQL, this is what I got (inside my Postgres query form):

    This task cannot be accomplished
    USING
    standard SQL queries against the provided database schema. Replication slots
    managed through PostgreSQL system views AND functions,
    NOT through user-defined tables. Therefore,
    I must return
Gemini weirdly messes things up even though it seems to have the right information, something I've started noticing more often recently. I'd ask it to generate a curl command to call some API, and it would describe (correctly) how to do it and then generate the code/command, but the command would have obvious things missing: the 'https://' prefix in some cases, sometimes the API path, sometimes the auth header/token, even though it mentioned all of those things correctly in the text summary it gave above the code.
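For example (illustrative; the endpoint and token are made up, not the actual API I was calling), the text summary would describe something like the first command, while the generated block looked more like the second:

    # what the accompanying explanation described:
    curl -H "Authorization: Bearer $TOKEN" https://api.example.com/v1/items
    # what the generated command actually looked like: scheme, path, and auth header dropped
    curl api.example.com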
I feel like this problem was far less prevalent a few months/weeks ago (before Gemini 3?).
Using it for research/learning purposes has been pretty amazing, though, while Claude Code is still best for coding, based on my experience.
This is a great post. Next time you see it, grab a screenshot, put it on GitHub Pages, and post it here on HN. It will generate lots of interesting discussion about rubbish suggestions from poor LLM models.
This seems like what should be a killer feature: Copilot having access to configuration and logs and being able to identify where a failure is coming from. This stuff is tedious manually, since I basically run through a checklist of where the failure could occur and there’s no great way to automate that; plus, sometimes there are subtle typo-type issues. Copilot can generate the checklist reasonably well but can’t execute on it, even from Copilot within Azure. Why not??
I have had great luck with ChatGPT trying to figure out a complex AWS issue with
“I am going to give you the problem I have. I want you to help me work backwards step by step and give me the AWS cli commands to help you troubleshoot. I will give you the output of the command”.
It’s a combination of advice that ChatGPT gives me and my own rubberducking.
that's what happens when everyone is under the guillotine and their lives depend on overselling this shit ASAP instead of playing/experimenting to figure things out
I've worked in tech and lived in SF for ~20 years and there's always been something I couldn't quite put my finger on.
Tech has always had a culture of aiming for "frictionless" experiences, but friction is necessary if we want to maneuver and get feedback from the environment. A car can't drive if there's no friction between the tires and the road, despite being helped when there's no friction between the chassis and the air.
Friction isn't fungible.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
"It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally."
In "Mind and World", McDowell criticizes this sort of thinking, too, saying:
> We need to conceive this expansive spontaneity as subject to control from outside our thinking, on pain of representing the operations of spontaneity as a frictionless spinning in a void.
And that's really what this is about, I think. Friction-free is the goal but friction-free "thought" isn't thought at all. It's frictionless spinning in a void.
I teach and see this all the time in EdTech. Imagine if students could just ask the robot XYZ and how much time it'd free up! That time could be spent on things like relationship-building with the teacher, new ways of motivating students, etc.
Except...those activities supply the "wants and struggles whose consummations" build the relationships! Maybe the robot could help the student, say, ask better questions of the teacher, or direct the student to peers who were similarly confused but figured it out.
But I think that strikes many tech-minded folks as "inefficient" and "friction-ful". If the robot knows the answer to my question, why slow me down by redirecting me to another person?
This is the same logic that says making dinner is a waste of time and we should all live off nutrient mush. The purpose of preparing dinner is to make something you can eat, and the purpose of eating is nutrient acquisition, right? Just beam those nutrients into my bloodstream and skip the rest.
Not sure how to put this all together into something pithy, but I see it all as symptoms of the same cultural impulse. One that's been around for decades and decades, I think.
People want the cookie, but they also want to be healthy. They want to never be bored, but they also want to have developed deep focus. They want instant answers, but they also want to feel competent and capable. Tech optimizes for revealed preference in the moment. Click-through rates, engagement metrics, conversion funnels: these measure immediate choices. But they don't measure regret, or what people wish they had become, or whether they feel their life is meaningful.
Nobody woke up in 2005 thinking "I wish I could outsource my spatial navigation to a device." They just wanted to not be lost. But now a generation has grown up without developing spatial awareness.
> Tech optimizes for revealed preference in the moment.
I appreciate the way you distinguish this from actual revealed preference, which I think is key to understanding why what tech is doing is so wrong (and, bluntly, evil) despite it being what "people want". I like the term "revealed impulse" for this distinction.
It's the difference between choosing not to buy a bag of chips at the store or a box of cookies, because you know it'll be a problem and your actual preference is not to eat those things, and having someone leave chips and cookies at your house without your asking, and giving in to the impulse to eat too many of them when you did not want them in the first place.
Example from social media: My "revealed preference" is that I sometimes look at and read comments from shit on my Instagram algo feed. My actual preference is that I have no algo feed, just posts on my "following" tab, or at least that I could default my view to that. But IG's gone out of their way (going so far as disabling deep link shortcuts to the following tab, which used to work) to make sure I don't get any version of my preference.
So I "revealed" that my preference is to look at those algo posts sometimes, but if you gave me the option to use the app to follow the few accounts I care about (local businesses, largely) but never see algo posts at all, ever, I'd hit that toggle and never turn it off. That's my actual preference, despite whatever was "revealed". That other preference isn't "revealed" because it's not even an option.
Just like with the chips and cookies, the costs of social media are delayed and diffuse. Eating/scrolling feels good now. The cost (diminished attention span, shallow relationships, health problems) shows up gradually over years.
Yes, I agree with this. I think more people than not would benefit from actively cultivating space in their lives to be bored. Even something as basic as putting your phone in the internal zip pocket of your bag, so when you're standing in line at the store/post office/whatever you can't be arsed to just reach for your phone, and instead are in your head or aware of your surroundings. Both can be such wonderful and interesting places, but we seem to forget that now.
Plants "want" nitrogen, but dump fertilizer onto soil and you get algal blooms, dead zones, plants growing leggy and weak.
A responsible farmer is a steward of the local ecology, and there's an "ecology of friction" here. The fertilizer company doesn't say "well, the plants absorbed it."
But tech companies do.
There's something puritanical about pointing to "revealed preference" as absolution, I think. When clicking is consent, any downstream damage is a failure of self-control on the user's part. The ecological cost/responsibility is externalized to the organisms being disrupted.
Like Schopenhauer said: "Man kann tun, was er will, aber er kann nicht wollen, was er will." One can do what one wants, but one cannot will what one wants.
I wouldn't go as far as old Arthur, but I do think we should demand a level of "ecological stewardship". Our will is conditioned by our environment and tech companies overtly try to shape that environment.
I think that's partially true. The point is to have the freedom to pursue higher-level goals. And one thing tech doesn't do - and education in general doesn't do either - is give experience of that kind of goal setting.
I'm completely happy to hand over menial side-quest programming goals to an AI. Things like stupid little automation scripts that require a lot of learning from poor docs.
But there's a much bigger issue with tech products - like Facebook, Spotify, and AirBnB - that promise lower friction and more freedom but actually destroy collective and cultural value.
AI is a massive danger to that. It's not just about forgetting how to think, but how to desire - to make original plans and have original ideas that aren't pre-scripted and unconsciously enforced by algorithmic control over motivation, belief systems, and general conformity.
Tech has been immensely destructive to that impulse. Which is why we're in a kind of creative rut where too much of the culture is nostalgic and backward-looking, and there isn't that sense of a fresh and unimagined but inspiring future to work towards.
I don't think I could agree with you more. I think that more in tech and business should think about and read about philosophy, the mind, social interactions, and society.
EdTech, for example, really seems to neglect the kind of bonds that people form when they go through difficult things together, and pushing through difficulties is how we improve. Asking a robot XYZ does not improve us. AI and LLMs do not know how to teach; they are not Socratic, pushing and prodding at our weaknesses and assessing us so that we improve. They just say how smart we are.
This is perhaps one of the most articulate takes on this I have ever read - thank you!
And - for myself, it was friction that kickstarted my interest in "tech". I bought a janky modem, and it had IRQ conflicts with my Windows 3 mouse at the time - so, without internet (or BBSes at that time), I had to troubleshoot and test different settings with the 2-page technical manual that came with it.
It was friction that made me learn how to program and read manuals/syntax/language/framework/API references to accomplish things for hobby projects - which then led to paying work. It was friction not having my "own" TV and access to all the visual media I could consume "on-demand" as a child, therefore I had to entertain myself by reading books.
Friction is an element of the environment like any other. There's an "ecology of friction" we should respect. Deciding friction is bad and should be eradicated is like deciding mosquitoes or spiders or wolves are bad and should be eradicated.
Sometimes friction is noise. Sometimes friction is signal. Sometimes the two can't be separated.
I learned much the same way you did. I also started a coding bootcamp, so I've thought a lot about what counts as "wasted" time.
I think of it like building a road through wilderness. The road gets you there faster, but careless construction disturbs the ecosystem. If you're building the road, you should at least understand its ecological impact.
Much of tech treats friction as an undifferentiated problem to be minimized or eliminated—rather than as part of a living system that plays an ecological role in how we learn and work.
Take Codecademy, which uses a virtual file system with HTML, CSS, and JavaScript files. Even after mastering the lessons, many learners try the same tasks on their own computers and ask, "Why do I need to put this CSS file in that directory? What does that have to do with my hard drive?"
If they'd learned directly on their own machines, they would have picked up the hard-drive concepts along the way. Instead, they learned a simplified version that, while seemingly more efficient for "learning to code," creates its own kind of waste.
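For instance, the sandbox never forces the connection between a stylesheet reference and a real folder on disk (a made-up minimal layout):

    project/
      index.html     <- references the stylesheet via href="css/style.css"
      css/
        style.css

On a real machine, that relative path only resolves because the css/ directory actually sits next to index.html on the hard drive.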
But is that to say the student "should" spend a week struggling? Could they spend a day, say, and still learn what the friction was there to teach? Yes, usually.
I tell everyone to introduce friction into their lives...especially if they have kids. Friction is good! Friction is part of the je ne sais quoi that makes humans create.
In my experience, part of the 'frictionless' experience is also to provide minimal information about any issues and no way to troubleshoot. Everything works until it doesn't, and when it doesn't, you are at the mercy of the customer support queue and of getting an agent with the ability to fix your problem.
> but friction is necessary if we want to maneuver and get feedback from the environment
You are positing that we are active learners whose goal is clarity of cognition, and that friction and cognitive struggle are part of that. Clarity is attempting to understand the "know-how" of things.
Tech, and dare I say the natural laziness inherent in us, instead wants us to be zombies being fed the "know-that", as that is deemed sufficient: i.e., the dystopia portrayed in The Matrix, or the rote student regurgitating memes. But know-that is not the same as know-how, and know-how is evolving, requiring a continuously learning agent.
Looking at it from a slightly different angle, one I find most illuminating: removing "friction" is like removing "difficulty" from a game, and "friction free" as an ideal is like "cheat codes from the start" as an ideal. It's making a game where there's a single button that says "press here to win." The goal isn't to remove "friction"; it's to remove a specific type of valueless friction and replace it with valuable friction.
I don't know. You can be banging your head against the wall to demolish it or you can use manual/mechanical equipment to do so. If the wall is down, it is down. Either way you did it.
Thank you for expressing this. It might not be pithy, but it's something I've been thinking about a lot for a long time, and this is a well-articulated way of expressing it.
> ...Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you"
I feel like that describes nearly all of the "productivity" tools I see in AI ads. Sadly enough, it also aligns with how most people use it, in my personal experience. Just a total offloading of the need to think.
Sheesh, I notice I also just ask an assistant quite a bit rather than putting in the effort to think about things. Imagine people who drive everywhere with GPS (even for routine drives) and are lost without it, and imagine that for everything needing a little thought...
As an old school interface/interaction designer, I see this as a direct consequence of how the discipline of software design has evolved in the last decade or two.
We’ve gone from conceiving of software as tools - constructs that enhance and amplify their users’ skills and capabilities - to magic boxes that should aim to do everything with just one button (and maybe even that is one action too many).
This shift in thinking is visible in how junior designers and product managers are trained and incentivized to think about their work. “Anticipating the user’s intent”, “providing a magical experience”, “making the simplest, most beautiful and intuitive product” - all things that are so routine parlance now that they sound trite, but that would make any software designer from the 80s/90s catatonic because of how orthogonal they are to good tool design.
To caricature a bit, the industry went from being run by people designing heavy machinery to people designing Disneyland rides. Disneyland rides are great and have their place, but you probably don’t want your tractor to be designed like one.
>As someone who appreciates machine learning, the main dissonance I have is that interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".
This is the nightmare scenario with AI, i.e., people settling for Microsoft/OpenAI et al. to do the "thinking" for them.
It is alluring, but of course it is not going to work. It is similar to what happened to the internet via social media, i.e., "kick back and relax, we'll give you what you really want; you don't have to take any initiative".
My pitch against this is to vehemently resist the chatbot-style solutions/interfaces and demand intelligent workspaces:
A world full of humans being guided by computers would be... dystopian.
Although I imagine a version where AI drives humans who mindlessly trust it to eat more vegetarian food or take public transport, helping save the environment (an ironic wish, since AI is burning the planet). Of course, "AI" is guided by its owners, so there'd be a camp that uses Grok and will still drive SUVs, eat meat, and be racist idiots...
Perhaps this is a feature and not a bug for MS. Every time you hit escape or accept, you're giving them more training samples. The more training data they can get you to give them, the better. So they WANT to be throwing out possibly irrelevant suggestions at every opportunity.
As much as I love JetBrains (IntelliJ and friends), I have the same feeling this year. The rate at which I undo an accidental tab-complete or whatever far exceeds the rate at which I accept one. I'm not anti-LLM -- they are great for many things, but I am tired of undoing shitty suggestions. Literally, many of them produce a syntax error. Please don't read this post as dumping on JetBrains. I still love their products.
> It is wasting time and none of the things I want are optimized, their tools feel like they are helping people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.
No trolling: This is genius-level sarcasm. You do realise that most "business" emails are essentially this, right? Oh, right, you knew that already!
I agree. I am happiest just using plain Emacs for coding, every once in a while separately using an LLM, or once or twice a day using gemini-cli or codex for a single task.
My comment is about coding, but I have the same opinion for writing emails: once in a blue moon, I will use an LLM manually.
You raise a good point. For specific programming tasks, I don't really want token-by-token suggestions in an IDE. And, like you, when I have a specific problem, e.g., "I need to do Kerberos auth like this in that language." -- I go to ask an LLM, and it is generally very useful. Then I look at the produced code and say: "Oh, that's how you do it." I almost never copy/paste the results from the LLM into my code base.
The disappointing thing is I’d rather they spend the time improving security, but it sounds like all cycles are shoved into making AI shovels. Last year, the CEO promised security would come first, but that’s not the case.
Kagi reminds me of the original search engines of yore, when I could type what I want and it would appear, and I could go on with my work/life.
As for the people who claim this will create/introduce slop, Kagi is one of the few platforms actively fighting against low-quality AI-generated content, with their community-fueled "SlopStop" campaign.[0]
Not sponsored, just a fan. Looking forward to trying this out.
Google has been stomping around like Godzilla this week, and this is the first time I decided to link my card to their AI Studio.
I had seen people saying that they gave up and went to another platform because it was "impossible to pay". I thought this was strange, but after trying to get a working API key for the past half hour, I see what they mean.
Everything is set up, and I see a message that says "You're using Paid API key [NanoBanano] as part of [NanoBanano]. All requests sent in this session will be charged." I go to prompt, and I get a "permission denied" error.
There is no point in having impressive models if you make it a chore for me to -give you my money-
First off, apologies for the bad first impression, the team is pushing super hard to make sure it is easy to access these models.
- On the permission issue, not sure I follow the flow that got you there; pls email me more details if you are able to, and I'm happy to debug: Lkilpatrick@google.com
- On overall friction for billing: we are working on a new billing experience built right into AI Studio that will make it super easy to add a CC and go build. This will also come along with things like hard billing caps and such. The expected ETA for global rollout is January!
> I get the feeling GCP is not good for individuals like I.
Google isn't good for individuals at all. Unless you've got a few million followers or get lucky on HN, support is literally non-existent. Anyone that builds a business on Google is nuts.
I'd like to state that AWS, in contrast, has been great to me as an individual. The two times that I needed to speak to a human, I had one on the phone resolving my issue. And both issues were due to me making mistakes - on my small personal account.
Yes, it’s extremely complicated. I gave up on Firebase for one project because I could not figure out how to get the right permissions set up, and my support request resulted in someone copying and pasting a snippet from the instructions that I obviously had not understood in the first place.
It’s also extremely cumbersome to sign up for Google AI. The other day I tried to get DeepSeek working via Google’s hosted offering and gave up after about an hour. The form just would not complete without error, and there was no useful message to work with.
It would seem that in today’s modern world of AI assistance, Google could set up one that would help users do the simplest things. Why not just let the agent direct the user to the correct forms and let the user press submit?
Oh man, I've been playing with GCP's Vertex AI endpoints, and this is so representative of my experience. It's actually bananas how difficult it is, even compared to other GCP endpoints.
I was interested. It does look like he just needs to update that. His personal blog says Google, and ex-OpenAI. But I do feel like I have my tinfoil hat on every time I come to HN now.
It's not a new problem, though, and it's not just billing. The UI across Gemini just generally sucks (across AI Studio and the chat interfaces), and there are lots of annoying failure cases where Gemini will just time out and stop working entirely mid-request.
Been like this for quite a while, well before Gemini 3.
So far I continue to put up with it because I find the model to be the best commercial option for my usage, but it's amazing how bad modern Google is at just basic web app UX and infrastructure, when they were the gold standard for such for, arguably, decades prior.
We are talking here about the most basic things - nothing AI-related. Basic billing. The fact that it is not working says a lot about the future of the product and the company culture in general (obviously they are not product-oriented).
Given how many paid offerings Google has, and the complexity and nuance of some of those offerings (e.g., AdSense), I am pretty surprised that Google doesn't have a functioning drop-in solution for billing across the company.
If they do, it's failing here. The idea of a penny pinching megacorp like Google failing technically even in the penny pinching arena is a surprise to me.
Even though my post complaining about Google's incoherent billing mess got so many upvotes, I'll be the first to say that there is nothing basic about "give me money".
Apart from what happens to the money when it gets to Google (putting it in the right accounts, in the right business, categorizing it, etc.), it changes depending on who you're ASKING for money.
1. Getting money from an individual is easy. Here's a credit card page.
2. Getting money from a small business is slightly more complicated. You may already have an existing subscription (google workspaces), just attach to it.
3. As your customers get bigger, it gets more squishy. Then you have enterprise agreements, where it becomes a whole big mess. There are special prices, volume discounts, all that stuff. And then invoice billing.
The point is that yes, we all agree that getting someone to plop down a credit card is easy. Which is why Anthropic and OpenAI (who didn't have 20 years of enterprise billing bloat) were able to start with the simplest use case and work their way slowly up.
But I AM sensitive to how hard this is for companies as large and varied as Google or MS. Remember the famous Bill Gates email where even he couldn't figure out how to download something from Microsoft's website.
It's just that they are also LARGE companies; they have the resources to solve these problems, they just don't seem to have the strong leadership to bop everyone on the head until they make the billing simple.
And my guess is also that consumers are such a small part of how they're making money (you best believe that these models are probably beautifully integrated into the cloud accounts so you can start paying them from day one).
My first thought was this is the whole thing about managers at Google trying to get employees under other managers fired and their own reports promoted -- but it feels too similar to how fucked up all the account and billing stuff is at Microsoft. This is what happens when you try to "fix" something by layering on more complexity and exceptions.
From past experience, the advertising side of the business was very clear with accounts and billing. GCP was a whole other story. The entire thing was poorly designed, very confusing, a total mess. You really needed some justification to be using it over almost everything else (like some Google service which had to go through GCP.) It's kind of like an anti-sales team where you buy one thing because you have to and know you never want to touch anything from the brand ever again.
We made the bet 2 years ago to build AI Studio on top of the Google Cloud infra. One of the real challenges is that Google is extremely global, we support devs in hundreds of countries with dozens of different billing methods and the like. I wish the problem space was simple but on the first day I joined Google we kicked off the efforts to make sure we could bring billing into AI Studio, so January cannot come soon enough : )
No one should even notice the payment flow. This isn't Stripe where the polish on the payment experience is a selling point for the service. At Google, paying for something should be a boring but quick process that works and then gets out of the way.
It doesn't need to be good. It just needs to not be broken.
That’s a pretty uncharitable take. Given the scale of their recent launches and the amount of compute needed to make them work, it seems incredibly smooth. Edge cases always arise, and all the company/teams can really do is be responsive - which is exactly what I see happening.
A company with a literal embedded payment processor, including subscription services for half of all mobile users, that can't manage to take payments for its own public-facing services seems like a huge fucking failure to me.
Especially for software developer and tech influencer focused markets.
Considering the product itself seems to be excessively limited without actually getting paid for, and the paid tier itself has so many onboarding issues on a critical usage path, it's pretty bad.
This is in a $3.6 trillion company, for a product they're spending billions a quarter to develop, with specialized employees making mid-6-figure to 7-figure salaries and bonuses... you'd think somebody has the right connections into the departments that typically handle the payment systems.
My expectations for organizations that have all the funding they need to create something "insanely great" in terms of user experience shoot up dramatically, and here they fall far short... I don't know who the head of this group/project/department/product is, but someone failed at their job and got paid excessively for this poor execution.
When we first started using Gemini for a new product a few months ago, you banned our entire GCP account from using Gemini at all, in the middle of a demo to our board. Doesn't seem like things have improved all that much on the onboarding front.
The new releases this week baited me into a Business Ultra subscription. Sadly it’s totally useless for Gemini 3 in the CLI, and now Nano Banana does not work either. Just wow.
I bought a Pro subscription (or the lowest tier paid plan, whatever it's called), and the fact that I had to fill out a Google Form in order to request access to get Gemini 3 CLI is an absolute joke. I'm not even a developer, I'm a UX guy who just likes playing around with seeing how models deal with importing Figma screens and turn them into a working website. Their customer experience is shockingly awful, worse than OpenAI and Anthropic.
Oh man, there is so, so much pain here. Random example: if GOOGLE_GENAI_USE_VERTEXAI=true is in your environment, woe betide you if you're trying to use gemini cli with an API key. Error messages don't match up with actual problems; you'll be told to log in using the CLI auth for Google, then you'll be told your API keys have no access. It's just a huge mess. I still don't really know if I'm using a Vertex API key or a non-Vertex one, and I don't want to touch anything since I somehow got things running.
Anyway, vaya con Dios; I know that there's a fundamental level of complexity deploying at Google, and deploying globally, but it's just really hard compared to some competitors. Sadly, because the Gemini series is excellent!
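For anyone stuck in the same place, this is roughly the dance that got things working for me (a sketch, assuming a POSIX shell and that you want the plain API-key path rather than Vertex; the key comes from AI Studio):

    # Check whether the CLI/SDK is being steered toward Vertex AI:
    echo "$GOOGLE_GENAI_USE_VERTEXAI"
    # If that prints "true", clear it so the plain API-key path is used:
    unset GOOGLE_GENAI_USE_VERTEXAI
    # Supply the AI Studio key and run the CLI:
    export GEMINI_API_KEY="your-key-here"
    gemini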
Thank you for your service. Also, one of the main issues from the outside seems to me to be the impedance mismatch between google's entire cloud business and the direct-to-dev type API access oAI and Anthropic have pioneered. So, there's always going to be some pain here. It would be realllyy nice if that pain were incurred internally at the mothership, both for Dev UX and for sales. But so far, it looks to me like it's being spread out internally and externally.
Lol. Since the GirlsGoneWild people pioneered the concept of automatically recurring subscriptions, unexpected charges and difficult-to-cancel billing have been the game. The best customer is always the one that pays but never uses the service... and ideally has forgotten or lost access to the email address they used when signing up.
Hi, is your team planning on adding a spending cap? Last I tried, there was no reasonable way to do this. It keeps me away from your platform, because runaway inference is a real risk for any app that calls LLMs programmatically.
The fact that your team is worrying about billing is...worrying. You guys should just be focused on the product (which I love, thanks!)
Google has serious fragmentation problems, and really it seems like someone else with high rank should be enforcing (and have a team dedicated to) a centralized frictionless billing system for customers to use.
We use Google Cloud's billing service, but given the super global nature of our customer base, there is a lot of complexity in moving this into AI Studio. Though we are making great progress!
For three days I have been trying to get a login to Antigravity; first there was trouble with an API, and now all I get is 'Your current account is not eligible for Antigravity. Try signing in with another personal Google account', even though it is verified and in a supported region...
It's nice that you know about the issue and are working on it. I really appreciate all the new "Get API key" buttons across Google AI products; that already makes it much easier than setting up a cloud project and getting credentials JSON files.
But I do think it's a general problem with Google products that the solution is always to build a new one. There are already something like 8 ways to use and pay for Google AI, and that adds to the complexity of getting set up, so adding a new, simpler, better option might make it all worse instead of better.
Maybe if the sign-up process encouraged people to send videos of their sign-up and usage experience (screen-side and user-side could both be useful), the teams responsible for user experience could make some real progress.
I guess the question is, who cares, or who is responsible in the organization?
The permission thing happens to me too, but very intermittently; usually a couple of hard refreshes of the tab clears it up, and sometimes I need to delete the conversation I'd just tried to start and start a new one. I can't remember the exact message, something like "you don't have permission" or "permission denied". If I had to guess, it happens in 1 in 5 sessions I load. The API key stuff would be a lot easier if it landed you on the correct page in the GCP portal when it directs you out of AI Studio; I think that is the most confusing part of the experience. You end up on what seems like a random GCP billing page with no clear indication as to what it has to do with API keys.
Pls email me if you can on the latter, we can update whatever pointers we have to the cloud console and make them more contextual. We are also pushing on the north star of the P99 user experience not needing to leave AI Studio. We have landed a lot of stuff to make this possible already!
Just make it a VS Code plugin; I don't want to install a new IDE (which is just VS Code anyway) to use your product. It might be better than Claude and ChatGPT 5.1, but not enough better to justify me redoing all my IDE configs.
Any chance that this will be reflected in our company account as well, instead of just AI Studio?
We want to switch to Gemini from Claude (for agentic coding, chat UI, and any other employee-triggered scenarios) but the pricing model is a complete barrier: How do we pay for a monthly subscription with a capped price?
You launched Antigravity, which looks like an amazing product that could replace Claude Code, but how do I know I will be able to pay for it the same way I pay for Claude, which is a simple pay-per-month subscription?
I had the same reaction as them many months ago; the Google Cloud and Vertex AI namespacing is too messy. The different paths people might take to learning and trying to use the good new models need proper mapping out and fixing, so that the UX makes sense and actually works as they expect.
Hopefully the mobile version of AI Studio gets some improvement. There are some pretty awful UI bugs that make it really difficult to use in a mobile first manner.
Though I still managed to vibe code an app using Nano Banana. Now I just need to sort out API billing with it so I can actually use my app.
Dude. Let me give you my money. This isn’t rocket science. I don’t want anything to do with Google Cloud or Google Workspace or w/e it’s called now. Let me just subscribe to Gemini or Nano straight up.
Can we get free Nano Banana in AI Studio, at least in super low resolution? For app building and testing purposes it would be fine, and cheap enough for you to make possible.
Google APIs in general are hilariously hard to adopt. With any other service on the planet, you go to a platform page, grab an api key and you’re good to go.
Want to use Google’s Gmail, Maps, Calendar, or Gemini API? Create a cloud account, create an app, enable the Gmail service, create an OAuth app, download a JSON file. C’mon now…
Don't forget the tradition of having to migrate to a new API after a while because this one gets deprecated for "reasons". Not just a newer version, but a completely non-backwards-compatible new API that also requires its own setup.
To be fair, that might have changed in recent years. But after having to deal with that a few times for a few hobby projects I simply stopped trying. Can't imagine how it is for companies making use of these APIs. I guess it provides work for teams on otherwise stable applications...
Yeah, I'm not a dev and don't use AI at all, but I needed to create OAuth keys and enable some APIs for a project... Sometimes it works, sometimes it doesn't, and it's so complicated... but I got it working in the end, though it stopped working after some time. It was like, Google, really?
I know this isn't available across all APIs, but the point of AI Studio is that you can sign up and we just make an API key for you automagically, with no extra button clicks or the like.
If it's just the API you're interested in, Fal.ai has put Nano-Banana-Pro up for both generation and editing. A great deal less annoying to sign up for, since they're a pretty generalized provider of lots of AI-related models.
In general a better option. In the early days of AI video, I tried to generate a video of a golden retriever using Google's AI Studio. It generated 4 in the highest quality and charged me 36 bucks. Not a crazy amount, but definitely an unwelcome surprise.
Fal.ai is pay as you go and has the cost right upfront.
There's the solution right there. Google is still growing its AI "sea legs". They've turned the ship around on a dime and things are still a little janky. Truly a "startup mode" pivot.
While we're on this subject of "Google has been stomping around like Godzilla", this is a nice place to state that I think the tide of AI is turning and the new battle lines are starting to appear. Google looks like it's going to lay waste to OpenAI and Anthropic and claim most of the market for itself. These companies do not have the cash flow and will have to train and build their asses off to keep up with where Google already is.
gpt-image-1 is 1/1000th of what Nano Banana Pro is, and it takes 80 seconds to generate outputs.
Two years ago Google looked weak. Now I really want to move a lot of my investments over to Google stock.
How are we feeling about Google putting everyone out of work and owning the future? It's starting to feel that way to me.
(FWIW, I really don't like how much power this one company has and how much of a monopoly it already was and is becoming.)
This is also my take on the market, although it looked to me like they were going to win two years ago as well.
> How are we feeling about Google putting everyone out of work and owning the future? It's starting to feel that way to me.
Not great, but if one company or nation is going to come out on top in AI then every other realistic alternative at the moment is worse than Google.
OpenAI, Microsoft, Facebook/Meta, and X all have worse track records on ethics. Similarly for Russia, China, or the OPEC nations. Several of the European democracies would be reasonable stewards, but realistically they didn't have the capital to become dominant in AI by 2025 even if they had started immediately.
Valid questions, but I'd say it's hard to know what the future holds when we get models that push the state of the art every few months. Claude Sonnet 3.7 was released in February of this year. At the rate of change we're going, I wouldn't be surprised if we end up with Sonnet 5 by March 2026.
As others have noted, Google's got a ways to go in making it easier to actually use their models, and though their recent releases have been impressive, it's not clear to me that the AI product category will remain free from the bad, old fiefdom culture that has doomed so many of their products over the last decade.
We can't help but overreact to every new adjustment on the leaderboards. I don't think we're quite used to products in other industries gaining and losing their advantage so quickly.
Unfortunately, this is a fairly difficult task. In my experience, even SOTA models like Nano Banana usually make little to no meaningful improvement to the image when given this kind of request.
You might be better off using a dedicated upscaler instead, since many of them naturally produce sharper images when adding details back in - especially some of the GAN-based ones.
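If you want to kick the tires locally first, OpenCV's contrib build ships a small super-resolution module. It's CNN-based rather than one of the GAN-based upscalers mentioned above, but the workflow is the same. A minimal sketch, assuming opencv-contrib-python and a pretrained EDSR model file downloaded separately (paths are placeholders):

```python
import cv2

# Assumes opencv-contrib-python; EDSR_x4.pb is a pretrained model you
# download separately, so the path here is a placeholder.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")
sr.setModel("edsr", 4)  # algorithm name and scale must match the model file

img = cv2.imread("input.png")
upscaled = sr.upsample(img)   # 4x in each dimension
cv2.imwrite("output_4x.png", upscaled)
```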
If you're looking for a more hands-off approach, it looks like Fal.ai provides access to the Topaz upscalers.
They also offer (multiple; confusing product lineup!) interactive apps for upscaling video on their own website - Topaz Video and Astra. And maybe more, who knows.
I have access to the interactive apps, and there are a lot of knobs that aren't exposed in the Fal API.
edit: lol I found a third offering on the Topaz site for this, "Video upscale" within the Express app. I have no idea which is the best, despite apparently having a subscription to all of them.
I'm dimestore cheap; I'd be exploding to frames, sharpening, and reassembling with an ffmpeg > IrfanView process, lol. It would be awfully expensive to do it with an AI model. Would a photo/video editing suite do it? Google Photos with a pro script, or Adobe Premiere Elements, or could you do it yourself in DaVinci Resolve? Or are you talking hundreds of hours of video?
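For what it's worth, the cheap version of that pipeline doesn't even need the frame explosion: ffmpeg's unsharp filter sharpens in a single pass. A minimal sketch, wrapped in Python for convenience; the filenames and filter strength are placeholders to tune by eye:

```python
import subprocess

# One-pass sharpen with ffmpeg's unsharp filter; the 5x5 luma kernel
# and 1.0 amount are placeholder values worth adjusting per source.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", "unsharp=5:5:1.0",
    "-c:a", "copy",  # pass the audio through untouched
    "output.mp4",
], check=True)
```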
FYI that is an extremely challenging thing to do right. Especially if you care about accuracy and evidentiary detail. Not sure this is something that the current crop of AI tools are really tuned to do properly.
This is a good point. Some of the tools have a "creative mode" or "creativity" knob that hopefully drives this point home. But the simpler ones don't, and even with that setting dialed back it still has the same fundamental limitations/risks.
100% this. I'm on the pro/max plans for both Claude and OpenAI. I'd love to experiment with Gemini, but paying is next to impossible. Why do I need the risk of a full-blown GCP project just to test Gemini? No thx.
So much this. The entire experience around using Google's AI APIs is a complete shit-show. I was (stubborn|obstinate|stupid|whatever) enough to keep dicking around until I actually got some stuff working (a few weeks ago), but I still feel dirty from the whole process. And I still don't know what I'm using (Gemini? AI Studio? Vertex? GCP? Other??) or how all of this crap relates.
And FSM forbid I have another time when my debit card number gets compromised and I have to try changing it with Google. That was even MORE painful than just trying to get things working in the first place. WTF am I editing, my GCP account or my Google account? Are those two different things? Yes? No? Sort of? But they're connected, somehow... right? I mean, I disable my card in one place, but find that billing is still trying to go to it anyway. And then I find another place on another Google page that mentions that card, but when I try to disable it I get some opaque error about "can't disable card because card is already in use. Disable card first" or whatever.
I can't even... I mean, shit. It's hard to imagine creating an experience that is that bad even if you were trying to do so.
Let me just say, I won't be recommending Google's AI APIs, or GCP, or Vertex, or any of this stuff to anybody anytime soon. I don't care how good their models are.
At least chatting with Gemini at gemini.google.com works. So far that's about the only thing AI related from Google I've seen that doesn't seem like a complete cluster-f%@k.
Ha, I have been steeling myself for a long chat with Claude about “how the F to get AI Studio up and working.” With paying being one of the hardest parts.
Without a doubt one essential ingredient will be, “you need a Google Project to do that.” Oh, and it will also definitely require me to Manage My Google Account.
For new users in AI Studio, we make a cloud project and key for you automatically. Hear you on the billing setup, we are working on it, landing in January!
It wasn't there when I first went to Gemini after the announcement, but upon revisiting it gave me the prompt to try Nano Banana Pro. It failed at my niche (rare palm trees).
Incredible technology, don't get me wrong, but still shocked at the cumbersome payment interface and annoyed that enabling Drive is the only way to save.
> at the cumbersome payment interface and annoyed that enabling Drive is the only way to save.
For the general audience, Gemini is the intended product; the API and AI Studio are for advanced users. Gemini is very easy to pay for. In Gemini, you can save any image as a regular browser download by clicking the top right of the image where it says "Download full size".
I hate that they kinda try to hide the model version. Like if you click the dropdown in the chat box, you can see that "Thinking" means 3 Pro. When you select the "Create images" tool, it doesn't tell you it's using Nano Banana Pro until it actually starts generating the image.
Just tell me which model it's using. It's as if Google is trying to unburden me of the knowledge of which model does what, but it's just making things more confusing.
Oh, and setting up AI Studio is a mess. First I have to create a project. Then an API key. Then I have to link the API key to the project. Then I have to link the project to the chat session... Come on, Google.
How long till AI Studio is in the graveyard, I wonder? For real, Google has some of the most amazing tech, but jfc do they suck at making a product.
The only way I use Google is via an API key, and the billing for that is arcane, to be charitable. How can a company with billions not crack the problem of quickly accepting cash from customers? Surely their ads platform does this?
I have had an issue using Claude for research; it will often cite certain sources, and when I ask why the data it is using is not in the source it will apologize, do some more processing, and then realize that the claim is in a different source (or doesn't exist at all).
Still useful, but hopefully this gets ironed out in the future so I don't have to spend so much time vetting every claim and its associated source.
The top 10% is already propping up half of consumer spending[0]. People will have money to throw around, but the number of people doing so is shrinking until we figure out a way to reduce the income disparity and reverse that trend.
The author claims that they tried to avoid that: "[. . .] we had to choose them carefully and experiment to ensure that these documents were not already in the LLM training data (full disclosure: we can’t know for sure, but we took every reasonable precaution)."