An example of why a basic understanding is helpful:
A common sentiment on HN is that LLMs generate too many comments in code.
But comment spam is going to help code quality, due to the way causal transformers and positional encoding work. The model has learned to dump locally specific reasoning tokens where they're needed, in a tightly scoped cluster that can be attended to easily and forgotten just as easily later on. It's like a disposable scratchpad that reduces errors in the code it's about to write.
The solution to comment spam is textual/AST post-processing of the generated code, rather than prompting the LLM to handicap itself by generating fewer comments.
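A minimal sketch of that post-processing step, in Python with only the stdlib (for other languages you'd run an equivalent pass over that language's token stream or AST):

    import io
    import tokenize

    def strip_comments(source: str) -> str:
        # Drop '#' comment tokens; code, strings, and docstrings survive.
        # May leave stray whitespace where a comment sat; fine for a sketch.
        tokens = tokenize.generate_tokens(io.StringIO(source).readline)
        kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
        return tokenize.untokenize(kept)

    generated = "x = 1  # model's scratchpad note\nprint(x)\n"
    print(strip_comments(generated))  # the comment is gone, the code isn't

So the model gets to keep its scratchpad at generation time, and the human never has to read it.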
I'm surprised there isn't a politician who makes this their brand. I would vote for them even if they didn't want to do anything else.
Politicians only talk about regulating content, instead of regulating the algorithm. That's an error on every dimension: political, pragmatic, legal.
I would do these 2 things:
(1) ban all recommendation engines in social media, no boosting by likes, no retweets, no "for you", no "suggested". you get a chronological feed of people you follow, or you search for it directly.
(2) ban all likes/upvotes showing up on public posts, to reduce the incentive for people to engage in combat on politically charged topics
No impact on free speech, everyone still has a voice. No political favoritism. No privacy violations.
I would bet that these two tweaks alone would significantly reduce extremism and unhappiness in society.
This is something I've personally explored and lightly researched. I think the general population prefers recommendation algorithms (they espouse how great _their_ For You page is on TikTok, or how Spotify suggests the best music).
You would also be fighting ad and social media companies with extremely deep pockets. You have to keep in mind that a ban on algorithmic sorting would also impact search engines like Google and a ton of shopping websites.
I personally think the way this has to be done is something more fundamental and grassroots-like. Similar to how a significant chunk of the internet is against "AI content", I think that same group of people needs to be shown that this algorithmic-recommendation brainrot is impacting society considerably.
edit: To take this point further: as an American, I have been wondering why people disagree on basic principles, or on what feel like facts. The problem is that their online experience is completely different from mine. No two people share the exact same home page for any service. How are you supposed to get on the same page with someone when they live in what is practically a different world from you?
> I think the general population generally prefers recommendation algorithms
Not really. It's a dopamine addiction, like a gambling addict 'preferring' that a casino is nearby. But they know it makes them miserable. That's why people would pay money to quit.
I definitely don't disagree there! I think I'm on the same page as you as far as goals go. I'm just unfortunately a bit more jaded and pessimistic about the unending reach of these platforms.
The timidity and lack of vision from politicians everywhere is a disgrace. All it would take is one successful case study in one country, and most other countries would follow.
That's not really what the survey said. In fact, it found that the overwhelming majority of users would pay good money to continue using those platforms.
> The answers suggest users value these platforms a lot, on average by US$59 per month for TikTok and $47 for Instagram. An overwhelming 93 per cent of TikTok users and 86 per cent of Instagram users would be prepared to pay something to stay on them.
$59/month was the average claim for how much they'd pay to stay on TikTok.
They even cite other studies that came up with similar numbers, so it's not a fluke.
The part about paying to be off of them was about a hypothetical scenario where everyone on their campus agreed to some deal where they all stopped using one of the platforms together at the same time.
That's how they arrived at those weird numbers for paying to quit as a group. As with all studies that ask hypothetical questions about how much people would pay for some outcome, the real-world value is always less. When you start introducing impossible constraints like "everyone else would quit too", it becomes even more disconnected from reality.
Given the makeup of the courts in the US, I can't help but imagine these hypothetical laws would be thrown out on First Amendment grounds. Viz. "our algorithm is our free speech".
I think 1 and 2 will destroy social media as a frenetic place where everyone is competing for attention. It will become boring without all the battles, pile-ons, gore and porn being shoved in your face. People will sometimes check in to see what Obama said. That's about it. At least that's my hope.
> (1) ban all recommendation engines in social media, no boosting by likes, no retweets, no "for you", no "suggested". you get a chronological feed of people you follow, or you search for it directly.
I always find these comments interesting on Hacker News. The Hacker News front page is a socially sourced recommendation engine which presents stories in an algorithmic feed, boosted by likes (upvotes) from other users. The comment section where we're talking is also social at its core, with comments boosted or driven down by upvotes and downvotes.
In your proposed regulation, are you really expecting that the Hacker News front page would go away, replaced only by the "new" feed? Or that we'd have to manually sign up to follow different posters?
If we have to sign up to follow specific posters, how do you propose we discover them to begin with?
Usually when I ask these questions, the follow-ups involve some definition of social media that excludes Hacker News and other forums that people enjoy.
the hn front page is the same for all users --- on ig, i'm happy to see my friends' posts, but i really don't need the slurry of palantir-chosen brainrot/racist reels interspersed in there, lol (and that applies to most social media).
Yes, it's an algorithmic feed that treats all active users as your friends. Stories are still boosted by votes and ranked by an opaque algorithm. It would fall under the ban described above.
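For what it's worth, there is a widely cited approximation of that ranking from old HN threads (the real algorithm adds penalties and moderation tweaks, so treat this as a sketch):

    def hn_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
        # Upvotes push a story up; age drags it down polynomially.
        return (points - 1) / (age_hours + 2) ** gravity

Either way, the point stands: votes plus a formula, exactly the kind of thing the proposed ban covers.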
> on ig, im happy to see my friends' posts, but
Yes, but how would that work on HN? You see no stories until you start friending people? How would you discover people if recommendation engines weren't allowed?
i'd say it's less predatory for all users to have the same algorithm. maybe on HN, the userbase is small enough, and the articles generally focused enough, that it'd be less impactful were the algorithm somewhat divergent per user. but on other platforms, rabbit holes appear very quickly, and very inorganically. to be plain, i've liked a number of pro-palestine posts on instagram, and started getting very anti-semitic reels until i hit "not interested" a certain number of times. the algorithm is opaque, but also stupid, and motivated to aggravate me into commenting, scrolling more, etc., to view ads. i don't know if i have a way to categorize HN into "good" and ig/X/... into "bad", to be honest.
for what it's worth, discord doesn't really have a user algorithm to get people into certain servers, and yet people are readily radicalized on discord (especially to the far-right, in my experience), but obviously the way people interact on discord is different from other social media.
Imagine you have a caching library that handles DB fallback. A cache entry that should be there but goes missing is arguably an issue.
Should it throw an exception for that to let you know, or should it gracefully fall back so your service stays alive? The middle ground is leaving a log and chugging along; your proposal throws that out the window.
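A sketch of that middle ground, assuming made-up stand-ins (cache, fetch_from_db) rather than any particular library's API:

    import logging

    logger = logging.getLogger("cache")

    def get_user(user_id, cache, fetch_from_db):
        value = cache.get(user_id)
        if value is None:
            # The entry should have been warm: record the anomaly,
            # but keep the service alive by falling back to the DB.
            logger.warning("unexpected cache miss for %r, using DB", user_id)
            value = fetch_from_db(user_id)
            cache.set(user_id, value)
        return value

You keep the signal (the warning in your logs) without turning a degraded cache into an outage.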
I'm uncertain that programming will be a major profession in 10 years.
Programming is more like math than creative writing. It's largely verifiable, and verifiable domains are where RL has repeatedly been shown to eventually achieve significantly better-than-human performance.
Our saving grace, for now, is that it's not entirely verifiable because things like architectural taste are hard to put into a test. But I would not bet against it.
There are two types of broken clocks. One is always suspicious, the other thinks nothing ever happens. One is more often right than the other, but both are equally broken.
There's a world of difference between "nothing happens with 100% probability" and "nothing happens with 98% probability", even though they can look like the same thing when talking casually.
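A quick back-of-the-envelope in Python to make that concrete: a 2% chance per independent opportunity compounds fast.

    p_nothing_once = 0.98  # "nothing happens" on any single occasion
    for n in (1, 10, 50, 100):
        print(n, round(1 - p_nothing_once ** n, 2))
    # 1: 0.02, 10: 0.18, 50: 0.64, 100: 0.87 -- at a true 100% it stays 0 forever
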
The arguments against seemed to show the weakness of frequentist reasoning in practical deduction.
It certainly could have been a domestic case where the police already knew but weren't communicating it. Otherwise, though, most suggestions didn't match the statistics of the specific demographic/locations well enough to compete with the anomaly: someone totally AWOL, with good luck, nothing to lose, and a connection to top academia, who wasn't dead or in jail.
Second this but for the chat subscription. Whatever they did with 5.2 compared to 5.0 in ChatGPT increased the test-time compute and the quality shows. If only they would allow more tokens to be submitted in one prompt (it's currently capped at 46k for Plus). I don't touch Gemini 3.0 Pro now (am also subbed there) unless I need the context length.
> I don't see this as a big deal other than my fear of west china invading china (taiwan! :) ).
Isn't that "other than" clause a big deal, though? I've read a survey and a number of articles from defense and foreign policy types, and the general feeling is there's a ~25% chance that China will invade Taiwan this decade. That's really damn big. If there's rollback in Taiwan then the first island chain could plausibly fall, or if not you will surely see Japan and maybe South Korea nuclearize. Why must we keep assuming the best with these security calculations instead of believing someone when they keep saying what they're going to do?
> The political will in PRC to "kill other Chinese" is zero.
That counts for nothing; these narratives are built on sand. Russians also saw Ukrainians as "brothers", as did South/North Koreans before the war, among countless other examples.
Please spare us. China invaded Vietnam to protect Pol Pot while he was mass killing millions of innocent civilians. They have territorial disputes with over 10 countries, which they've been unable to decisively act on because those neighbors either have nukes (India) or are protected by a more powerful country (US). Not because their government is some benevolent entity. They're basically an authoritarian dictatorship that's kind of cornered at the moment (like Saddam after the Gulf War) but would kill a bunch of people and expand if the US wasn't around.
China has already resolved a lot of its border disputes. The disputes with Kazakhstan, Kyrgyzstan, Laos, Mongolia, Nepal, North Korea, Russia, Vietnam, and Tajikistan have all been resolved.
I have sent the same prompt to GPT-5.2 Thinking and Gemini 3.0 Pro many times because I subscribe to both.
GPT-5.2 Thinking (with extended thinking selected) is significantly better in my testing on software problems with 40k context.
I attribute this to thinking time, with GPT-5.2 Thinking I can coax 5 minutes+ of thinking time but with Gemini 3.0 Pro it only gives me about 30 seconds.
The main problem with the Plus sub in ChatGPT is you can't send more than 46k tokens in a single prompt, and attaching files doesn't help either, because the VM blocks the model from accessing the attachments if there are already ~46k tokens in the context.