
It would be nice if health insurance were, well, insurance, and not some bastard mix of cost sharing and collective bargaining. The closest you get to "catastrophic only" insurance is an Obamacare Bronze plan and/or a high-deductible plan with an HSA. Same service, same networks, but you pay lower premiums and thus keep what you don't spend.

Health catastrophes are more likely than you may think, so I would suggest an HDHP+HSA at the very least. It's very difficult to self-insure against a cancer diagnosis that may blow through a million dollars in a year.

I'm a fairly high-net-worth individual with a high-deductible plan. Setting aside the deductible amount in savings (often tax-free, with the HSA) and keeping it every year you have good health is OP.


HDHP is good advice, but it doesn't save you anything on the prices cited above. My HDHP Bronze plan is over $2k/month for 3 people.

Not to mention, this paradigm completely fails for almost anyone with an income that isn't above the 50th percentile.

> this paradigm completely fails for almost anyone with an income that isn't above the 50th percentile

I'm in Wyoming, and our CHIP threshold is under 200% of the poverty line. That's $53,300 for a family of 3 [1]. Median household income, nationally, is $84k [2]. In Wyoming, it's $75k [3].

That's a gap. But it's a workable one.

[1] https://health.wyo.gov/healthcarefin/chip/doesmychildqualify...

[2] https://fred.stlouisfed.org/series/MEHOINUSA672N

[3] https://usafacts.org/answers/what-is-the-income-of-a-us-hous...


I don't know, I think there's some benefit to health insurance being more than insurance.

I think there are public-health benefits to subsidizing preventative/routine care, since:

1. People are dumb and will decline to pay the $100-$300 it takes to find out whether something needs treating, even if they can afford it.

2. It's just kind of inhumane to make people struggling on the edge actually do the math on whether they should pay sticker price to get, e.g., an ingrown toenail treated, or just wait and hope it doesn't get infected, costing them vastly more or losing them a toe. And that's even if the probabilistic and cost information were readily available.

3. Even if we assume a perfectly informed and rational populace, rational individual decisions aren't the same as rational societal decisions. For example, a lot of people probably shouldn't pay $1,000 for a given vaccine: their risk of infection is pretty low as long as enough other people are vaccinated, and pretty high if enough others are not, whether or not they themselves are vaccinated. Across a society, however, paying ~$1,000 per head to get everyone vaccinated might be worth it to reach the break point where we go from 250 million lost workdays and 1 million deaths to 1 million lost workdays and 1 thousand deaths. And then, if you're making 300 million vaccines instead of 500 thousand, you can probably get the price down to $500, maybe less. (A toy version of this arithmetic is sketched below.)
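To make the individual-vs-societal logic concrete, here's a toy Python sketch of the free-rider arithmetic. Every number in it is invented for illustration; it's not real epidemiology:

    # Toy model of individual vs. societal vaccine economics.
    # All parameters are invented for illustration.
    VACCINE_COST = 1_000        # per head, at low production volume
    COST_OF_INFECTION = 20_000  # lost work + treatment + mortality risk, priced in

    def expected_loss(p_infection: float, vaccinated: bool) -> float:
        """Expected cost to one person, given their infection risk."""
        return (VACCINE_COST if vaccinated else 0.0) + p_infection * COST_OF_INFECTION

    # With herd immunity, an unvaccinated free rider faces low risk...
    print(expected_loss(0.01, vaccinated=False))  # 200.0: skipping "wins"
    # ...but if everyone reasons that way, the risk is high for everyone:
    print(expected_loss(0.40, vaccinated=False))  # 8000.0
    print(expected_loss(0.02, vaccinated=True))   # 1400.0: vaccinating "wins"

The individually rational choice flips depending on what everyone else does, which is exactly why the societal decision can't be left to per-person math.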

Maybe these things shouldn't be a function of health insurance. Maybe we should just directly subsidize the specific care we want to be widely available. But a lot of other countries seem to have decided it makes sense to gather public-health expenditure and cost-sharing into one umbrella also called "insurance," so I'm not convinced it makes so little sense for us.


Back in 2009, I remember reading about how a dead salmon apparently shows brain activity in fMRI when proper statistical methods aren't applied. fMRI studies are frequently invoked unscientifically and out of context.

https://www.wired.com/2009/09/fmrisalmon/


I think technically there's a statistical correction you apply across the voxels (for multiple comparisons) to avoid this. But yeah... most findings from fMRI are considered hypotheses until some other modality, e.g. electrical recordings, confirms them.

E.g., the well-regarded studies, like Kanwisher's work on the visual processing areas, have follow-up studies on primates and surgical volunteers with actual electrical activity correlating with visual stimuli.
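For the curious, the correction in question is for multiple comparisons: a scan tests on the order of 100,000 voxels at once, so at p < 0.05 you'd expect thousands of "active" voxels from noise alone, which is what the salmon study demonstrated. A minimal numpy sketch of the naive threshold versus two standard corrections (illustrative only, not a real fMRI pipeline):

    import numpy as np

    # Simulate p-values for 100,000 voxels with NO real signal anywhere.
    rng = np.random.default_rng(0)
    p = np.sort(rng.uniform(size=100_000))
    alpha = 0.05

    naive = np.sum(p < alpha)                # ~5,000 false positives
    bonferroni = np.sum(p < alpha / p.size)  # controls any false positive at all

    # Benjamini-Hochberg FDR: largest k with p_(k) <= (k/m) * alpha.
    thresholds = alpha * np.arange(1, p.size + 1) / p.size
    passing = np.nonzero(p <= thresholds)[0]
    fdr = passing[-1] + 1 if passing.size else 0

    print(naive, bonferroni, fdr)            # e.g. ~5000, 0, 0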


This argument hinges rather strongly on whether or not AI is going to create a broad, durable, and lasting unemployment effect.

Tractors did not cause this phenomenon because Jevons paradox kicked in and induced demand rendered the problem moot, or because demand eventually exceeded what mere tractors were capable of doing for agricultural productivity.

The same can probably be said for contemporary AI, but it's tough to tell right now. There are some scant indications we've scaled LLMs as far as they can go without another fundamental discovery similar to the attention paper in 2017. GPT-5 was underwhelming, and each new Claude Opus is an incremental improvement at best, still unable to execute an entire business idea from a single prompt. If we don't continue to see large leaps in capability like those circa 2021-2022, then it can be argued that Jevons paradox will kick in here and, at best, LLMs will be a productivity multiplier for already experienced white-collar workers, not a replacement for them.

All this being said, technological unemployment is not something that will be sudden or obvious, nor will human innovation always stay within the reach of Jevons paradox, and I think policymakers need to seriously entertain taboo solutions for it sooner or later, such as a WPA-style infrastructure project or a basic income.


> technological unemployment is not something that will be sudden or obvious

I already have friends experiencing technological unemployment. Programmers suddenly need backup plans. Several designers I know are changing careers. Not to mention, the voiceover-artist profession will probably cease to exist beyond this last batch of known voices. Writer, editor: these were dependable careers for friends, once. A friend travelled the world and did freelance copyediting for large clients.

ChatGPT was released just three years ago.


People keep trying to tie these two things together, forgetting that ZIRP (zero interest-rate policy) also ended 3 years ago, and that, combined with the end of the COVID-era employer credits, is when the layoffs really began. I won't say LLMs are having no impact at all on employment, but it's not to the degree where the job pool has dried up. Companies were encouraged to over-hire for years, and now that the free money is gone, they're acting logically. I believe if ZIRP came back we'd see workforces expand again, and AI would just be seen as another useful tool.

The mishandling of how they rewrote Section 174 of the tax code also caused a lot of developer layoffs.

That's only in the US, though; ZIRP and the redundancies have been worldwide.

ZIRP, IRS Section 174, and irrationally exuberant over-hiring caused the first few rounds of layoffs.

The layoffs you see now are due to offshoring disguised as AI taking over. Google, Amazon, and even Hollywood are getting in on the offshoring craze.


> Programmers suddenly need backup plans.

Yup, Claude Opus 4.5 + Claude Code feels like it's teetering right on the edge of Jevons paradox. It can't work alone, and it needs human design and code review, if only to ensure it understands the problem and produces maintainable code. But it can build very credible drafts of entire features based on a couple of hours of planning; then I can spend a day reading closely and tweaking for quality. And the code? It's professional work, and I've worked with contractors who did a lot worse.

So right now? Opus 4.5 feels like an enormous productivity booster for existing developers (which may indirectly create unemployment or increase the demand for software enough to create jobs), but it can't work on large projects on an ongoing basis without a knowledgeable human. So it's more like a tractor than anything else: It might cause programmer unemployment, but eh, life happens.

But I can increasingly see that it would only take about one more breakthrough, and next-gen AI models might make enormous categories of human intellectual labor about as obsolete as the buggy whip. If you could get a Stanford grad for a couple of dollars an hour, what would the humans actually do? (Manual labor will be replaced more slowly. Rodney Brooks of the MIT AI Lab had a long article recently on the state of robotics, and it sounds like robots are still heavily handicapped by inadequate hardware: https://rodneybrooks.com/why-todays-humanoids-wont-learn-dex... )

Jevons paradox and comparative advantage won't protect you forever if you effectively create a "competitor species" with better price-performance across the board. That's what happened to the chimps and Homo neanderthalensis. And they didn't exactly see a lot of economic benefits from the rise of Homo sapiens, you know?


In my experience the code quickly becomes less than professional once the human stops monitoring what's going on.

"Inadequate hardware" is a truly ridiculous myth. The universal robot problem was, and is, and always will be an AI problem.

Just take one long look at the kind of utter garbage the human mind has to work with. It's a frame that, without a hideous amount of wetware doing data processing, can't even keep track of its own limbs, because proprioception is made of wet meat noise and integration error. Smartphones in 2010 shipped with better IMUs, and today's smartphones ship with better cameras.

Modern robot frames just have a different set of tradeoffs from the human body. They're well into "good enough" territory overall. But we have yet to make a general-purpose AI that can do "universal robot" things. We can't even do it in a sim with perfect sensors and actuators.


Read Brooks' argument in detail, if you haven't. He has spent decades getting robots to play nicely in human environments, and he gets invited to an enormous number of modern robotics demonstrations.

His hardware argument is primarily sensory. Specifically, current generation robots, no matter how clever they might be, have a physical sensorium that's incredibly impoverished, about on par with a human with severe frostbite. Even if you try to use humans as teleoperators, it's incredibly awkward and frustrating, and they have to massively over-rely on vision. And fine-detail manual dexterity is hopeless. When you can see someone teleoperate a robot and knit a patterned hat, or even detach two stuck Lego bricks, then robots will have the sensors needed for human-level dexterity.


I did read it, and I found it so lacking that it baffles me to see people actually believe it to be a well-crafted argument.

Again: we can't even make a universal robot work in a sim with perfect sensor streams! If the issue were "universal robots work fine in sims, suffer in the real world", then his argument would have a leg to stand on. As is? It's a "robot AI caught lacking" problem, and ignoring the elephant in the room in favor of nitpicking at hardware isn't doing anyone a favor.

It's not like we don't know how to make sensors. Wrist-mounted cameras cover a multitude of sins if your AI knows how to leverage them (they give you a data stream about as rich as anything a human gets from the skin), and every single motor in a robot is a force-feedback sensor, giving it a rudimentary sense of touch.

Nothing stops you from getting more of that with dedicated piezos if you want better "touchy-feely" capabilities. But do you want to? We are nowhere near being limited by "robot skin isn't good enough". We are at "if we made a perfect replica of a human hand for a robot to work with, it wouldn't let us do anything we can't already do". The bottleneck lies elsewhere.
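(To unpack the "every motor is a force sensor" point: in a current-controlled motor, output torque is roughly proportional to winding current, so the gap between the current your dynamics model predicts and the current you actually measure is a crude estimate of external contact force. A minimal sketch, with made-up constants:)

    # Current-based contact sensing: a minimal sketch.
    # Constants are invented; real motor drivers report measured
    # current through their own vendor-specific APIs.
    TORQUE_CONSTANT = 0.12  # N*m per amp (hypothetical motor)
    GEAR_RATIO = 50.0

    def external_torque(predicted_current_a: float, measured_current_a: float) -> float:
        """Estimate torque applied to the joint from outside (N*m).

        If the joint draws more current than the dynamics model predicts
        for the commanded motion, something is pushing back on the link."""
        excess = measured_current_a - predicted_current_a
        return excess * TORQUE_CONSTANT * GEAR_RATIO

    # e.g. 0.3 A of unexpected current ~ 1.8 N*m of contact torque
    print(external_torque(1.0, 1.3))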


The refrigerator put paid to the shipping-ice-from-the-Arctic-Circle industry quickly as well. The main shock is for the people who write the stuff we read, as they never expected to be in a profession that could be automated away. Lots and lots of work has been automated away before, but we never heard those workers' voices.

I think it's too early for AI to have impacted software work at a systemic level. There are various reasons the market is crap right now, like how you're (perhaps unknowingly) competing with cheap foreign labor in your own metro centers for tech work.

AI is just the other pincer that will finish the kill shot.


> This argument hinges rather strongly on whether or not AI is going to create a broad, durable, and lasting unemployment effect.

I think GP's argument makes a pretty strong case that it won't, even if AI somehow successfully automates 99% of all currently existing tasks. We automated away 99% of jobs once before, during the agricultural revolution, and it didn't result in "a broad, durable, and lasting unemployment effect" then. Quite the opposite, in fact.

Maybe if AI actually automates 100% of everything then we'll need to think about this more. But that seems unlikely to happen anytime in the foreseeable future given the current trajectory of the technology. (Even 50% seems unlikely.)


> The same can probably be said for contemporary AI, but it's tough to tell right now

The same can't even be said for contemporary AI, because lots of the jobs it's supposedly going to replace are theoretical or hype. Self-driving cars should've been here years ago, but they haven't happened, because AI is extremely hard to improve upon once it gets to a certain level of efficacy.

The question is: should we be discussing this stuff when AI hasn't started taking all those jobs yet?


I think it's fine to discuss solutions to hypothetical future problems as long as it's clear that these are hypothetical future problems you're talking about, not present reality.

In many of these discussions that line seems to get blurred and I start to get the impression people are using the specter of a vague, poorly understood hypothetical future problem to argue for concrete societal changes now.


>The same can probably be said for contemporary AI, but it's tough to tell right now. There are some scant indications we've scaled LLMs as far as they can go without another fundamental discovery similar to the attention paper in 2017. GPT-5 was underwhelming, and each new Claude Opus is an incremental improvement at best, still unable to execute an entire business idea from a single prompt. If we don't continue to see large leaps in capability like those circa 2021-2022, then it can be argued that Jevons paradox will kick in here and, at best, LLMs will be a productivity multiplier for already experienced white-collar workers, not a replacement for them.

The NBA has an incredibly high demand for 14-foot-tall basketball players, but none have shown up to apply. Similarly, if this causes our economy to increase demand for people who can "execute an entire business idea from a single prompt", it does not mean unemployment can be alleviated by moving all the jobless into roles like that.

We don't need science-fiction AI that puts everyone out of work for it to be ruinous. We only need half-assed AI good enough that employers don't want to pay a burger flipper to flip burgers anymore, and it'll all go to hell.


When most of the human population were farmers, should we have taxed the advances in agriculture that destroyed everybody's jobs?


> each new Claude Opus is an incremental improvement at best, still unable to execute an entire business idea from a single prompt.

If your way of evaluating the progress of AI is a binary one, then you'll see no progress at all until suddenly it passes that bar.

But seeing that we do have incremental improvements on essentially all evals (and in my own experience), we should be planning for it now, even if it takes another decade. Even if it does require an entirely fundamental breakthrough like the attention paper, given the number of researchers working on it and the capital devoted to it, I wouldn't put any money against such a breakthrough arriving before long.


Basic income doesn’t do anything. We already have food stamps and so on. The largest sector of US federal spending is health and social welfare. We’d have to end pretty much all those programs to run a minuscule basic income.

> We’d have to end pretty much all those programs to run a minuscule basic income

Isn't ending all those programs one of the core ideas of universal basic income? Instead of having a huge bureaucracy administering targeted social welfare, you cut all the overhead and just pay everyone enough to exist, regardless of whether they actually need it. It'd still be more expensive, but giving people something dependable to fall back on would hopefully increase innovation and entrepreneurship, offsetting some of the costs.


Okay, so let's divide the US federal budget by the number of people. That's about $21k per person. Now what happens to the guy who needs dialysis? It costs $60k. Right now the federal government pays. Now it's given him a third of the cost back. He just dies?
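(The per-person figure is roughly right as a back-of-envelope calculation, using approximate recent numbers:)

    # Back-of-envelope check on the "~$21k per person" figure.
    # Both inputs are rough, 2024-ish approximations.
    federal_outlays = 6.8e12  # ~$6.8 trillion in annual federal spending
    us_population = 334e6     # ~334 million people

    print(f"${federal_outlays / us_population:,.0f} per person")  # ~$20,359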

That's a matter of where you get your taxes from. Plenty of corporations can afford to pay a fairer share. And studies on basic income have so far shown it to be effective.

> Plenty of corporations can afford to pay a fairer share

Can we stop pretending with the word "fair"? If you want to squeeze out more money, then you do it by force. It's not "fair". It's just "we can do this".


If everything's automated then you don't need taxes to pay people.

Let me know when we live in The Culture, but I’ve got a feeling fully automated luxury gay space communism is a long ways off

Then what's the problem? AI is a problem (apparently) if everything is automated. Otherwise people have jobs and carry on as before.

Imagine a society that is halfway to that. So, say, there are only enough jobs for half of the people, but the rest still want to eat.

Studies on basic income have shown that it's harmful to the people who receive it.

They report no improvement on any measured outcome: not lower stress, not more education, not better health. They work a bit less, but that doesn't help them or their kids.

Over the long term it harms them, because their productive skills, values, and emotional capacities atrophy from lack of use.


> Studies on basic income have shown that it's harmful to the people who receive it.

That's extremely interesting, can you link such studies?


This podcast covers a bunch of it: https://www.youtube.com/watch?v=S5nj3DLvT64

It's one of those things that can be tricky to research because almost all the researchers and journalists on the topic very much don't want to see this conclusion. So there's a tremendous amount of misrepresentation and wishful reasoning about how to interpret the data. The truth comes out from actually reading the data, not researcher or journalist summaries.


"Final verdict on Finland's basic income trial: More happiness but little employment effect"

https://yle.fi/a/3-11337944

https://www.helsinki.fi/en/news/fair-society/universal-basic...

So basic income caused more happiness and less stress. But those are not profitable things, so: no basic income in Finland.


What’s the alternative, if AI does turn out to be able to replace large swathes of the workforce? Just kill everyone?

You could ban it and then turn all existing employment into a makework jobs program, but this doesn’t seem sustainable: work you know is pointless is just as psychically corrosive, and in any event companies will just leave for less-regulated shores where AI is allowed.


What studies are those?

>Over the long term it harms them

Yes, but not for the reasons you state. It harms them because we as a society have zero desire to effectively combat inflation, which negates any benefit we can give people who receive the basic income.

The powers-that-be don't take action to make sure the people who get basic income can actually use it to improve their lives. Food prices rapidly inflate, education costs skyrocket, medical costs increase exponentially almost overnight.

Much like how the government backstopping student loans caused university costs to jump, promising to give people a basic income while not addressing the root causes of inequality and wealth disparity just makes things worse.

If you want basic income to truly work, you have to engage in some short-term activities that are inherently un-capitalistic, although, if done correctly, they actually improve capitalism as a whole for society: price controls and freezes, slashing executive pay, increasing taxes on the wealthiest, etc.


What's the alternative? Kill off all the humans replaced by AI who are unable to do something else for a living? It's sad enough that there are food stamps, given the amount of food that ends up in a dumpster on a daily basis. Humans come first, not machinery.

Nobody needs to kill anyone; people will just stop having kids, which is what's happening.

What about the people already alive? If you continuously replace them with AI, you need to support them in case of their inability to provide for themselves. I'm afraid the social safety nets in place worldwide aren't built to withstand this kind of unemployment.

They'll have to adapt, like every other generation has had to.

My grandmother was born in 1924 and died in 2019. Please appreciate how much change she had to adapt to over that period.


Your grandma had plenty of opportunities in the post-war era. During her time there was always a need for human workers. While I don't think AI can actually replace anyone reliably, I can still see how executives buy into this promise and try it. This is a unique situation humanity has never been confronted with; even industrialization required a lot of human work. If all white-collar jobs went away, there would be a huge imbalance between available workers and available work. Simply adapting to this isn't a thing, given that monopolies have killed competition and it's no longer feasible for your everyday Joe to break into markets. Kudos to your grandma for making it that long, but it's simply not a comparable situation.

Survivorship bias.

What you're not counting is all the millions of people who died because they couldn't actually adapt to the new world.

Which is fine, but they didn't need to be killed; they just became irrelevant and went away.


So you are the type of person who actively contributes to the world being as shit as it is. Good to know. Your disregard for the weak disgusts me. Have a good evening.

So what are you going to do about it? You should probably do something then

And when did work done by humans stop existing between 1924 and 1990? Because that's the type of change we are talking about.

Well, considering that she had a bunch of secretaries doing her typing when she was a bank manager, then transitioned to a world where there were no typists anymore, that was a pretty explicit change from her perspective.

She never learned how to type on a keyboard, so you do the math.


Well, the math is that the number of jobs done by humans in that period of time stayed above zero.

The math is: her job disappeared, so she had to retire to a low-income housing unit funded by HUD in Houston.

The people who own the magical AIs won't decide that they want to keep us all as pets; we won't have leverage to demand that they keep us all as pets; and they will have the resources to make sure they no longer need to keep us as pets. Shouting "you should keep humans as pets" is unlikely to change this fundamental equation.

>The largest sector of US federal spending is health and social welfare.

On old people who can't or don't work.


They will likely die first when society collapses.

I think you're basing your view of AI only on modern 2025 LLMs.

If there is an order-of-magnitude increase in compute (TPUs, NPUs, etc.) over the next 3-5 years, then even marginal increases in LLM usability will take white-collar jobs.

If there is an exponential increase in power (fusion) and compute (quantum), combined with improvements in robotics, then you're in the territory where humans can be entirely replaced in all industries (blue-collar, white-collar, doctors, lawyers, etc.).


OTOH, if there is a worldwide catastrophic economic collapse due to climate change, none of these things will get built.

In French we say, "With 'ifs' you can put Paris in a bottle."


Where does all the power come from? Compute increases have to have a sustainable power source, and we don't have that.

Well, Mamdani wants to make transit free. Car taxes can probably help a lot to pay for that.

Partial correction: he wants to make buses free, but not subways.

There are modern VLIW architectures; I think Groq uses one. The lessons from history about what works and what doesn't are worth learning.

VLIW works for workloads where the compiler can somewhat accurately predict what will be resident in cache. It's used everywhere in DSPs, was common in GPUs for a while, and is present in lots of niche accelerators. It's a dead end for situations where cache residency is not predictable, like any kind of multitenant general-purpose workload.

IA64 was EPIC, which was itself a "lessons learned" VLIW design: it had things like stop bits to explicitly demarcate dependency boundaries, so instructions from multiple words could be combined on future hardware with more parallelism, plus speculative execution and speculative loads. Which, well, see the article on how the speculative loads turned out to be a mixed blessing.

https://en.wikipedia.org/wiki/Explicitly_parallel_instructio...


A more everyday example is the Hexagon DSP ISA in Qualcomm chips. Four-wide VLIW + SMT.

The new TI C2000 F29 series of microcontrollers is VLIW.

I meant that narrowly, only about IA64. There is surely some lessons-learned value.

Can't help but wonder if this was motivated in part by people feeding papers into LLMs for summary, search, or review. PDF is awful for LLMs. You're effectively pigeonholed into using (PAYING for) Adobe's proprietary app and models, which barely hold a candle to Gemini or Claude. There are PDF-to-text converters, but they often munge up the formatting.
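For a sense of what that munging looks like, here's a minimal extraction attempt with the open-source pypdf library ("paper.pdf" is a stand-in filename). It runs fine, but two-column layouts, math, and tables tend to come out scrambled, because PDF stores positioned glyphs rather than a logical reading order:

    # Minimal PDF-to-text attempt with pypdf (pip install pypdf).
    from pypdf import PdfReader

    reader = PdfReader("paper.pdf")  # stand-in filename
    for i, page in enumerate(reader.pages):
        print(f"--- page {i + 1} ---")
        print(page.extract_text() or "")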


Not sure when you last tried, but Gemini, Claude, and ChatGPT have all supported pretty effective PDF input for quite a while.


I've always suspected this mentality had a lot to do with why Peter Thiel is like that. Growing up in the wreckage of the AIDS crisis and thinking to yourself, "I don't have to go down with them. I don't ever have to be like them. I'm still here, because I'm smarter, I'm better than them." I'd never admit any of this publicly, but I have a lot of similar thoughts as a trans woman who slipped through all the cracks and ended up wealthy in my thirties. Poverty is the tip of the discrimination spear and you really could buy your way out of it all.


Well, at the very least he confirmed that Regin continues to circulate.


He hasn't actually confirmed whether the image he's processing is recent or a test image, and by "I found", he may just mean he was able to find the thing that was already known to be there. The Twitter thread has some people asking for clarification, and none has been received yet.


Why do I get the feeling both of these are diagnoses of exclusion, and that there isn't particularly any hard evidence that radiation is causing these uncommanded manoeuvres? How unlikely is it for Airbus models specifically to be vulnerable to radiation-induced data corruption twice in almost twenty years? Are there similar incidents recorded for Boeing jets?


Raises the question of how it escaped GM's clutches. All EV1s were leased with no buyout option, later recalled, and most were crushed.


TFA says this is not the case; it mentions universities as among those that received cars at the end, instead of the crusher.


Once it goes to auction, it has a clean title.

