Hacker News | msla's comments

> If you want to send the net neutrality to /dev/null, please, head on.

What?


> While I expect the improvements to slow down and stop, due to the money running out

This will certainly happen with the models that use weird proprietary licenses, which people only contribute to if they're being paid, but open ones can continue beyond that point.


The hyperscalers are buying enough compute and energy to distort the market, and enough money is being thrown at them to distort the US economy.

Open models, even if 100% of internet users joined an AI@HOME kind of project, don't have enough resources to do that.


You are right, machine learning models usually improve with more data and more parameters. Open models will never have enough resources to reach comparable quality.

However, this technology is not magic; it is still just statistics, and during inference it looks very much like your run-of-the-mill curve fitting (just in a billion-parameter space). An inherent problem in regression analysis is that at some point you have too many parameters and you start fitting the random errors in your sample (overfitting). I think this puts an upper limit on the capabilities of LLMs, just as it does for the rest of our known statistical tools.
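The overfitting point above can be shown with a toy regression, not anything specific to LLMs: fit noisy samples of a straight line with polynomials of increasing degree, and past a point the extra parameters model the noise, so error on fresh data gets worse. A minimal sketch with NumPy (the data and degrees are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
y_train = 2 * x_train + rng.normal(0, 0.2, x_train.size)  # true signal is linear

x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test  # noise-free targets for evaluation

for degree in (1, 3, 15):
    # More coefficients = more parameters; degree 15 can chase the noise
    coeffs = np.polyfit(x_train, y_train, degree)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: test MSE {test_mse:.4f}")
```

The degree-1 fit tracks the underlying line; the degree-15 fit typically oscillates between training points and scores worse on the held-out grid.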

There is a way to prevent that though, you can reduce the number of parameters and train a specialized model. I would actually argue that this is the future of AI algorithms and LLMs are kind of a dead end with usefulness limited to entertainment (as a very expensive toy). And that current applications of LLMs will be replaced by specialized models with fewer parameters, and hence much much much cheaper to train and run. Specialized models predate LLMs and we have a good idea of how an open source model fares in that market.

And it turns out, open source specialized models have proven themselves quite nicely. In Go we have KataGo, one of the best engines available; similarly, in chess we have Stockfish.


What a useless piece of snark.


> Sixel support unfortunately came to terminals in 01988

Then why do I have it now? Time travel?


These days American exceptionalism generally comes from people believing America is exceptionally bad.

Of course you can. People sell what they don't have all the time.

Your cars are computerized death traps? Advertise their reliability.

Your OS sends endless information back home to Apple? Advertise its privacy features.

Your AI hallucinates? Advertise how useful it is for summarizing data.


"We don't care, we don't have to, we're Intel."

Plus, DEC managed to move all of its VAX users to Alpha through the simple expedient of no longer making VAXen, so I wonder if HP (which by that point had swallowed what used to be DEC) thought it could repeat that trick and sunset x86, which Intel has wanted to do for very nearly as long as the x86 has existed. See also: Intel i860

https://en.wikipedia.org/wiki/Intel_i860


The 8086 was a stop-gap solution until iAPX432 was ready.

The 80286 was a stop-gap solution until iAPX432 was ready.

The 80386 started as a stop-gap solution until iAPX432 was ready, until someone higher up finally decided to kill that one.


https://en.wikipedia.org/wiki/Intel_iAPX_432

I'd never heard of it myself, and reading that Wikipedia page it seems to have been a collection of every possible technology that didn't pan out in IC-language-OS codesign.

Meanwhile, in Britain a few years later in 1985, a small company and a dedicated engineer, Sophie Wilson, decided that what they needed was a RISC processor that was as plain and straightforward as possible ...


IA64 was EPIC, which was itself a "lessons learned" VLIW design: it had stop bits to explicitly demarcate dependency boundaries, so that instructions from multiple words could be combined on future hardware with more parallelism, and it had speculative execution and loads, which, well, see the article on how the speculative loads were a mixed blessing.
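The stop-bit idea can be illustrated with a small toy scheduler (everything here is invented for illustration, not real IA-64 encoding): walk a linear instruction stream, and whenever an instruction reads a register written earlier in the current group, close the group, which is the boundary EPIC marks explicitly with a stop bit instead of leaving dependency discovery to the hardware.

```python
def insert_stops(instrs):
    """instrs: list of (writes, reads) tuples of register names.

    Returns the stream split into parallel groups; each group boundary
    is where a stop bit would go in an EPIC-style encoding.
    """
    groups, current, written = [], [], set()
    for writes, reads in instrs:
        if written & set(reads):       # true dependency within this group
            groups.append(current)      # close the group (stop bit here)
            current, written = [], set()
        current.append((writes, reads))
        written |= set(writes)
    if current:
        groups.append(current)
    return groups

# r3 depends on r1, which the first group writes, so a stop precedes it:
prog = [(("r1",), ("r0",)), (("r2",), ("r0",)), (("r3",), ("r1",))]
print(insert_stops(prog))  # first two ops in one group, the dependent op alone
```

A real implementation would also handle write-after-write and write-after-read hazards, but the point is just that the dependency boundaries are computed (by the compiler) ahead of time.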

https://en.wikipedia.org/wiki/Explicitly_parallel_instructio...


In case someone hasn't heard:

https://en.wikipedia.org/wiki/Itanium

> In 2019, Intel announced that new orders for Itanium would be accepted until January 30, 2020, and shipments would cease by July 29, 2021.[1] This took place on schedule.[9]

