
We benchmark humans with these tests -- why would we not do that for AIs?

The implications for society? We better up our game.



> The implications for society? We better up our game.

If only the horses had worked harder, we would never have gotten cars and trains.


> We benchmark humans with these tests – why would we not do that for AIs?

Because the correlation between the trait of interest and what the tests actually measure may be radically different for systems whose architecture is very unlike a human's.

There’s an entire field devoted to this in human testing (psychometrics), and approximately zero work on it for AIs. Human tests are proxy measures of harder-to-directly-assess figures of merit, and they require significant calibration on humans to be valid even for humans. Blindly using them for anything else without appropriate recalibration is good for generating headlines, but not for measuring anything that matters. (Except, I guess, the impact of humans using these models to cheat on the human tests, which is not insignificant, but not generally what people trumpeting these measures focus on.)
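
To make the calibration point concrete, here is a toy simulation (every number below is an assumption, not real psychometric data): a test that is a valid proxy for a latent trait in one population can carry almost no signal in another population whose scores are driven by something else, such as training-set overlap.

    import random

    random.seed(0)

    def correlation(xs, ys):
        # Plain Pearson correlation, no dependencies needed.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # "Humans": the score tracks the latent trait plus noise.
    trait = [random.gauss(100, 15) for _ in range(10_000)]
    human_score = [t + random.gauss(0, 10) for t in trait]

    # "AIs" (toy assumption): the score is driven mostly by overlap
    # with the training data, not by the trait the test was built for.
    overlap = [random.random() for _ in trait]
    ai_score = [60 * o + 0.05 * t + random.gauss(0, 10)
                for o, t in zip(overlap, trait)]

    print(correlation(trait, human_score))  # ~0.83: valid proxy here
    print(correlation(trait, ai_score))     # ~0.04: same test, no signal

The identical instrument, applied to two differently structured test-takers, measures two different things. That is the recalibration problem in one screen of code.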


There is a lot of work on benchmarking for AI as well. This is where things like ResNet come from.

But the point of using these tests for AI is precisely the reason we give them to humans: we think we know what they measure. AI is not intended to be a computation engine or a number-crunching machine. It is intended to do things that historically required "human intelligence".

If there are better tests of human intelligence, I think that the AI community would be very interested in learning about them.

See: https://github.com/openai/evals
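
For a flavor of what such a harness does, here is a minimal exam-style scoring loop. This is a sketch only, not the actual API of the evals repo; ask_model is a hypothetical stand-in for whatever model client you have, and the sample item is made up.

    # Minimal exam-style eval loop (sketch, not the evals repo's API).
    samples = [
        {"question": "Which planet is the largest?",
         "choices": ["A) Mars", "B) Jupiter", "C) Venus"],
         "answer": "B"},
        # ... more items ...
    ]

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in: wire up your model client here.
        raise NotImplementedError

    def run_eval(items) -> float:
        correct = 0
        for item in items:
            prompt = (item["question"] + "\n"
                      + "\n".join(item["choices"])
                      + "\nAnswer with a single letter.")
            reply = ask_model(prompt).strip().upper()
            correct += reply.startswith(item["answer"])
        return correct / len(items)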


> The implications for society? We better up our game.

For how long can we better up our game? GPT-4 comes less than half a year after ChatGPT. What will come in 5 years? What will come in 50?


Progress is not linear. It comes in bursts and plateaus. We’ll have to wait and see.


Check on the curve for flight speed sometime, and see what you think of that, and what you would have thought of it during the initial era of powered flight.


Powered flight certainly progressed for decades before hitting a ceiling. At least 5 decades.

With GPT bots, the technology is only 6 years old. I can easily see it progressing for at least one decade.


Maybe a different analogy will make my point better. Compare rocket technology with jet engine technology. Both continued to progress across a vaguely comparable time period, but at no point was one a substitute for the other except in some highly specialized (mostly military-related) cases. It is very clear that language models are very good at something. But are they, to use the analogy, the rocket engine or the jet engine?


Exponential rise to limit (fine) or limitless exponential increase (worrying).


Without exponential increase in computing resources (which will reach physical limits fairly quickly), exponential increase in AI won’t last long.


I don't think this is a given. Over the past 2 decades, chess engines have improved more from software than hardware.


I doubt that that’s a sustained exponential growth. As far as I know, there is no power law that could explain it, and from a computational complexity theory point of view it doesn’t seem possible.


See https://www.lesswrong.com/posts/J6gktpSgYoyq5q3Au/benchmarki.... The short answer is that Elo grows roughly linearly with evaluation depth, but since the game tree is exponential in depth, linear Elo growth requires exponential compute. The main algorithmic improvements are things that let you shrink the branching factor, and as long as you can keep shrinking it, you keep getting exponential improvements. Stockfish 15 has a branching factor of roughly 1.6. Sure, the exponential growth won't last forever, but it has been surprisingly resilient for at least 30 years.
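
The arithmetic behind that, as a quick sketch (the node budget and the 35 and 6 figures are illustrative assumptions; the 1.6 is the Stockfish number above):

    import math

    budget = 10 ** 9  # assumed node budget per move

    # Roughly: raw legal-move chess, classic alpha-beta, SF15-level pruning.
    for b in (35, 6, 1.6):
        depth = math.log(budget) / math.log(b)  # solve b ** depth == budget
        print(f"branching factor {b:>4}: ~{depth:.0f} plies searchable")

Shrinking the branching factor from 35 to 1.6 turns the same billion nodes from roughly 6 plies of lookahead into roughly 44, which is where the sustained Elo growth comes from.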


It wouldn’t have been possible if there hadn’t been an exponential growth in computing resources over the past decades. That has already slowed down, and the prospects for the future are unclear. Regarding the branching factor, the improvements certainly must converge towards an asymptote.

The more general point is that you always end up with an S-curve instead of a limitless exponential growth as suggested by Kaibeezy. And with AI we simply don’t know how far off the inflection point is.
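
A quick sketch of why that inflection point is so hard to spot in advance (all parameters assumed): a logistic S-curve is numerically indistinguishable from a pure exponential until you are near the inflection, after which it flattens out.

    import math

    L, k, t0 = 100.0, 0.5, 20.0  # assumed capacity, growth rate, inflection

    def logistic(t):
        return L / (1 + math.exp(-k * (t - t0)))

    def exponential(t):
        return logistic(0) * math.exp(k * t)  # matched to the early regime

    for t in (0, 5, 10, 20, 30):
        print(t, round(exponential(t), 3), round(logistic(t), 3))
    # The two curves track each other until t approaches t0, then diverge.

Early measurements alone cannot tell you which curve you are on.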


Expecting progress to be linear is a fallacy in thinking.


Sometimes it's exponential. Sometimes it's sublinear.


Sometimes it's exponential over very short periods. The fallacy is in thinking that will continue.


We should take better care of the humans who are already obsolete or will soon become obsolete.

Because so far we are only good at criminalizing, incarcerating, or killing them.


Upping our game will probably mean an embedded interface with AI. Something like Neurolonk.


Not sure if that's an intentional misspelling, but I think I like Neurolonk more.


Eventually there will spring up a religious cult of AI devotees and they might as well pray to Neurolonk.


Lol, unintentional


I know it's pretty lowbrow on my part, but I was amused and laughed much more than I care to admit when I read NEUROLONK. Thanks for that!



