
We’re rapidly approaching problems (AP Calculus BC, etc.) that are of the same order of magnitude of difficulty as “design and implement a practical self-improving AI architecture”.

Endless glib comments in this thread. We don’t know when the above prompt leads to takeoff. It could be soon.



And funnily enough, with the AI community’s dedication to research publications being open access, it has all the content it needs to learn this capability.

“But how did skynet learn to build itself?”

“We showed it how.”


Since when was AP Calculus BC on the same order of magnitude as "design and implement a practical self-improving AI architecture"?


I’m assuming the range of intelligence spanning all the humans who can pass Calculus BC is narrow on the scale of all possible intelligences.

It’s a guess, of course. But the requisite concepts for getting Transformers working are not much broader than calculus and a bit of programming (see the sketch below).
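To make that concrete, here is a minimal sketch of scaled dot-product attention, the operation at the heart of a Transformer: a matrix multiply, a scaling by sqrt(d_k), and a softmax. Everything here (the function name, the toy shapes, the use of NumPy) is illustrative, not taken from any particular library.

  import numpy as np

  def scaled_dot_product_attention(Q, K, V):
      # softmax(Q K^T / sqrt(d_k)) V -- the core of a Transformer layer
      d_k = Q.shape[-1]
      scores = Q @ K.T / np.sqrt(d_k)               # similarity of each query to each key
      scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
      weights = np.exp(scores)
      weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1 (softmax)
      return weights @ V                            # attention-weighted sum of values

  # Toy example: 3 tokens with 4-dimensional embeddings
  rng = np.random.default_rng(0)
  Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
  print(scaled_dot_product_attention(Q, K, V))     # a (3, 4) array

The point of the sketch is that nothing in it goes beyond linear algebra, an exponential, and basic programming; the difficulty of the field lies elsewhere.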


Since when was "design and implement a practical self-improving AI architecture" on the same level as knowing "the requisite concepts for getting Transformers working"?


This is such garbage logic. The semantics of that comment are irrelevant. Creating and testing AI node structures is well within the same ballpark. Even if it weren't, the entire insinuation of your comment is that the creation of AI is a task too hard for AI, or for any AI we can create anytime soon -- a refutation of the feedback hypothesis. Well, that's completely wrong, on all levels.


Sorry, what is the "feedback hypothesis"? Also, despite my use of quotes, I'm not arguing about semantics.


We can't predict what is coming. I think it probably ends up making the experience of being a human worse, but I can't avert my eyes. Some amazing stuff has come, and will continue to come, from this direction of research.


I passed Calculus BC almost 20 years ago. All this time I could have been designing and implementing a practical self-improving AI architecture? I must really be slacking.


In the broad space of all possible intelligences, those capable of passing calc BC and those capable of building a self-improving AI architecture might not be that far apart.


Hey, I'm very concerned about AI and AGI, and it is so refreshing to read your comments. Over the years I have worried about and warned people about AI, but there are astonishingly few people to be found who actually think something should be done, or even that anything is wrong. I believe that humanity stands a very good chance of saving itself through very simple measures. I believe, and I hope that you believe, that even if the best chance we had at saving ourselves was 1%, we should go ahead and at least try.

In light of all this, I would very much like to stay in contact with you. I've connected with one other HN user so far (jjlustig), and I hope to connect with more so that together we can effect political change around this important issue. I've formed a Twitter account to do this, @stop_AGI. Whether or not you choose to connect, please do reach out to your state and national legislators (if in the US) and convey your concern about AI. It will be more valuable than you know.


That's a pretty unfair comparison. We know the answers to the problems in AP Calculus BC, whereas we don't even yet know whether answers are possible for a self-improving AI, let alone what they are.


> Endless glib comments in this thread.

Either the comments are glib and preposterous, or they are reasonable and enlightening. I guess they are neither, but our narrow-mindedness makes it so?


A few hundred people on Metaculus are predicting weakly general AI to be first known around September 2027: https://www.metaculus.com/questions/3479/date-weakly-general...




