This may be philosophically one of the most important graphs regarding the current state of AI! It shows that large language models already DO have an internal model of confidence in their statements; all that is needed is to train the models to output that confidence alongside their answers, so that low-confidence statements (the likely hallucinations) can be flagged.
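
For anyone who wants to poke at this themselves, here is a minimal sketch, assuming a HuggingFace causal LM (gpt2 as a stand-in; any causal LM works). It reads the probability the model assigned to each token it generated, which is a crude proxy for the kind of internal confidence signal the graph is about: a low probability on a factual span is a hallucination warning sign.

    # Minimal sketch: per-token probabilities as a rough confidence proxy.
    # Assumes a HuggingFace causal LM; "gpt2" is just a stand-in model name.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    prompt = "The capital of Australia is"
    inputs = tok(prompt, return_tensors="pt")

    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=5,
            do_sample=False,
            output_scores=True,          # keep the logits for each step
            return_dict_in_generate=True,
        )

    # out.scores holds one logit tensor per generated token; take the
    # softmax probability of the token the model actually emitted.
    new_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
    for step_logits, token_id in zip(out.scores, new_tokens):
        prob = torch.softmax(step_logits[0], dim=-1)[token_id].item()
        print(f"{tok.decode(token_id)!r}: p = {prob:.3f}")

Token probability is only one signal (it conflates uncertainty about facts with uncertainty about phrasing), but it is the cheapest place to start before training the model to report confidence explicitly.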