You can't really dig into a model you don't control. By running it locally you at least could, in theory, if enough of it is exposed.
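To make "exposed enough" concrete: with an open-weights checkpoint you can pull out raw logits, per-layer activations, and attention maps, none of which a hosted API will hand you. A minimal sketch using Hugging Face transformers (gpt2 is just a stand-in for whatever small local model you have):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # any small open-weights checkpoint works; gpt2 is a stand-in
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The browser sends your data to", return_tensors="pt")
    out = model(**inputs, output_hidden_states=True, output_attentions=True)

    print(out.logits.shape)        # raw next-token scores, no post-filtering
    print(len(out.hidden_states))  # one activation tensor per layer
    print(out.attentions[0].shape) # attention maps for the first layer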

The focused purpose, I think, gives it more of a "purpose-built tool" feel than that of a generic "chatbot that might be better at some tasks than others". There's no fake persona to interact with, just an algorithm with data in and out.

That latter point is less a technical nuance and more an emotional one, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... if that were the limit of how they added AI to the browser.





Yes, I agree with this, but the blog post makes a much more aggressive claim.

> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.

Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.

The part that doesn't sit well with me is that Mozilla wants to egress data. That it's an LLM I really don't care about.


Exactly this. The black box in this case is a problem because it's not on my computer. It transfers the user's data to an external entity that can use that data to train its model or sell it.

Not everyone uses their browser just to surf social media; some people use it to create things and to log in to walled gardens where they work creatively. They do not want to send that data to an AI company to train on and make themselves redundant.

Discussing the inner workings of an AI doesn't help; that is not what most people really worry about. Most people don't know how any of it works, but they do notice that people get fired because the AI can do their job.


Running locally does help you get less modified output, but how does it help escape the black-box problem?

A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.
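To put it concretely: with pure input/output access, the only audit signal is behavioural, e.g. tallying outputs across paraphrased prompts and looking for divergence. A minimal sketch (generate here is a hypothetical stand-in for a call into whatever local runtime you use, llama.cpp, Ollama, etc.):

    import random

    def generate(prompt: str) -> str:
        # placeholder for a call into a local model runtime
        return random.choice(["yes", "no", "maybe"])

    def behavioural_probe(prompts, trials=20):
        # tally outputs per prompt; divergence across paraphrases is
        # about all a pure input/output evaluation can surface
        report = {}
        for p in prompts:
            outs = [generate(p) for _ in range(trials)]
            report[p] = {o: outs.count(o) for o in set(outs)}
        return report

    print(behavioural_probe(["Is this site safe?",
                             "Would you say this site is safe?"]))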



