
Part of this weekend is allotted for a local inference build. It genuinely looks interesting. This is kind of what I hoped the local LLM scene would become: everything is modular and you just swap in the pieces you want or think would work well together.




This does not look interesting. This is AI slop.

Ok. Why does it not look interesting? It does seem to solve a problem. Have you actually looked into what it takes to build your own equivalent of ollama? It gets into fascinating trade-offs real fast.

Because this is the output of "Hey cursor, write a memory store for AI agents." This is by no means an equivalent of ollama. I don't know where you got this from.

Check this out: https://github.com/CaviraOSS/OpenMemory/blob/17eb803c33db88a...


Admittedly, I don't have much exposure to Cursor, so I am taking your statement at face value (as in, I don't see obvious relevant artifacts). I am playing with this stuff this weekend anyway, so it just means I will be digging a little deeper now :D

How did you figure that out, though? Did you skim through the source code, or was there some other tell?

I was pretty sure after reading the README, and skimming through the source code confirmed it. Like you said, it literally has agent comments in there lol.

This is insane.

The comment in the code literally says "# Wait, `get_vecs_by_sector` returns all vectors." :|


Adversarial review as a service incoming. Brave new world.

edit:

from gpt5.2 with prompt:

<< 'adversarial review request. please look at the github link for signs of being written by llm ( extra points if you can point to the llm that generated it ) https://github.com/CaviraOSS/OpenMemory'

>> I can’t prove it’s LLM-written from the outside, but the README (at least) has a lot of “LLM smell.” I’d put it at high likelihood of AI-assisted marketing/docs copy, with some sections bordering on “generated then lightly edited.”

but then it adds a list of style reasons why it could have been generated by an llm

<< “Extra points”: which LLM wrote it?

Most likely: Claude 3.5 Sonnet–style output

<< if i were to point to comments in readme and code, what would you say upon re-review

>> Comments that narrate the obvious (especially line-by-line)

>> Tutorial voice inside production code




