This sparked a really fascinating discussion. I don't know if anyone will see this, but thanks everyone for sharing your thoughts :)
I understand your point - to an LLM there's no meaningful difference between one Turing-complete language and another. I'll concede that I don't have a counterargument, and perhaps it doesn't need to be Prolog - though my hunch is that LLMs tend to give better results when using purpose-built tools for a given type of problem.
The only loose end I want to address is the idea of "doing reasoning."
This isn't an AGI proposal (I was careful to say "good at writing Prolog"), just an augmentation that (as a user) I haven't yet seen applied in practice. But neither have I seen it convincingly dismissed.
The idea is that the LLM would act like an NLP parser that gradually populates a Prolog ontology, building a logic jail one brick at a time (a rough sketch of what I mean follows below).
The result would be a living, breathing knowledge base that constrains and informs the LLM's outputs.
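To make that concrete, here's a rough sketch (every fact, name, and sentence here is made up purely for illustration): the LLM parses a couple of sentences, asserts them as Prolog facts, and a rule then lets the knowledge base answer a question the raw text never states directly.

    % Hypothetical facts the LLM might assert after parsing
    % "Acme acquired Widgets Inc in 2021; Widgets Inc makes sprockets."
    acquired(acme, widgets_inc, 2021).
    makes(widgets_inc, sprockets).

    % A rule the ontology could use to constrain later answers:
    % a company "produces" anything made by a company it acquired.
    produces(Company, Product) :- makes(Company, Product).
    produces(Company, Product) :-
        acquired(Company, Sub, _),
        makes(Sub, Product).

    % ?- produces(acme, P).
    % P = sprockets.

The point is that the LLM never has to "remember" the sprockets connection - once the facts are in the knowledge base, the Prolog engine derives it, and the LLM can be checked against (or prompted with) that derivation.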
The punchline is that I don't even know any Prolog myself; I just think it's a neat idea.