Very nice! A key attraction of pure Prolog code is indeed that it can be evaluated in different ways to obtain better termination and performance characteristics, and to derive more semantic consequences of programs. The equation "Algorithm = Logic + Control" indicates this ability to provide and use different ways of control to evaluate the logical meaning of Prolog code.
These slides seem to focus primarily, and in fact exclusively, on an important control mechanism called SLDNF resolution (SLD resolution with negation as failure):

https://en.wikipedia.org/wiki/SLD_resolution

This is the standard execution method of Prolog.
There are other ways to execute Prolog code too, though, notably SLG resolution. SLG resolution is also called tabling, and it is provided, in different variations, by an increasing number of Prolog implementations. SLG resolution is a control strategy that terminates in many cases where SLDNF resolution does not.
For example, the Prolog predicate
a :- a.
terminates (and fails) with SLG resolution, but not with SLDNF resolution.
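To make the termination difference concrete, here is a toy propositional Horn-clause prover sketched in Python. This is only loop detection over ground atoms, not full SLG resolution, and the encoding (a rule as a `(head, [body atoms])` pair) is made up for illustration; but it shows the behavior described above on "a :- a.".

```python
def prove(goal, rules, active=frozenset()):
    """Prove a propositional atom against Horn rules of the form
    (head, [body atoms]), with detection of positive loops."""
    # Re-encountering a goal on the current call chain means a positive
    # loop; in the least Herbrand model such a branch contributes nothing,
    # so we can safely fail it instead of recursing forever.
    if goal in active:
        return False
    return any(all(prove(b, rules, active | {goal}) for b in body)
               for head, body in rules if head == goal)

# "a :- a." loops under naive depth-first (SLDNF-style) search,
# but with the loop check the query terminates and fails:
print(prove("a", [("a", ["a"])]))   # False
```

The same sketch still finds proofs that merely pass near a loop, e.g. `p :- q. p :- r. q :- p. r.` proves `p` via `r`.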
Note that Prolog is restricted to least Herbrand models. This is different from answer set programming (ASP), where you get all stable models. For example, the above program has two classical models, one where a is true and one where a is false, but only one least Herbrand model (the one where a is false).
A common criticism of Prolog is that it is "confined to depth first search", but that is not the case: you can reason about Prolog code in many different ways, and also execute it with iterative deepening, for example. This is in fact a major attraction of Prolog, not available in most other programming languages.
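Iterative deepening can be sketched in a few lines over the same kind of propositional Horn rules (a rule as a hypothetical `(head, [body atoms])` pair; this is an illustration, not how any real Prolog system implements it). A naive depth-first prover that always tries the clause "p :- p." before the fact "p." descends forever; bounding the proof depth and retrying with larger bounds restores completeness for goals with finite proofs.

```python
def prove_depth_limited(goal, rules, depth):
    """Depth-limited proof search over propositional Horn clauses."""
    if depth == 0:
        return False   # out of budget: this attempt fails; retry deeper
    return any(all(prove_depth_limited(b, rules, depth - 1) for b in body)
               for head, body in rules if head == goal)

def prove_id(goal, rules, max_depth=20):
    """Iterative deepening: repeat the search with growing depth bounds."""
    return any(prove_depth_limited(goal, rules, d)
               for d in range(1, max_depth + 1))

# The looping clause is listed first, yet the query still succeeds:
rules = [("p", ["p"]), ("p", [])]
print(prove_id("p", rules))   # True
```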
However, the more impure constructs you use, the more this flexibility shrinks. The ability to use different control mechanisms is a strong argument to use, as far as possible, the pure monotonic core of Prolog.
I am glad to see you respond here! It was one of your comments here on HN that brought me back to PROLOG, and your homepage was a great inspiration to me.
Does SLDNF really not terminate in cases that produce a cyclic answer, or is it the subsequent step that replaces instantiated variables with values that will not terminate? E.g. the step that prints frames? In the latter case, would an occurs check solve the problem?
How is tabling different from SLD with memoization? If this is too complicated to explain in passing, pointers to literature would be welcome!
I'm beyond flattered to hear this, thank you so much for writing this! Your paper is so nicely done that it definitely warrants an interesting discussion!
SLDNF resolution really does not terminate in many cases, even if no variables at all are involved. The "a :- a." example which I mentioned is such a case: speaking in terms of resolution, the resolvent of ¬a and (a ∨ ¬a) is again ¬a, and so it continues if we simply blindly try to derive ⊥ and do not check for saturation of derived clauses. This program is also NSTO (not subject to occurs check), so the occurs check does not solve the problem either, even though it is in itself a good idea to use.
Memoization in the sense of "remembering what has already been derived" does not improve termination either in this case, since nothing has been derived yet. You can easily implement a simple form of memoization within Prolog, but SLG resolution goes well beyond that and can improve not only performance but in fact even termination characteristics, as the above example shows.
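The difference can be seen in a toy Python sketch of plain memoization over propositional Horn rules (the encoding and names are made up for illustration). Memoization caches only *completed* results: "a :- a." recurses before anything completes, so the cache is never consulted and the loop is unbroken; tabling's extra bookkeeping for in-progress calls is what changes termination. What memoization does buy is performance on shared subgoals, visible in the call count below.

```python
def prove_memo(goal, rules, memo, calls):
    """Propositional Horn-clause prover with a cache of completed results.
    calls is a one-element list used as a mutable call counter."""
    calls[0] += 1
    if goal in memo:               # hit: a previously *completed* result
        return memo[goal]
    result = any(all(prove_memo(b, rules, memo, calls) for b in body)
                 for head, body in rules if head == goal)
    memo[goal] = result
    return result

# A diamond: "base" (with a chain below it) is needed along two paths.
rules = [("top", ["l", "r"]), ("l", ["base"]), ("r", ["base"]),
         ("base", ["b1"]), ("b1", ["b2"]), ("b2", [])]
calls = [0]
print(prove_memo("top", rules, {}, calls))  # True
print(calls[0])   # 7 calls; without the cache, the base chain is walked twice (9)
```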
An interesting reference that may serve as a useful starting point to implement tabling is the paper Tabling as a Library with Delimited Control by Desouter et al.:
> SLDNF resolution really does not terminate in many cases, also if no variables at all are involved. The "a :- a." example which I mentioned is such a case [...]
I see! In a hurry, my mind substituted "A :- A.", but your example is indeed more interesting than that! Now I am curious as to how SLG resolution deals with this case. This is definitely something I will look into at some point! At the moment I am at a stage where I am happy that my retract/1 implementation is respecting the logical view. :)
> I'm looking forward to seeing more papers and slides from you about these topics, and to discussing them here, keep up the nice work!
I am the author of the paper. First of all, thank you all for the valuable feedback!
I (re-)discovered PROLOG only a few months ago, after having learned it superficially in the '90s. Then I wanted my own, hackable interpreter, and started to read books about PROLOG implementation. I found them to be too complex for my needs, so I finally sat down and asked myself: "If I were a PROLOG interpreter, what would I do?" Then those six slides materialized in my head.
In the meantime, I have written an interpreter that is mostly compatible with DEC-10 PROLOG, have read Clocksin, Boizumault, and half of O'Keefe (and skimmed through a few others), and I am quite enjoying the tour so far! PROLOG is a fascinating language, and there are so many things to discover!
Also, there's a lot of work on compilers in Prolog, starting with "Logic programming and compiler writing" by David Warren (it's the basis of the last chapter of "Art of Prolog".)
I have read "Logic programming and compiler writing" with great interest! I did not know the Hitchhiker's Guide, though. Looks interesting! Thanks for sharing!
If you go to the author's root page, you can see they've written many fascinating books on niche subjects that aren't so niche on HN. For example, implementation of a lisp system and array programming, scheme books, logic programming stuff... etc. I haven't bought any yet (sadly), but plan to in the future as they seem to be optimized for the self learner who wants to learn via discussion and numerous simple examples rather than chapters full of dense mathematical theory. The array programming one looks a lot better than the Dyalog book and some of the J books I own or have flipped through.
I am basically writing the books that I would have liked to read while studying CS. So, yes, they are books for self-learners, from a guy who prefers to learn from textbooks.
I also sometimes wish I had all the time in the world to read through each one of them. Sadly, as someone working full time with a kid, I have very little time for exploring these fascinating tidbits. I think one of the things I should really improve is the time management of my free time. A little less Reddit and a little more studying would be good for me :).
I was curious if the Klong book just covered an intro to array programming in Klong, or if it also covered how you implemented it in C or your high level C like language?
I'd like to write my own array language someday, and would likely use something like "D": low level enough to be fast and to release as a binary, while high level enough not to blow my mind.
> I was curious if the Klong book just covered an intro to array programming in Klong, or if it also covered how you implemented it in C or your high level C like language?
The Klong book is just an introduction to array programming. The Klong compiler is a rather simple recursive descent parser emitting ad-hoc abstract machine code, and the execution model is basically some glue between the functions implementing the verbs of the language. I did not think its implementation was particularly interesting, and therefore did not cover it in the book.
I hope you find the time for your studies and implementing your own array language!
Ah, regardless, it sounds useful and like a good way to get up to speed on array programming concepts.
One of the problems I've always had with niche languages (e.g. Lisp, APL, Prolog, Forth, Smalltalk, Haskell) is that they're so alien compared to my background with scripting languages, and thus it's harder to figure out how to do something idiomatically.
Btw, there is a neat book (black cover) on programming text adventures in Prolog. It was the only thing that made Prolog start making sense to me, in terms of how to think in Prolog the way I do in Python. You might enjoy it as well, but you may be past it by now.
I'll have to check out your Klong code at some point. It might be a nice start for me. I think I know how to pass things around, but I'm sure there are pitfalls to get past.
Forth might be a good subject for you to cover at some point further in the future too.
ASP does not have operational semantics in that sense, or an order of execution. It's a set of stable models that satisfy the clauses.
Prolog with the SLD-resolution strategy (there are others, but that's the one presented - even though it's not mentioned) does indeed induce an order on operations (which goal to prove first) and is therefore also useful for interactive programs. (Because of the order, you can schedule side effects.)
In short: while Prolog is declarative, it does have imperative parts. ASP is purely declarative.
This means there is a lot of research in efficiently evaluating/compiling ASP programs while the evaluation strategy of Prolog is fixed with SLD resolution and therefore (in theory) only subject to minor improvements.
Edit:
Also, ASP always terminates and is therefore not Turing complete. SLD resolution can lead to infinite derivation chains. The logic program "a :- a." does not terminate under SLD resolution, but does under both Datalog and ASP evaluation.
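The ASP view of "a :- a." can be sketched in Python (a simplified illustration, not a real ASP solver): for a positive program, the reduct is the program itself, so a candidate set of atoms is a stable model exactly when it equals the least model of the program, which a bottom-up fixpoint computes and which trivially terminates.

```python
def least_model(rules):
    """Bottom-up least-model computation for propositional Horn rules,
    each given as (head, [body atoms]). Always terminates."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def is_stable(candidate, rules):
    # For a positive (negation-free) program, the reduct is the program
    # itself, so stability reduces to equality with the least model.
    return least_model(rules) == candidate

rules = [("a", ["a"])]
print(is_stable(set(), rules))    # True:  {} is the answer set
print(is_stable({"a"}, rules))    # False: {a} is a classical model, not stable
```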
While the syntax of Answer Set Programming is similar to Prolog, inference in ASP is closer to SAT solving, and ASP solvers are typically extensions of SAT solvers.
1. I don't see how this would ever fit on six slides. I mean, I see the slides, but how useful/standalone are they, even if taken as the basis for a presentation of that paper?
2. The author conflates Prolog with SLD resolution. There are other resolution schemes in Prolog systems. SLD might be the one assumed by the ISO Prolog standard, but it is not the same thing as Prolog. XSB and SWI support tabling and then use a different scheme (SLG) and are still Prolog. (Edit: see Triska's comment for more on this. It seems we posted within seconds of each other.)