Controversial, but I found that just going through the docs, attempting to write my own flake, then throwing it at ChatGPT and asking it to review it and give feedback was enough to get started.
I never ask it for code, just to review code - keeps me sharp and makes the iteration cycles faster.
ChatGPT is generally pretty bad at writing Nix code IME, and it has a much greater tendency to hallucinate than it does when writing code in other languages.
But it's better at advice and review than writing code.
And if you want to ask it about Nixpkgs conventions or the behavior of some Nix codebase, it does much better if you dump the relevant source files in the chat for it to consume.
With Nix it's important to really understand what you're doing, so I think the approach you've described is the most appropriate way to use an LLM here. Use it to point you to the docs, ask it questions about some code and then read that code closely, ask it which docs to read, ask it to point you to example repos on GitHub. Used this way, it can actually be helpful.
If you have a longer time horizon, interacting with real community members is even better, and of course can also enrich the corpus for LLMs like ChatGPT. Discourse.nixos.org has a lot of smart and helpful people whose insight is more reliable than the advice of an LLM, so it's worth visiting as well.
Geometric Algebra supporters keep advertising that rotors are great since they work in any dimension, which makes me wonder: would an arbitrary n-dimensional SVD-like decomposition benefit from using rotors instead of rotation matrices, and if so, how? And if not, why not?
Yes, but from the canonical form of rotation matrices [1] I would expect such matrices to be represented as a sum of bi-vectors/rotors, which should take the same amount of data?
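For reference, here is a sketch of the canonical form being referred to, and its rotor counterpart (sign and half-angle conventions for rotors vary):

```latex
% Cartan normal form: any R in SO(n) is orthogonally similar to a
% block-diagonal matrix of 2x2 plane rotations (plus an identity block):
R = Q\,\operatorname{diag}\!\big(R(\theta_1),\dots,R(\theta_k),\,I\big)\,Q^{\top},
\qquad
R(\theta_i) = \begin{pmatrix}\cos\theta_i & -\sin\theta_i \\ \sin\theta_i & \cos\theta_i\end{pmatrix}.

% Equivalently, R is the exponential of a sum of k commuting plane
% generators, i.e. (up to the rotor half-angle convention) of a sum of
% commuting simple bivectors:
R = \exp\!\Big(\sum_{i=1}^{k} \theta_i\, B_i\Big), \qquad B_i B_j = B_j B_i .
```

Either way the representation carries k angles plus k invariant planes, which is the "same amount of data" point.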
> and also faster than it takes me to verify the answer given by the machine.
I always thought there was a kind of NP-flavor to the problems for which LLM-like AI is helpful in practice, in the sense that solving the problem may be hard but checking the solution must be fast.
Unless the domain can accommodate errors/hallucinations, checking the solution (by a human) should be exponentially faster than finding it (by some AI); otherwise there's little practical gain.
Not exactly the same: `x` is given a polymorphic type (in the System F sense) in Haskell (restricted to syntactic values in ML), whereas the unannotated let-over-lambda will give `x` a monomorphic type.
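A small OCaml sketch of the difference (the Haskell behaviour is analogous, minus the value restriction):

```ocaml
(* let-bound: [id] is generalized to the polymorphic type 'a -> 'a,
   so it can be used at both int and bool below. *)
let () =
  let id = fun y -> y in
  Printf.printf "%d %b\n" (id 1) (id true)

(* lambda-bound: [id] stays monomorphic, so the corresponding
   let-over-lambda is rejected by the type checker:

     (fun id -> (id 1, id true)) (fun y -> y)

   The first use fixes id's type to int -> int, and [id true] then
   fails to type-check. *)
```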
There is a kind of "do notation" in OCaml with binding operators [1] (let*) for monads and (let+) for applicatives that is actually quite pleasant in practice.
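A minimal sketch with the option type (assuming OCaml >= 4.08, where binding operators and Option.bind / Option.map are available):

```ocaml
(* (let*) is monadic bind, (let+) is the applicative map. *)
let ( let* ) = Option.bind
let ( let+ ) o f = Option.map f o

let safe_div x y = if y = 0 then None else Some (x / y)

(* Reads like direct style, but short-circuits to None on any failure. *)
let compute a b c =
  let* q = safe_div a b in
  let+ r = safe_div q c in
  r + 1

let () =
  match compute 100 5 2 with
  | Some n -> Printf.printf "%d\n" n         (* prints 11 *)
  | None -> print_endline "division by zero"
```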
Besides, if you still want to skip learning, there are escape hatches like `Rc<RefCell<T>>`, but these hint pretty strongly (e.g. clones everywhere) that something might be wrong somewhere.
> Of course you have to internalise the rules of a borrow checker
This is generally a good thing: the more you internalise the logic of borrow checking, the earlier you start thinking about "who owns what" instead of deferring the choice until later, which often ends up in a tangled mess of "incidental data structures", as they are sometimes called in the C++ world [1].
Of course, in C++ this means you have to internalise this discipline the hard way, i.e. without a borrow checker helping you.
I think this is because the lambda's call operator is const by default and `mutable` removes that qualifier (like a reverse const qualifier), so by-value captures are effectively const during the call unless the lambda is marked `mutable`. References themselves can never be reseated, and the constness of the call operator does not propagate through them, so the referenced object may still be modified through a by-reference capture (depending on its own constness) even though the lambda is not `mutable`.
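A minimal sketch of that behaviour:

```cpp
#include <iostream>

int main() {
    int x = 0;

    // By-value capture: the closure stores a copy of x, and operator()
    // is const by default, so ++x only compiles because the lambda is
    // marked `mutable`. It mutates the copy, not x itself.
    auto by_value = [x]() mutable { return ++x; };

    // By-reference capture: the constness of operator() does not
    // propagate through the reference, so the referenced object can be
    // modified even though this lambda is not `mutable`.
    auto by_ref = [&x] { return ++x; };

    std::cout << by_value() << '\n';  // 1 (increments the captured copy)
    std::cout << by_value() << '\n';  // 2 (the copy keeps its state)
    std::cout << by_ref() << '\n';    // 1 (increments x itself)
    std::cout << x << '\n';           // 1 (x untouched by the by-value lambda)
}
```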
Is there a way to force capture by const-reference by the way?
> >Distances are canonically defined in the space of rotations
> I am sorry, but this is simply not true.
It is true: there is a canonical choice, given by the bi-invariant Riemannian metric on compact Lie groups such as the rotation group (in this case, the shortest angle between the two rotations).
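Concretely (a sketch; the normalization of the norm varies):

```latex
% Geodesic distance induced by the bi-invariant metric on SO(n):
d(R_1, R_2) = \big\lVert \log\!\big(R_1^{\top} R_2\big) \big\rVert_F ,
% which on SO(3) is, up to a constant factor, the rotation angle of the
% relative rotation:
d(R_1, R_2) \propto \theta\big(R_1^{\top} R_2\big) \in [0, \pi] .
```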
Whether or not you want this metric in practice is another problem, of course.
> The article we are discussing does not provide any means of "averaging" any more than two rotations,
The Karcher/Fréchet mean described in the original article does average more than two rotations.
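For reference, a sketch of the definition (not of the article's particular algorithm), with d the geodesic/angle distance on SO(3):

```latex
% Karcher / Fréchet mean of N rotations R_1, ..., R_N:
\bar{R} = \operatorname*{arg\,min}_{R \in SO(3)} \; \sum_{i=1}^{N} d\big(R, R_i\big)^{2} .
```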