I took a look at a project I maintain[0], and wow. It's so wrong in every section I saw. The generated diagrams make no sense. The text sections take implementation details that don't matter and present them to the user like they need to know them. It's also outdated.
I hope actual users never see this. I dread thinking about having to go around to various LLM generated sites to correct documentation I never approved of to stop confusing users that are tricked into reading it.
I just tried it on several of my repos and I was rather impressed.
This is another one of those bizarre situations that keeps happening in AI coding related matters where people can look at the same thing and reach diametrically opposed conclusions. It's very peculiar and I've never experienced anything like it in my career until recently.
But you’re not looking at the same thing — you’re looking at two completely different sets of output.
Perhaps their project uses a more obscure language, has a more complex architecture, or resembles another project in a way that trips up the interpretation. You can have excellent results without it being perfect for everything. Nothing is perfect, and it's important for the people making these things to know how it fails, right?
In my career I’ve never seen such aggressive dismissal of people’s negative experiences without even knowing if their use case is significantly different.
Which repos worked well? I've had the same experience as OP: unhelpful diagrams and bad information hierarchy. But I'm curious to see examples of where it's produced good output!
> people can look at the same thing and reach diametrically opposed conclusions. It's very peculiar and I've never experienced anything like it in my career until recently
React vs other frameworks (or no framework). Object oriented vs functional. There's loads of examples of this that predate AI.
I don't think it's quite the same. The cases you mention are more like two alternative but roughly functionally equivalent things. People still argue and use both, but the argument is different. Even if people don't explicitly acknowledge it, at some level they understand it's a difference in taste.
This feels to me more like the horses vs cars thing, computers vs... something (no computers?), crypto vs "dollar-pegged" money, etc. It's deeper. I'm not saying the AI people are the "car" people, just that...there will be one opinion that will exist in 5-20 years, and the other will be gone. Which one... we'll see.
> People still argue and use both, but the argument is different
React vs no framework is at least in the same ballpark as AI vs no AI. Some people are determined to prove to the world that React/AI/functional programming solves everything. Some people are determined to prove the opposite. Most people just quietly use them without feeling like they need to prove anything.
I have a fairly large code base that has been developed over a decade that deepwiki has indexed. The results are mixed but how they are mixed gives me some insight into deepwiki's usefulness.
The code base has a lot of documentation in the form of many individual text files. Each describes some isolated aspect of the code in dense, info-rich and not entirely easily consumable (by humans) detail. As numerous as these docs are, the code has many more aspects that lack explicit documentation. And there is a general lack of high-level documentation that ties each isolated doc into some cohesive whole.
I formed a few conclusions about the deepwiki-generated content: First, it is really good where it regurgitates information from the code docs, while being rather bad, or simply missing, for aspects not covered by those docs. Second, deepwiki is so-so at providing a high layer of documentation that sort of ties things together. Third, its sense of the importance of various aspects is heavily biased by their coverage in the code docs.
The lessons I take from this are: deepwiki does better ingesting narrative than code. I can spend less effort on polishing individual documentation (not worrying about how easy it is for humans to absorb). I should instead spend that effort filling in gaps, both in the details and in higher-level layers of narrative that unify the detailed documentation. I don't need to spend effort on making that unification explicit via sectioning, linking, ordering, etc., as one might expect for a "manual" with a table of contents.
In short, I can interpret deepwiki's failings as identifying gaps that need filling by humans while leaning on deepwiki (or similar) to provide polish and some gap putty.
If you document the why rather than the how, you often end up tying high-level concepts together.
E.g. if you describe how the user service works, you won't necessarily capture where it is used.
If you document why the user service exists, you will often mention who or what needs it to exist, the thing that gives it a purpose. Do this throughout and everything ends up tied together at a higher level.
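A hypothetical sketch of the difference, as a Python docstring (the service and its consumers are made up):

```python
class UserService:
    """Loads and caches user records (the how).

    Why it exists: the billing and notification pipelines both need a
    consistent view of a user mid-transaction, so they read through
    this service instead of querying the users table directly. The
    "why" names the consumers, which is what ties things together.
    """
```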
> The text sections take implementation details that don't matter and present them to the user like they need to know them. It's also outdated.
The point of the wiki is to help people learn the codebase so they can possibly contribute to the project, not for end users. It absolutely should explain implementation details. I do agree that it goes overboard with the diagrams. I’m curious, I’ve seen other moderately sized repo owners rave about how DeepWiki did very well in explaining implementation details. What specifically was it getting wrong about your code in your case? Is it just that it’s outdated?
I dunno, it seems to be real excited about a VS Code extension that doesn't exist and isn't mentioned in the actual documentation. There's just too many factual errors to list.
>I dunno, it seems to be real excited about a VS Code extension that doesn't exist and isn't mentioned in the actual documentation. There's just too many factual errors to list.
There is a folder for a VS Code extension here[0]. It seems to have a README with installation instructions. There is also an extension.ts file, which seems to me to be at least the initial prototype for the extension. Did you forget that you started implementing this?
In that folder is a CHANGELOG.md[0] that indicates this is unreleased. I'd say that including installation instructions for an unreleased version of the extension is exactly the issue being flagged.
You are going to want to reread the file you are quoting, buddy. That changelog indicates that the extension has been released. The Unreleased section seems to list features that are not yet included in the released version of the VS Code extension, and the future plans are features that have not been developed yet.
Here the maintainer says it doesn't exist. There's basically no way another interpretation is "more correct". Files can be present yet not intended for use: deprecated, internal, WIP, etc. This is why we need maintainers.
Maintainers are not gods, and don't get to rewrite plainly true facts. The changelog actually says it is an "Initial release of Codebook VS Code extension".
It's funny, I accidentally put a link to the commit instead of the current repo file because I was investigating whether he committed it himself or recently took over the project without realizing the previous owner had started one. But he is the one who actually committed the code. I guess LLMs are so good now that developers are the ones hallucinating about code they themselves wrote.
Is it possible that a random person who discovered your repo from Google search would make the same mistake the LLM did and assume it works and not realize it was an unfinished experiment?
Yes, and so the value of that person's opinions on the repo is low. Far lower than real documentation written by someone who knows more, who would not have made that mistake.
The value proposition here is that these LLM docs would be useful; however, in this case they were not.
>Far lower than real documentation written by someone who knows more, that would not have made that mistake.
But his own documentation did say that there was a VS Code extension, with installation instructions, a README, changelog, etc. From what he said, it doesn't even compile or remotely work. It would be extremely aggravating to attempt to build the project with the maintainer's own documentation, spend an hour trying to figure out what's wrong, and then contact the maintainer only for him to say, "oh yeah, that documentation is not correct, that doesn't even compile, even though I said it did 2 months ago lol." It is extremely ironic that he is so gung-ho about DeepWiki getting this wrong.
Yes, this is my point. It seems like the creator was a little bit careless to create such a full-fledged README with so much polish but entirely neglect to mention that the whole thing is broken and unfinished.
That seems about as annoying as a random wiki mis-explaining your system.
That being said, I am still biased towards empathizing with the library author since contributing to open source should be seen as being a great service already in and of itself, and I'd default to avoiding casting blame at an author for not doing things "perfectly" or whatever when they are already doing volunteer work/sharing code they could just keep private.
The WIP code was committed with the expectation that very few people would see it because it was not linked anywhere in the main readme. It's a calculated risk, so that the code wouldn't get out of date with main. The risk changed when their LLM (wrongly) decided to elevate it to users before it was ready.
It's clear DeepWiki is just a sales funnel for Devin, so all of this is being done in bad faith anyway. I don't expect them to care much.
>That being said, I am still biased towards empathizing with the library author since contributing to open source should be seen as being a great service already in and of itself, and I'd default to avoiding casting blame at an author for not doing things "perfectly" or whatever when they are already doing volunteer work/sharing code they could just keep private
This is true, and my point was more about his dismissive view of DeepWiki than a criticism of the project itself or of the author as a programmer. LLMs hallucinate all the time, but there is usually a method to how they do so. In particular, just asserting that a repo had a VS Code extension with nothing pointing to one would not be typical at all for a tool like DeepWiki.
- Users are confused by autogenerated docs and don’t even want to try using a project because of it
- Real curated project documentation is no longer corrected by users feedback (because they never reach it)
- LLMs are trained on wrong autogenerated documentation: a downward spiral for hallucinations! (Maybe this one could then force users to go look for the official docs? But not sure at this point…)
> LLMs are trained on wrong autogenerated documentation: a downward spiral for hallucinations! (Maybe this one could then force users to go look for the official docs? But not sure at this point…)
I wonder what incentives might exist for adhering to a meta-tag like this. For example, imagine I send you my digital resume and it has an AI-generated footer tag on display. Maybe a bad example; I like the idea of this in general, but my mind wanders to the fact that large entities completely ignored the wishes of robots.txt when collecting the internet's text for their training corpora.
Large entities aside, I would use this to mark my own generated content. It would be even more helpful if you could get the LLM to recognise it, which would allow you to prevent ouroboros situations.
Also, no one is reading your resume anymore and big corps cannot be trusted with any rule as half of them think the next-word-machine is going to create God.
I went to the lodash docs and asked about how I'd use the 'pipeline' operator (which doesn't exist) and it correctly pointed out that pipeline isn't a thing, and suggested chain() for normal code and flow() for lodash fp instead. That's pretty much spot on. If I was guessing I'd suggest that the base model has a lot more lodash code examples in the training data, which probably makes a big difference to the quality of the output.
I guess I'm trying to emphasize the distinction between information in the repo (code) vs. information elsewhere (discussions) that the model looks at.
Not talking about this tool, but in general, incorrect LLM-generated documentation can have some value. A developer knows they should write some docs, but they're staring at a blank screen, not sure what to write, so they don't. Then the developer runs an LLM, gets a screenful of LLM-generated docs, notices it is full of mistakes, starts correcting them, and suddenly: a screenful of half-decent docs.
For this to actually work, you need to keep the quantity of generated docs a trickle rather than a flood: too many and the developer's eyes glaze over and they miss stuff or just can't be bothered. But a small trickle of errors to correct could actually be a decent motivator to build up better documentation over time.
There isn't a single AI out there that won't lie to your face, reinterpret your prompt, or just decide to ignore your prompt.
When they try to write a doc based off code, there is nothing you can do to prevent them from making up a load of nonsense and pretending it is thoroughly validated.
Do we have any reason to believe alignment will be solved any time soon?
Why should this be an issue? We are producing more and more correct training data, and at some point the quality will be sufficient. To me it's not clear what speaks against this.
We don't expect 100% reliability from humans. Humans will slack off, steal, defraud, harass each other, sell your source code to a foreign intelligence service, turn your business behind your back into a front for international drug cartels. Some of that is very low probability, but never zero probability. So is it really a problem if we can't reduce the probability to literally zero for AIs either?
You want the AI aligned with writing accurate documentation, not aligned with a goal that's near but wrong, e.g. writing accurate-sounding documentation.
I tried it on a big OCaml project (https://deepwiki.com/libguestfs/virt-v2v) and it seems correct albeit very superficial. It helps that the project is extensively documented and the code well commented, because my feeling is that it's digesting those code comments along with the documentation to produce the diagrams. It seems decent as a starting point to understanding the shape of the project if I'd never seen it before. This is the sort of thing you could do yourself but it might take an hour or more, so having it done for you is a productivity gain.
Likewise, I tested this with a project we're using at work (https://deepwiki.com/openstack/kayobe-config) and at first it seems rather impressive until you realize the diagrams don't actually give any useful understanding of the system. Then, asking it questions, it gave useful seeming answers but which I knew were wholly incorrect. Worse than useless: disorienting and time-wasting.
I have bad news for you, this website has been appearing near the top of the search results for some time now. I consciously avoid clicking on it every time.
> The text sections take implementation details that don't matter and present them to the user like they need to know them.
Yeah this seems to be a recurring issue on each of the repos I've tried. Some occasionally useful tables or diagrams buried in pages of distracting irrelevant slop.
Does anybody find it funny that sci-fi movies have to heavily distort "robot voices" to make them sound "convincingly robotic"? A robotic, explicitly non-natural voice would be perfectly acceptable, and even desirable, in many situations. I don't expect a smart toaster to talk like a BBC host; it'd be enough if the speech is easy to recognize.
> A robotic, explicitly non-natural voice would be perfectly acceptable, and even desirable, in many situations[...]it'd be enough if the speech is easy to recognize.
We've had formant synths for several decades, and they're perfectly understandable and require a tiny amount of computing power, but people tend not to want to listen to them.
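For instance, eSpeak NG is a formant synthesizer that's been around (as eSpeak) since the 2000s; a minimal sketch, assuming the binary is installed:

```python
import subprocess

# eSpeak NG uses formant synthesis: tiny, fast, perfectly intelligible,
# and unmistakably robotic. The binary may be named `espeak` on some
# systems. -s sets the speaking rate in words per minute.
subprocess.run(["espeak-ng", "-s", "150", "Your toast is ready."])
```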
The YouTube video [1] was published in 2019. The Blog spam posts range from Nov 2022 to July 2023.
Other than the video, the only relevant content is on the about page [2]. It says the voice is a collaboration between 5 different entities, including advocacy groups, marketing firms and a music producer.
The video is the only example of the voice in use. There is no API, weights, SDK, etc.
I suspect this was a one-off marketing stunt sponsored by Copenhagen Pride before the pandemic. The initial reaction was strong enough that a couple of years later they were still getting a small but steady flow of traffic. One of the involved marketing firms decided to monetize the asset and defaced it with blog spam.
Huh. Sounds perfectly intelligible and definitively artificial. Feels weakly feminine to me, but only because I was primed to think about gender from the branding.
It’s a good choice for a robot voice. It’s easier to understand than the formant synths or deliberately distorted human voices. The genderless aspect is alien enough to avoid the uncanny valley. You intuitively know you’re dealing with something a little different.
In the Culture novels, Iain Banks imagines that we would become uncomfortable with the uncanny realism of transmitted voices / holograms, and intentionally include some level of distortion to indicate you're speaking to an image
Depends on the movie. Ash and Bishop in the Alien franchise sound human until there's a dramatic reason to sound more 'robotic'.
I agree with your wider point. I use Google TTS with Moon+Reader all the time (I tried audio books read by real humans but I prefer the consistency of TTS)
Slightly different there because it's important in both cases that Ripley (and we) can't tell they're androids until it's explicitly uncovered. The whole point is that they're not presented as artificial. Same in Blade Runner: "more human than human". You don't have a film without the ambiguity there.
I remember that the novelization of The Fifth Element describes how cops are taught to speak as robotically as possible when using speakers, for some reason. Always found it weird that someone would _want_ that.
I got an error when I tried the demo with 6 sentences, but it worked great when I reduced the text to 3 sentences. Is the length limit due to the model or just a limitation for the demo?
"This first Book proposes, first in brief, the whole Subject, Mans disobedience, and the loss thereupon of Paradise wherein he was plac't: Then touches the prime cause of his fall, the Serpent, or rather Satan in the Serpent; who revolting from God, and drawing to his side many Legions of Angels, was by the command of God driven out of Heaven with all his Crew into the great Deep."
It takes a while until it starts generating sound on my i7 cores but it kind of works.
This also works:
"blah. bleh. blih. bloh. blyh. bluh."
So I don't think it's a limit on punctuation. Voice quality is quite bad though, not as far from the old school C64 SAM (https://discordier.github.io/sam/) of the eighties as I expected.
I tried to replicate their demo text but it doesn't sound as good for some reason.
If anyone else wants to try:
> Kitten TTS is an open-source series of tiny and expressive text-to-speech models for on-device applications. Our smallest model is less than 25 megabytes.
> Error generating speech: failed to call OrtRun(). ERROR_CODE: 2, ERROR_MESSAGE: Non-zero status code returned while running Expand node. Name:'/bert/Expand' Status Message: invalid expand shape
Thanks, I was looking for that. While the Reddit demo sounds OK, albeit at a level we reached a couple of years ago, all the TTS samples I tried were barely understandable at all.
On PC it's Python dependency hell, but someone managed to package it in self-contained JS code that works offline once it has loaded the model? How is that done?
ONNXRuntime makes it fairly easy, you just need to provide a path to the ONNX file, give it inputs in the correct format, and use the outputs. The ONNXRuntime library handles the rest. You can see this in the main.js file: https://github.com/clowerweb/kitten-tts-web-demo/blob/main/m...
Plus, Python software is dependency hell in general, while webpages have to be self-contained by their nature (thank god we no longer have Silverlight and Java applets...)
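For comparison, the same three steps in ONNXRuntime's Python binding (the model path, input name, and shape here are hypothetical; the pattern mirrors what the JS demo does):

```python
import numpy as np
import onnxruntime as ort

# 1. Point the runtime at the ONNX file.
session = ort.InferenceSession("kitten_tts.onnx")  # hypothetical path

# 2. Provide inputs in the format the model expects.
feeds = {"input_ids": np.array([[12, 34, 56]], dtype=np.int64)}  # hypothetical name/shape

# 3. Run and use the outputs; the runtime handles the rest.
outputs = session.run(None, feeds)
print(outputs[0].shape)
```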
yeah, this is just a preview model from an early checkpoint. the full model release will be next week which includes a 15M model and an 80M model, both of which will have much higher quality than this preview.
Not open source. "You will need internet connectivity to validate your AccessKey with Picovoice license servers ... If you wish to increase your limits, you can purchase a subscription plan." https://github.com/Picovoice/orca#accesskey
Going online is a dealbreaker, but if you really need it you could use Ghidra to fix that. I had tried to find a conversion of their model to ONNX (making their proprietary pipeline useless) but failed.
Hopefully open source will render them irrelevant in the future.
Does an APK exist for replacing Android's text-to-speech engine? I tried sherpa-onnx but it seemed too slow for real-time usage, especially for audiobooks when sped up.
I can't test this out right now. Is this just a demo, or is it actually an APK for replacing the engine? Those are two different things; the latter can be used any time you want something on a page read aloud, for example. This is the sherpa-onnx one I'm talking about.
I get the feeling we're going to end up in a place where we don't make docs any more. A project will have a trusted agent that can see the actual code, maybe just the API surface, and that agent acts like a customer service rep to a user's agent. It will generate docs on the fly, with specific examples for the task needed. Maybe the agents will find bugs together and update the code too.
Not exactly where I'd like to see us go, but at least we'll never get outdated information.
There are lots of things that neither the code nor the docs cover, so I suspect that's not quite possible, yet.
For example, if you're deploying a Postgres proxy, it will have a TCP timeout setting that you can tweak. Neither the docs nor the code will tell you what the value should be set to though.
Your engineers might know, because they have seen your internal network fail dozens of times and have a good intuition about it.
Software complexity has a wide range. If you're thinking of simple things like Sendgrid, Twilio or Stripe APIs, sure, an agent can easily write some boilerplate. But I think in certain sectors, we would need to attach some more inputs to the model that we currently don't have to get it to a good spot.
The Rust ecosystem needs more high-level frameworks like this. However, I've been shipping Django since 0.96, and I don't think Cot really addresses the main issues Django currently has. Performance isn't in the top 5.
Django's biggest issue is its aging templating system. The `block`, `extends` and `include` style of composition is so limited compared to the expressiveness of JSX. There are many libraries that try to solve Django's lack of composition/components, but they're all band-aids. Today, making a relatively complex page with reusable components is fragile and verbose.
The second-biggest issue is lack of front end integration. Even just a blessed way of generating an OpenAPI file from models would go a long way. Django Ninja is a great peek at what that could look like. However, new JS frameworks go so much further.
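For a sense of what a blessed version could look like, here's the rough shape of a Django Ninja endpoint (names hypothetical): the type annotations drive validation, and Ninja derives the OpenAPI schema and interactive docs from them automatically.

```python
from ninja import NinjaAPI, Schema

api = NinjaAPI()  # exposes openapi.json and interactive docs wherever it's mounted

class UserOut(Schema):
    id: int
    username: str

@api.get("/users/{user_id}", response=UserOut)
def get_user(request, user_id: int):
    # Hypothetical lookup; the annotations above shape the OpenAPI spec.
    return {"id": user_id, "username": "alice"}
```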
The other big issue Django has _is_ solved by Cot (or Rust), which is cool (but not highlighted): complicated deployments. Shipping a bunch of Python files is painful. Also, Python's threading model means you really have to have Gunicorn (and usually Nginx) in front. Cot could have all that compiled into one binary.
About performance: I agree, and I'm not even trying to make performance a priority in Cot. I mean, of course, it's nice to have an actual compiled language, but I think a bigger perk of using Rust is having *a lot* of stuff checked at compile time rather than at runtime. This is something I'm trying to make the main perk of Cot, and it is reflected in multiple parts of Cot (templates checked at compile time, an ORM that is fully aware of the database schema at compile time, among many others).
About JSX: I think that's the one I'll need to explore further. In my defense, the templating system Cot currently uses (Rinja) is much more expressive and pleasant to use than Django's, but admittedly, the core concepts are very similar. This one might be difficult to address because the ecosystem of templating engines in Rust is pretty lacking, but I'll see what I can do to help.
About front-end integration: that's something that will be (at least partially) addressed no later than v0.2. Django REST Framework is a pain (mostly because it's never been integrated into Django); Django Ninja is something I haven't personally used very much, so it's good to have it mentioned as a source of inspiration. Generating OpenAPI docs is even mentioned in the article ("Request Handler API is far from being ergonomic and there's no automatic OpenAPI docs generation"), so yeah, I'm aware of this.
Deployment is indeed something that's super nice, and part of this is that newer Rust versions generally don't break compatibility with existing code, unlike Python. I agree this should be highlighted, thanks for the suggestion!
You could potentially address both templating and front-end integration by adopting Dioxus which does full stack rendering with React-like components (but in Rust). A "batteries included" full-stack framework could be quite exciting I think.
There is another solution in this specific case. If all they wanted was to start returning the test results before all the tests are done, a streaming HTTP response could be used.
In Bottle, returning a generator or iterator sends the response in chunks instead of all at once. The effect would be that the test results load in one by one, giving the user feedback. No JavaScript needed.
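A minimal sketch of that idea in Bottle (route and test names are made up):

```python
import time
from bottle import Bottle, response

app = Bottle()

@app.route("/run-tests")
def run_tests():
    response.content_type = "text/plain"

    def stream():
        # Hypothetical test names: yield each result as it finishes,
        # and Bottle sends the response out in chunks.
        for name in ("test_login", "test_signup", "test_checkout"):
            time.sleep(1)  # stand-in for actually running the test
            yield f"{name}: ok\n"

    return stream()

# app.run(host="localhost", port=8080)
```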
Dart is crazy because it runs on every platform, compiles to native, has real parallelism via isolates, native async, and native type safety.
There's not really a backend that takes advantage of all that. In theory, one server binary could handle REST, web sockets, background workers, and have generated type safe client packages for every platform. Dart also has a great Rust ffi story. It would be great to see that leveraged.
ServerPod is a great start, but it's really Flutter-focused. The web APIs feel second-class.
Additionally, database management isn't a solved problem yet. ServerPod uses yaml to define models, and the other main option is just a Prisma wrapper. Dart needs something like Drizzle.
You could state the same thing as your first sentence for e.g. Rust or many other languages. I personally only see Dart being useful if you already have a Flutter app and don't want to learn another language, or want shared types easily, similar to full-stack web devs using TypeScript for their React and Node apps.
I personally use Rust backends and Flutter frontends for my apps. I'd use pure Rust for the entire thing, but Rust frontends are nowhere near the capabilities and maturity of Flutter, so at least I use FFI bridges like flutter_rust_bridge and rinf, as you mention.
I actually can't think of another language that has all of that built in. Rust doesn't; it needs a runtime for async. JavaScript doesn't; it needs TypeScript, and it doesn't compile to native.
That's true about Rust, but that's a feature, not a bug: you can swap out the async runtime if needed, and if you do add one, it is still as efficient as or more efficient than Dart.
Kotlin Native is a toy for JetBrains to eat some of that Apple pie and capture teams that want to share logic between their mobile codebases.
Kotlin Native has no std, they cut down platform support with K2, performance and compilation speed are atrocious and there are no plans to improve any of that short term.
Kotlin without the JVM can't hold a candle to Dart. Which is a real shame for Dart, because Dart has improved dramatically over the last couple of years while Kotlin has not introduced anything major in the 5 years since the release of coroutines.
Their K2 compiler, which promised major compilation speed improvements, was mostly a flop, and we have yet to see if they'll do anything good with it. Context receivers are not even close, pattern matching is not even on the roadmap, and they're refusing to consider union types. Kotlin lives on borrowed time.
1. runs on every platform (KNative runs natively on Linux, Mac, Windows, Android, iOS. It can also run under the JVM non-natively, and anywhere Javascript runs non-natively. The native code can build for a variety of architectures including ARM and x86)
2. compiles to native (As above, compiles to native on Linux/Mac/Windows/Android/iOS)
3. has real parallelism via isolates (Kotlin can spawn and interact with full processes, OS threads, and/or green threads in any admixture)
4. native async (Kotlin has native async/await support via coroutines, which work under KNative)
5. native type safety (Kotlin has a strong static type system which is available for native code as well and encompasses native types interacting with Kotlin code in either direction)
I don't think anything you said pertains to the listed five features. Especially complaining about compile speed is a strange thing to be doing in the context of this conversation.
On the topic of databases, I think https://drift.simonbinder.eu/ might interest you. I've been using it in a Flutter app with SQLite, but my understanding is that you could use it on the server too. I recall them having support for at least SQLite and Postgres.
In Rust, there's a controversial practice of putting unit tests in the same file as the actual code. I was put off by it at first, but I'm finding LLM autocomplete is much more effective when it can see the tests.
If the LLM can't complete a task, you add a test that shows it how to do it. This is multi-shot in-context learning and programming by example.
As for real TDD, you start with the tests and code until they pass. I haven't used an LLM to do this in Rust yet, but in Python, due to its dynamic nature, it is much simpler.
You can write the tests, then have the LLM sketch the code out enough so that they pass, or at least exist enough to pass a linter. Dev tools are going to feel like magic 18 months from now.
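In Python, doctests give you a lightweight version of the same pattern: the examples sit in the same file, right next to the code, where both humans and the LLM will see them. A made-up sketch:

```python
import re

def slugify(title: str) -> str:
    """Convert a title into a URL slug.

    The examples below run as tests and double as in-context
    demonstrations for an LLM reading this file.

    >>> slugify("Hello, World!")
    'hello-world'
    >>> slugify("  Rust & Python ")
    'rust-python'
    """
    return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```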
The benefit of this approach is that you can directly test any function in the same scope without altering its visibility: it implicitly encourages you to test all functions (and to design functions in a way that they can be tested, since you are writing tests as you write code), not just those that are part of the public API contract.
Plus you can update tests, code, and comments in one go, with visibility into them at all times.
I agree with you on Django Ninja, so refreshingly simple compared to DRF. I think Django core needs to adopt something like it.
However, Vite is pretty complicated. I prefer just esbuild if I don't need all the extra features of Vite, which is usually true with Django. I wrote a post[0] with an example repo[1] if anyone wants to see how everything wires up.
With Solidjs, the minimum JS payload is around 9kb, and you get access to the whole JS ecosystem if you want it.
> I agree with you on Django Ninja, so refreshingly simple compared to DRF. I think Django core needs to adopt something like it.
I was going to ask about this with respect to DRF, but you answered it. I am re-learning Django after having been away from it and Python for ~4 years now, and my previous experience was with DRF in a somewhat toxic group so I had less than ideal feelings about it. I know PTSD is a real thing and I don't mean to sound glib about it, but I think I actually had the beginnings of it from that experience.
This is great, thank you for sharing! The QR code generator alone sold me on getting it. So many online generators demand I make an account for some reason.
It would be amazing if this were extendable with plugins though. I have a ton of custom terminal scripts for my workflows, but some of them would just be better with a simple UI. Global hotkeys that take me right to the tool would be awesome too.
Edit: it looks like global hotkeys can be done with the URL Scheme feature and Raycast. Nice.
Do you need fancy-looking ones or just barebones QR codes? Because you can get the latter from the qrcode Python package and simply run "qr news.ycombinator.com > hn.png" in your terminal.
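The Python API is about as short, if a script fits better than the shell (assuming `pip install qrcode[pil]`):

```python
import qrcode

# qrcode.make returns a PIL image; save it wherever you like.
img = qrcode.make("https://news.ycombinator.com")
img.save("hn.png")
```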
[0]: https://deepwiki.com/blopker/codebook