The guy behind Zuban should've put his project out in the open way earlier. I'd love to see both projects succeed, but in reality they should become one.
Zuban maybe doesn't succeed in terms of the number of users, but it's nearly finished: it supports the full Python type system (I'm in the process of completing the conformance tests), has support for Django, and its LSP support is also pretty much complete. So in a technical sense it has already succeeded.
It might not be used as much, but to be honest I think that's fine. I'm not a big VC-funded company and just hope to be able to serve the users it has. There's space for multiple tools in this area and it's probably good to have multiple type checkers in the Python world to avoid the typical VC rug pull.
Zuban continues to have "not great" diagnostics like the rest of the Python type checkers, whereas ty has Rust-inspired diagnostics that are extremely helpful. It's a shame to hear that the current state is considered "nearly finished".
Have you tried `--pretty`? That is more of a Rust style. Most type checkers report the short version but have longer versions of the issues. IMO that's a good choice, but opinions might differ.
Yes, GPT-5.2 still has adaptive reasoning - we just didn't call it out by name this time. Like 5.1 and codex-max, it should do a better job at answering quickly on easy queries and taking its time on harder queries.
Why have "light" or "low" thinking then? I've mentioned this before in other places, but there should only be "none," "standard," "extended," and maybe "heavy."
Extended and heavy are about raising the floor (~25% and ~45%, or some other ratio, respectively), not determining the ceiling.
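The floor-vs-ceiling distinction can be sketched in a few lines. This is a hypothetical illustration, not any real API: the level names and ratios just mirror the ones mentioned above, and `model_estimate` stands in for however much reasoning the model would choose on its own.

```python
# Hypothetical "floor, not ceiling" reasoning budgets. Levels and ratios
# are illustrative, taken from the comment above; nothing here is a real API.
FLOORS = {"none": 0.0, "standard": 0.0, "extended": 0.25, "heavy": 0.45}

def reasoning_tokens(level: str, budget: int, model_estimate: int) -> int:
    """Spend at least the floor for this level, but let the model go higher
    (up to the overall budget) on queries it judges to be hard."""
    floor = int(FLOORS[level] * budget)
    return max(floor, min(model_estimate, budget))
```

So an easy query under "extended" still burns the ~25% floor, while a hard query is free to use far more; the level never caps it.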
I really use the hell out of it. Yeah I can't play solitaire like an iWatch, but the battery lasts 7 days in the backcountry, the flashlight is unbelievably handy while hiking/camping/boondocking, and it helps me be healthy with all of the data. Being able to trigger my inReach is also a nice touch. It's definitely a tool rather than a fashion piece.
Not really. OrioleDB solves the vacuum problem with the introduction of the undo log. Neon gives you scale-out storage, which is in a way orthogonal to OrioleDB. With some work you can run OrioleDB AND Neon storage and get the benefits of both.
I'm not much of an ML engineer but I can point you to the original chain of thought paper [0] and Anthropic's docs on how to enable their official thinking scratchpad [1].
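For anyone who just wants the gist of that paper: chain-of-thought prompting is mostly prompt construction. Here's a minimal sketch of the few-shot version; the worked example and wording are my own illustration, not copied from any particular doc.

```python
# Minimal sketch of few-shot chain-of-thought prompting: prepend a worked
# example whose answer shows intermediate reasoning, so the model imitates
# the step-by-step style. The example text here is purely illustrative.
def cot_prompt(question: str) -> str:
    example = (
        "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
        "How many balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )
    return example + f"Q: {question}\nA: Let's think step by step."
```

You'd send the returned string to whatever model you're using; the "Let's think step by step." suffix is the zero-shot variant from the follow-up literature, and provider-side "thinking" modes do roughly this for you behind the scenes.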
We're working on precise benchmarks, but we are much faster than Surreal right now. Chroma is a standalone vector DB, so it's harder to compare exactly, but for vectors we're on par with them for insertions and reads.
Again, working on benchmarks so will put them here when we're done :)
I recently wrote an article exploring how OpenAI's fear of public perception has shaped their decisions and branding. It breaks down OpenAI's actions, especially on May 13, 2024, and how their naming decisions and hesitation reveal a deeper fear of public perception. It's not just a critique, but a lesson for all of us: to be bold and overcome fear.