iamdanieljohns's comments

The guy behind Zuban should've put his project out in the open way earlier. I'd love to see both projects succeed, but realistically they should become one.

Zuban may not succeed in terms of the number of users, but it's nearly finished, supports the full Python type system (I'm in the process of completing the conformance tests), has support for Django, and its LSP support is also pretty much complete. So in a technical sense it has already succeeded.

It might not be used as much, but to be honest I think that's fine. I'm not a big VC-funded company and just hope to be able to serve the users it has. There's space for multiple tools in this area and it's probably good to have multiple type checkers in the Python world to avoid the typical VC rug pull.


Zuban continues to have "not great" diagnostics like the rest of the Python type checkers, whereas ty has "Rust-inspired" diagnostics that are extremely helpful. It's a shame to hear that the current state is considered "nearly finished".

Have you tried `--pretty`? That is more of a Rust style. Most type checkers report the short version, but have longer versions of the issues. IMO that's a good choice, but opinions might differ.
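For anyone curious what the two styles look like in practice, here's a tiny file you can point a mypy-compatible checker at. The `zuban` invocations in the comments are assumptions based on its mypy compatibility, not confirmed syntax:

    # example.py -- a deliberate type error for the checker to report
    def greet(name: str) -> str:
        return "Hello, " + name

    # An int where a str is expected; any mypy-compatible checker should flag this.
    message: str = greet(42)

    # Hypothetical invocations (short vs. longer, Rust-style diagnostics):
    #   zuban check example.py
    #   zuban check --pretty example.py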

For real. I consider myself to be “into Python typing,” and yet I had no knowledge of Zuban before the parent comment and a very faint memory of Jedi.

Is Adaptive Reasoning gone from GPT-5.2? It was a big part of the release of 5.1 and Codex-Max. Really felt like the future.

Yes, GPT-5.2 still has adaptive reasoning - we just didn't call it out by name this time. Like 5.1 and Codex-Max, it should do a better job at answering quickly on easy queries and taking its time on harder queries.

Why have "light" or "low" thinking then? I've mentioned this before in other places, but there should only be "none," "standard," "extended," and maybe "heavy."

Extended and heavy are about raising the floor (~25% and ~45%, or some other ratio, respectively), not determining the ceiling.
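Effort is exposed as a per-request knob on the API today. A rough sketch with the OpenAI Python SDK; the model name and the specific effort value here are placeholders, not confirmed for GPT-5.2:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical: request a reasoning effort per call. "gpt-5.2" and the
    # effort name are placeholders; accepted values vary by model family.
    response = client.responses.create(
        model="gpt-5.2",
        reasoning={"effort": "low"},
        input="What's 17 * 24?",
    )
    print(response.output_text)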


Which model do you have?


Epix Gen2 in 51mm

I really use the hell out of it. Yeah I can't play solitaire like an iWatch, but the battery lasts 7 days in the backcountry, the flashlight is unbelievably handy while hiking/camping/boondocking, and it helps me be healthy with all of the data. Being able to trigger my inReach is also a nice touch. It's definitely a tool rather than a fashion piece.


Is the need for Oriole negated by using a system that separates storage from compute, like Neon or Xata?


(Neon CEO)

Not really. OrioleDB solves the vacuum problem with the introduction of an undo log. Neon gives you scale-out storage, which is in a way orthogonal to OrioleDB. With some work you can run OrioleDB AND Neon storage and get the benefits of both.
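For concreteness, OrioleDB ships as an extension plus a table access method, so opting a table into it looks roughly like the sketch below (Python with psycopg; the connection string and table are placeholders, and this assumes a Postgres build with the orioledb extension available):

    import psycopg  # psycopg 3

    # Placeholder connection string; point it at a Postgres build with OrioleDB.
    with psycopg.connect("postgresql://localhost/mydb") as conn:
        with conn.cursor() as cur:
            cur.execute("CREATE EXTENSION IF NOT EXISTS orioledb;")
            cur.execute("""
                CREATE TABLE IF NOT EXISTS events (
                    id bigint PRIMARY KEY,
                    payload jsonb
                ) USING orioledb;  -- undo log instead of vacuum for this table
            """)
        # The with-block commits on successful exit.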


> OrioleDB solves the vacuum problem with the introduction of an undo log.

Way more than just this!

> With some work you can run OrioleDB AND Neon storage and get the benefits of both.

This would require significant design work, given that many of OrioleDB's benefits derive from row-level WAL.


Answering on behalf of Xata, it is orthogonal. I'm curious to try out Oriole on our platform when I get some time.


How does this compare to Supabase/Supavisor?


I don't think Supavisor actually does sharding.


Could you provide some links to relevant work/research on using a "scratchpad" that you liked?


I'm not much of an ML engineer, but I can point you to the original chain-of-thought paper [0] and Anthropic's docs on how to enable their official thinking scratchpad [1]. A minimal sketch follows the links.

[0] https://arxiv.org/pdf/2201.11903

[1] https://docs.anthropic.com/en/docs/build-with-claude/extende...
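Here's that sketch of the extended-thinking option from [1], using the Anthropic Python SDK; the model name and token budget are placeholders:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Placeholder model and budget; budget_tokens must stay below max_tokens.
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2048,
        thinking={"type": "enabled", "budget_tokens": 1024},
        messages=[{"role": "user", "content": "Walk through 23 * 17 step by step."}],
    )

    # The thinking blocks (the scratchpad) come back alongside the final answer.
    for block in response.content:
        if block.type == "thinking":
            print("[scratchpad]", block.thinking)
        elif block.type == "text":
            print(block.text)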


How does it compare to SurrealDB and ChromaDB?


We're working on precise benchmarks, but we are much faster than Surreal right now. Chroma is a standalone vector DB, so it's harder to compare exactly, but for vectors we're on par with them for insertions and reads.

Again, we're working on benchmarks, so we'll post them here when we're done :)


I recently wrote an article exploring how OpenAI's fear of public perception has shaped their decisions and branding. It breaks down OpenAI's actions, especially on May 13, 2024, and how their naming decisions and hesitation reveal that deeper fear. It's not just a critique, but a lesson for all of us: to be bold and overcome fear.


I would love to see a comparison of the major PostgreSQL services such as Citus, EDB, Crunchy, Neon, and some OSS distributions/packages.


What do you think of Supavisor [0]?

[0] https://supabase.com/blog/supavisor-1-million


