This is a great summary, although I think I'm more bearish on SBOMs than Andrew is: my experience integrating them so far (in both pip-audit and uv) has been that there's much more malleability at the representation level than the presence of a standard might imply, and that consumers have adapted (a la Postel) to this reality by being very permissive about the kinds of broken stuff they accept when ingesting third-party SBOMs.
(Case in point: pip-audit's CycloneDX emission was subtly incorrect for years, and nobody noticed[1].)
[1]: https://github.com/pypa/pip-audit/pull/981
So am I right to take away from this that the blog is complaining about Bear Blog (about which I know nothing: is it just hosting, a framework, or does it have a front page like HN somewhere?) hosting too many posts that are optimized for HN? It's not a complaint about HN itself, which by implication cannot be saved?
Edit: as pointed out in another post, they actually do have an aggregation page which is pretty cool: https://bearblog.dev/discover/
TBD how correlated it is with HN
I agree, although I’m guessing they’re measuring “most popular” as in “most beloved” and not as in “most used.” That’s the metric that StackOverflow puts out each year.
I’ve lived here my entire life, and there’s a significant difference between normal “aggressive” driving and many of the driving patterns that have emerged post-COVID. For example: blocking the box is (unfortunately) somewhat normal, while running red lights and making illegal turns have (anecdotally) increased significantly.
One of the biggest problems with GitHub Actions is that, even with fully transitive locking at the action layer, you still can’t really guarantee hermetic execution: lots of actions do implicit version resolution on tools by default. For example, setup-python and friends will select a reasonable version based on the constraints you give it, which may end up being the runner’s pre-installed Python, or a newly released patch version, etc.
Fully pinning action references themselves is a step in the right direction, but the ecosystem as a whole probably has expectations that are misaligned with reproducibility/hermeticity, and those expectations will be challenging to overcome.
"Cause" seems unsubstantiated: I think to justify "cause," we'd need strong evidence that the equivalent bug (or worse) wouldn't have happened in C.
Or another way to put it: clearly this is bad, and unsafe blocks deserve significant scrutiny. But it's unclear how this would have been made better by the code being entirely unsafe, rather than a particular source of unsafety being incorrect.
But it didn't promise to be the solution either. Rust has never claimed, nor have its advocates claimed, that unsafe Rust can eliminate memory bugs. Safe Rust can do that (assuming any unsafe code it relies on is sound), but unsafe Rust cannot be, and has never promised to be, bug-free.
Except that it didn't fail to be the solution: the bug is localized to an explicit escape hatch in Rust's safety rules, rather than being a latent property of the system.
(I think the underlying philosophical disagreement here is this: I think software is always going to have bugs, and Rust can't perfectly eliminate them - and doesn't promise to. Instead, what Rust does promise - and deliver on - is that the entire class of memory safety bugs can be eliminated by construction in safe Rust, and localized, when present, to errors in unsafe Rust. Insofar as that's the promise, Rust has delivered here.)
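To make "localized" concrete, here's a minimal sketch (my own illustration, nothing to do with the actual bug) of what that audit boundary looks like:

    // Entirely safe: the compiler guarantees this can't cause UB,
    // so it needs no safety audit at all.
    fn first(xs: &[u64]) -> Option<u64> {
        xs.first().copied()
    }

    // The same thing via the escape hatch. The only code that can
    // introduce UB is inside `unsafe`, and it carries the invariant
    // the author must uphold.
    fn first_unchecked(xs: &[u64]) -> u64 {
        // SAFETY: the caller must guarantee `xs` is non-empty.
        unsafe { *xs.get_unchecked(0) }
    }

If `first_unchecked` is ever called with an empty slice, that's a bug, but it's a bug you find by auditing this one block rather than the whole program.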
You can label something an "explicit escape hatch" or a "latent property of the system", but in the end such labels are irrelevant. While I agree that it may be easier to review unsafe blocks in Rust compared to reviewing pointer arithmetic, union accesses, and free in C because "unsafe" is a bit more obvious in the source, I think selling this as a game changer was always an exaggeration.
Having written lots of C and C++ before Rust, this kind of local reasoning + correctness by construction is absolutely a game changer. It's just not a silver bullet, and efforts to miscast Rust as incorrectly claiming to be one seem heavy-handed.
Google's feedback seems to suggest Rust actually might be a silver bullet, in the specific sense meant in the "No Silver Bullet" essay.
That essay doesn't say that silver bullets are a panacea or cure-all; a silver bullet is a decimal order of magnitude improvement. The essay gives the example of Structured Programming, an idea which feels so obvious to us today that it goes unspoken, but it's really true that once upon a time people wrote unstructured programs (today the only "language" where you even could do this is assembly, and nobody does) where you just jump arbitrarily to unrelated code and resume execution. The result is fucking chaos, and languages where you never do that delivered a huge improvement even before I wrote my first line of code in the 1980s.
Google did find that sort of effect in Rust over C++.
This is sort of the exact opposite of reality: the point of safe Rust is that it's safe so long as Rust's invariants are preserved, which all other safe Rust preserves by construction. So you only need to audit unsafe Rust code to ensure the safety of a Rust codebase.
(The nuance being that sometimes there's a lot of unsafe Rust, because some domains - like kernel programming - necessitate it. But this is still a better state of affairs than having no code be correct by construction, which is the reality with C.)
I've written lots of `forbid(unsafe_code)` in Rust; it depends on where in the stack you are and what you're doing.
But as the adjacent commenter notes: having unsafe is not inherently a problem. You need unsafe Rust to interact with C and C++, because they're not safe by construction. This is a good thing!
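To make that concrete, here's a minimal sketch of the wrapping pattern, using libc's strlen (the wrapper itself is just my illustration, not any particular crate's API):

    use std::ffi::CStr;
    use std::os::raw::c_char;

    extern "C" {
        // Real libc function: counts bytes up to the NUL terminator.
        fn strlen(s: *const c_char) -> usize;
    }

    // Safe wrapper: the unsafety is contained here, behind an API that
    // can't be misused. A &CStr guarantees a valid, NUL-terminated
    // pointer - exactly the invariant strlen needs.
    fn c_string_len(s: &CStr) -> usize {
        // SAFETY: CStr upholds validity and NUL-termination.
        unsafe { strlen(s.as_ptr()) }
    }

Safe callers of `c_string_len` can't trigger UB no matter what they pass, which is the whole point of the boundary.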
I think unsafe Rust is harder to write than C. However, that's because unsafe Rust makes you think about the invariants that you'd need to preserve in a correct C program, so it's no harder to write than correct C.
In other words: unsafe Rust is harder, but only in an apples-and-oranges sense. If you compare it to the same diligence you'd need to exercise in writing safer C, it would be about the same.
Safe Rust has stricter aliasing requirements than C, so to write sound unsafe Rust that interoperates with safe Rust you need to do more work than the equivalent C code would involve. But per above, this is the apples-and-oranges comparison: the equivalent C code will compile, but is statistically more likely to be incorrect. Moreover, it's going to be incorrect in a way that isn't localizable.
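A deliberately broken sketch of what I mean (my own example): the extra invariant is that `&mut T` asserts exclusive access, which plain C pointers never promise:

    fn main() {
        let mut x = 0u32;
        let p = &mut x as *mut u32;

        // The C equivalent (two pointers to the same object) is fine.
        // Here it's UB the moment both raw pointers become live &mut
        // references, because each &mut asserts uniqueness.
        unsafe {
            let a = &mut *p;
            let b = &mut *p; // UB: aliasing &mut violates Rust's invariant
            *b += 1;
            *a += 1;
        }
    }

Both this and the C version compile; the difference is that the C version is legal, while the Rust version is unsound and a tool like Miri will flag it.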
Ultimately every program depends on things beyond any compiler's ability to verify: for example, that calls into code not written in that language are correct, or, even more fundamentally, if you're writing some embedded program that literally has no interfaces to foreign code at all, that the silicon (both the parts that handle IO and the parts that do the computation) is correct.
The promise of Rust isn't that it can make this fundamentally non-compiler-verifiable (i.e. unsafe) dependency go away; it's that you can wrap the dependency in abstractions that make it safe for users of the dependency, provided the dependency is written correctly.
In most domains Rust doesn't necessitate writing new unsafe code: you rely on the existing unsafe code in your dependencies, which is shared, battle-tested, and reasonably scoped. This is all Rust, or any programming language, can promise. The demand that the dependency tree contain no unsafe isn't the same as the domain necessitating no unsafe; it's the impossible demand that the low-level abstractions every domain relies on be written without unsafe.
Almost all of them. It would be far shorter to list the domains which require unsafe. If you're seeing programmers reach for unsafe in most projects, either you're looking at a lot of low level hardware stuff (which does require unsafe more often than not), or you are seeing cases where unsafe wasn't required but the programmer chose to use it anyway.
Ultimately all software has to touch hardware somewhere. There is no way to verify that the hardware always does what it is supposed to do, because reality is not a computer. At the bottom of every dependency tree in any Rust code there always has to be unsafe code. But because Rust is the way it is, those interfaces are the only places you need to check for incorrectly written code. Everywhere else that is just using safe code is automatically correct as long as the unsafe code was correct.
And that is fine, because those upstream deps can locally ensure that those sections are correct without any risk that some unrelated safe code might misuse them. There is an actual rigorous mathematical proof of this. You have no such guarantees in C/C++.
> And a bug in one crate can cause UB in another crate if that other crate is not designed well and correctly.
Yes! Failure to uphold invariants of the underlying abstract model in an unsafe block breaks the surrounding code, including other crates! That's exactly consistent with what I said. There's nothing special about the stdlib. Like all software, it can have bugs.
What the proof states is that two independently correct blocks of unsafe code cannot, when used together, be incorrect. So the key value there is that you only have to reason about them in isolation, which is not true for C.
I think you're misunderstanding GP. The claim is that the only party responsible for ensuring correctness is the one providing a safe API to unsafe functionality (the upstream dependency in GP's comment). There's no claim that upstream devs are infallible, nor that the consequences of a mistake are necessarily bounded.
Those guys were writing a lot of unsafe Rust and bumped into UB.
I sound like an apologist, but the Rust team stated that “memory safety is preserved as long as Rust's invariants are.” That feels really clear, yet people keep missing this point for some reason, almost as if it's a gotcha that unsafe Rust behaves in the same memory-unsafe way as C/C++ - when that's exactly the point.
Your verification surface is smaller and has a boundary.
> Any large Rust project I check has tons of unsafe in its dependency tree.
This is an argument against encapsulation. All Rust code eventually executes `unsafe` code, because all Rust code eventually interacts with hardware/OS/C-libraries. This is true of all languages. `unsafe` is part of the point of Rust.
And all of it is eventually run on an inherently unsafe CPU.
I cannot understand why we are continuing to have to re-litigate the very simple fact that small, bounded areas of potential unsafety are less risky and easier to audit than all lines of code being unsafe.
It's just moving the goalposts. "If it compiles it works" to "it eliminates all memory bugs" to "well, it's safer than C...".
If Rust doesn't live up to its lofty promises, then it changes the cost-benefit analysis. You might give up almost anything to eliminate all bugs, a lot to eliminate all memory bugs, but what would you give up to eliminate some bugs?
Can you show me an example of Rust promising "if it compiles it works"? This seems like an unrealistic thing to believe, and I've never heard anybody working on or in Rust claim that this is something you can just provide with absolute confidence.
The cost-benefit argument for Rust has always been mediated by the fact that Rust will need to interact with (or include) unsafe code in some domains. Per above, that's an explicit goal of Rust: to provide sound abstractions over unsound primitives that can be used soundly by construction.
> Can you show me an example of Rust promising "if it compiles it works"? [...] and I've never heard anybody working on or in Rust claim that this is something you can just provide with absolute confidence.
I have heard it and I've stated it before. It's never stated with absolute confidence. As I said in another thread, if it were actually true, then Rust wouldn't need an integrated unit testing framework.
It's referring to the experience that Rust learners have, especially when writing relatively simple code: it tends to be hard to misuse libraries in a way that looks correct and compiles but actually fails at runtime. Rust cannot actually provide this guarantee - it's impossible in any language. However, there are a lot of common simple tasks (where there's not much complex internal logic that could be subtly incorrect) in which the interfaces provided by the libraries you're depending on are designed to leverage the type system such that it's difficult to accidentally misuse them.
Take something like initializing an HTTP client: the interfaces make it impossible to obtain an improperly initialized client instance. This is an especially distinct feeling if you're used to dynamic languages, where you often have no assurance at all that you didn't typo a field name.
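A rough sketch of that pattern (a hypothetical `Client`, not any particular crate's API):

    struct Client {
        base_url: String,
    }

    struct ClientBuilder {
        base_url: Option<String>,
    }

    impl ClientBuilder {
        fn new() -> Self {
            ClientBuilder { base_url: None }
        }

        fn base_url(mut self, url: &str) -> Self {
            self.base_url = Some(url.to_string());
            self
        }

        // The only way to obtain a Client. A half-configured client is
        // unrepresentable: you get an Err, never a broken instance.
        fn build(self) -> Result<Client, &'static str> {
            Ok(Client {
                base_url: self.base_url.ok_or("base_url is required")?,
            })
        }
    }

Compare that to a dynamic language, where `client.base_ulr = ...` silently creates a brand-new attribute.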
I've seen (and said) "if it compiles it works," but only when preceded by softening statements like "In my experience," or "most of the time." Because it really does feel like most of the time, the first time your program compiles, it works exactly the way you meant it to.
I can't imagine anybody seriously making that claim as a property of the language.
Yeah, I think the experiential claim is reasonable. It's certainly my experience that Rust code that compiles is more confidence-inspiring than Python code that syntax-checks!
6 days ago: Their experience with Rust was positive for all the commonly cited reasons - if it compiles it works
8 days ago: I have to debug Rust code waaaay less than C, for two reasons: (2) Stronger type system - you get an "if it compiles it works" kind of experience
4 months ago: I've been writing Rust code for a while and generally if it compiles, it works.
5 months ago: If it’s Rust, I can just do stuff and I’ve never broken anything. Unit tests of business logic are all the QA I need. Other than that, if it compiles it works.
9 months ago: But even on a basic level Rust has that "if it compiles it works" experience which Go definitely doesn't.
Some people claim that the quote is hyperbolic because it only covers memory errors. But this bug is a memory error, so ...
GP isn't asking for examples of just anyone making that statement. They're asking for examples of Rust making that promise. Something from the docs or the like.
> Some people claim that the quote is hyperbolic because it only covers memory errors. But this bug is a memory error, so ...
It's a memory error involving unsafe code, so it would be out of scope for whatever promises Rust may or may not have made anyways.
I think it's pretty reasonable to interpret "Language X promises Y" as tantamount to said promise appearing in Language X's definition and/or docs. Claims from devs in their official capacities are likely to count as well.
On the other hand, what effectively random third parties say doesn't matter all that much IMO when it comes to these things, because what they think a language promises has little to no bearing on what the language actually promises. If I find a bunch of randos claiming Rust promises to give me a unicorn for my birthday, it seems rather nonsensical to turn around and criticize Rust for not actually giving me a unicorn on my birthday.
What docs or language definition are you even talking about? Rust doesn't have an official spec. How can Rust make such a promise if it does not even have an official spec or documentation where it can make it?
As the other commenter said, said promises are made by people. The problem comes from the fact that these people are not always just random internet strangers who don't know a thing about programming and say whatever crosses their mind. Sometimes it comes from the authority of the Rust compiler developers themselves (who apparently also don't know anything about programming, considering that they have made such absurd claims...).
Just look at any talk given by them, or any post made on any forum, or, most importantly, the many instances where such statements are made in the Rust Book (which is not an official spec for the language, but it is the closest thing we have, ignoring Ferrocene's spec because rustc is not based on that...).
Also, most public speakers who were extremely vocal about Rust and made cookie-cutter, easy-to-digest content for beginner programmers were dead set on selling the language through empty promises and some weird glorification of its capabilities, bordering on the behaviour of a cult. Cue No Boilerplate, Let's Get Rusty, etc. All of these people have said the "if it compiles, you know it works!" line many times - it's very popular among Rust programmers - and we all know it is not true, because anyone with any experience with Rust will tell you that with unsafe Rust, you can shoot yourself in the foot.
Stop selling smoke, this is a programming language, why must it also be a cult?
> What docs or language definition are you even talking about?
I thought it was pretty clear from context that I was speaking more generally there. Suppose not.
> As the other commenter said, said promises are made by people.
Sure, but when the developers of a language collectively agree that the language they all work on should make a particular promise, I think it's reasonable to condense that to <language> promises <X> rather than writing everything out over and over.
It's kind of similar to how one might say "Company X promises Y" rather than "The management of company X promises Y". It's a convenient shorthand that I think is reasonably understood by most people.
> Rust doesn't have an official spec. How can Rust make such a promise if it does not even have an official spec or documentation where it can make it?
Rust does have official documentation [0]?
And that being said, I don't think a language needs an official spec to make a promise. As far as most programmers are concerned, I'm pretty sure the promises made in language/implementation docs are good enough. K&R was generally good enough for most C programmers before C had a spec, after all (and arguably even after to some extent) :P
> Sometimes, it comes from the authority of the Rust compiler developers themselves []. Just look at any talk given by them, or any post made on any forum
Does it? At least from what I can remember off the top of my head, I don't think I've seen such a claim from official Rust devs speaking in their capacity as such. Perhaps you might have links to such?
> or, most importantly, the many instances where such statements are made on the Rust book
Are there "many instances"? `rg 'compiles.*it.*works` turns up precisely one (1) instance of that statement in the Rust book [1] and slight variations on that regex don't turn up any additional instances. What the book says portrays that statement in a slightly different light than you seem to think:
> Note: A saying you might hear about languages with strict compilers, such as Haskell and Rust, is “If the code compiles, it works.” But this saying is not universally true. Our project compiles, but it does absolutely nothing! If we were building a real, complete project, this would be a good time to start writing unit tests to check that the code compiles and has the behavior we want.
I wouldn't claim that my search was comprehensive, though, and I also can't claim to know the Rust Book from cover to cover. Maybe you know some spots I missed?
> which is not an official spec for the language, but it is the closest thing we have
I believe that particular honor actually goes to the Rust Reference [2].
> Also most public speakers who were extremely vocal about Rust and made cookie cutter and easy to digest content for beginner programmers were dead set on selling the language through empty promises and some weird glorification of its capabilities, bordering the behaviour of a cult. Cue in, No Boilerplate, Let's Get Rusty, etc... all of these people have said many times the "if it compiles, you know it works!" statement, which is very popular among Rust programmers, and we all know that that is not true, because anyone with any experience with Rust will be able to tell you that with unsafe Rust, you can shoot yourself in the foot.
Again, I don't think what unrelated third parties say has any bearing on what a language actually promises. Rust doesn't owe me a unicorn on my birthday no matter how many enthusiastic public speakers I find.
I've also said it, with the implication that the only remaining bugs are likely to be ones in my own logic. Like, suppose I'm writing a budget app and haven't gone to the lengths of making Debit and Credit their own types. I can still accidentally subtract a debit from a balance instead of adding to it. But unless I've gone out of my way to work around Rust's protections, e.g. with unsafe, I know that parts of my code aren't randomly mutating immutables, or opening up subtle use-after-free situations, etc. Now I can spend all my time concentrating on the program's logic instead of tracking those other thousands of gotchas.
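And when the logic matters enough, the newtype version is cheap - a rough sketch of the Debit/Credit types I mentioned:

    #[derive(Clone, Copy)]
    struct Debit(i64);
    #[derive(Clone, Copy)]
    struct Credit(i64);
    #[derive(Clone, Copy)]
    struct Balance(i64);

    impl Balance {
        // The sign convention lives in exactly one place, so "subtracted
        // a debit instead of adding it" stops being a whole-program bug.
        fn apply_debit(self, d: Debit) -> Balance {
            Balance(self.0 - d.0)
        }
        fn apply_credit(self, c: Credit) -> Balance {
            Balance(self.0 + c.0)
        }
    }

After that, `balance.apply_credit(some_debit)` is a compile error rather than a wrong number.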
It's not moving the goalposts at all. I'm not a Rust programmer, but for years the message has been the same. It's been monotonous and tiring, so I don't know why you think it's new.
Safe Rust code is safe. You know where unsafe code is, because it's marked as unsafe. Yes, you will need some unsafe code in any notable project, but at least you know where it is. If you don't babysit your unsafe code, you get bad things. Someone didn't do the right thing here, and I'm sure there will be a post-mortem and lessons learned.
For the comparison to be fair, imagine that in C you had to mark potentially-UB code with ub{} for it to compile. Until C has that, Rust is still a clear leader.
If there are specific incompatibilities or rough edges you're running into, we're always interested in hearing about them. We try pretty hard to provide a pip compatibility layer[1], but Python packaging is non-trivial and has a lot of layers and caveats.
Is there any plan for a non-“compatibility layer” way to do anything manual or nontrivial? uv sync and uv run are sort of fine for developing a distribution/package, but they’re not exactly replacements for anything else one might want to do with the pip and venv commands.
As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps. Unless I’ve missed something, if I make a change to a source tree that uv sync doesn’t notice, I’m stuck with uv pip install -e ., which is a wee bit disappointing and feels a bit gross. I suppose I could try to put something correct into cache-keys, but this is fundamentally wrong. The list of files in my source tree that need to trigger a refresh is something that my build system determines when it builds. Maybe there should be a way to either plumb that into uv’s cache or to tell uv that at least “uv sync” should run the designated command to (incrementally) rebuild my source tree?
(Not that I can blame uv for failing to magically exfiltrate metadata from the black box that is hatchling plus its plugins.)
> Is there any plan for a non-“compatibility layer” way to do anything manual or nontrivial?
It's really helpful to have examples for this, like the one you provide below (which I'll respond to!). I've been a maintainer and contributor to the PyPA standard tooling for years, and once uv "clicked" for me I didn't find myself having to leave the imperative layer (of uv add/sync/etc) at all.
> As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps.
Could you say more about your setup here? By "rebuild steps" I'm inferring you mean an editable install (versus a sdist/bdist build) -- in general `uv sync` should work in that scenario, including for non-trivial things where e.g. an extension build has to be re-run. In other words, if you do `uv sync` instead of `uv pip install -e .`, that should generally work.
However, to take a step back from that: IMO the nicer way to use uv is to not run `uv sync` that much. Instead, you can generally use `uv run ...` to auto-sync and run your development tooling within an environment that includes your editable installation.
By way of example, here's what I would traditionally do:
    python -m venv .env
    source .env/bin/activate
    python -m pip install -e .[dev]  # editable install with the 'dev' extra
    pytest ...
    # re-run the install if there are things a normal editable install can't transparently sync, like extension builds
Whereas with uv:
    uv run --dev pytest ...  # uses pytest from the 'dev' dependency group
That single command does everything pip and venv would normally do to prep an editable environment and run pytest. It also works across re-runs, since it'll run `uv sync` as needed under the hood.
My setup is a mixed C/C++/Python project. The C and C++ code builds independently of the Python code (using waf, but I think this barely matters -- the point is that the C/C++ build is triggered by a straightforward command and that it rebuilds correctly based on changed source code). The Python code depends on the C/C++ code via ctypes and cffi (which load a .so file produced by the C/C++ build), and there are no extension modules.
Python builds via [tool.hatch.build.targets.wheel.hooks.custom] in pyproject.toml and a hatch_build.py that invokes waf and force-includes the .so files into useful locations.
Use case 1: Development. I change something (C/C++ source, the waf configuration, etc.) and then try to run Python code (via uv sync, uv run, or activating a venv with an editable install). Since there doesn't seem to be a way to have the build feed dependencies out to uv (this seems to be a deficiency in PEP 517/660), I either need to somehow statically generate cache-keys (sketched below) or resort to reinstall-package to get uv commands to notice when something changed. I can force the issue with uv pip install -e ., although apparently I can also force the issue with uv run/sync --reinstall-package [distro name]. [0] So I guess uv pip is not actually needed here.
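For reference, this is the static cache-keys workaround I mean (a sketch: the globs are my guess at what a C/C++ tree like mine would need, and hand-maintaining this list is exactly the fundamentally-wrong part, since the build system already knows the real dependency set):

    [tool.uv]
    # Reinstall the project whenever any of these change.
    cache-keys = [
        { file = "pyproject.toml" },
        { file = "hatch_build.py" },
        { file = "wscript" },
        { file = "src/**/*.c" },
        { file = "src/**/*.h" },
    ]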
It would be very nice if there was an extension to PEP 660 that would allow the editable build to tell the front-end what its computed dependencies are.
Use case 2: Production
IMO uv sync and uv run have no place in production. I do not want my server to resolve dependencies or create environments at all, let alone by magic, when I am running a release of my software built for the purpose.
My code has had a script to build a production artifact since long before pyproject.toml or uv was a thing, and even before virtual environments existed (!). The resulting artifact makes its way to a server, and the code in it gets run. If I want to use dependencies as found by uv, or if I want to use entrypoints (a massive improvement over rolling my own way to actually invoke a Python program!), as far as I can tell I can either manually make and populate a venv using uv venv and uv pip, or I can use UV_PROJECT_ENVIRONMENT with uv sync and abuse uv sync to imperatively create a venv.
Maybe some day uv will come up with a better way to produce production artifacts. (And maybe in the distant future, the libc world will come up with a decent way to make C/C++ virtual environments that don't rely on mount namespaces or chroot.)
[0] As far as I can tell, the accepted terminology is that the thing produced by a pyproject.toml is possibly a "project" or a "distribution" and that these are both very much distinct from a "package". I think it's a bit regrettable that uv's option here is spelled like it rebuilds a _package_ when the thing you feed it is not the name of a package and it does not rebuild a particular package. In uv's defense, PEP 517 itself seems rather confused as well.
uv needs to support creation of zipapps, like pdm does (what pex does standalone).
Various tickets asking for it, but they also want to bundle in the python interpreter itself, which is out of scope for a pyproject.toml manager: https://github.com/astral-sh/uv/issues/5802