I highly doubt (and some quick checks seem to verify that) any of the tiny CC implementations will support the cleanup extension that most of this post's magic hinges upon.
Yeah it depends on where the producer expects the CD to be played.
99% of music is made to be played on radio / in the car etc., a noisy environment, where you don't want to be adjusting the volume knob all the time. So the dynamics are stripped in the mastering phase.
Music that gets pressed on vinyl isn't mastered for car play but for home stereo equipment, so it makes more sense to have a larger dynamic range.
CDs have an objectively lower noise floor (less hissing) and more dynamic range (the difference between the loudest and quietest note), but it's the mastering that usually destroys the sound. And nothing can be done about it on the consumer end. Except finding a less remastered version of the album in a thrift store that isn't scratched to oblivion.
There's really no reliable way to tell whether a CD is going to have high dynamic range, except perhaps from niche audiophile studios like https://www.stockfisch-records.de/sf12_start_e.html. But https://dr.loudness-war.info/ has a fantastic list of records with their dynamic ranges, so you can check before you buy, and you can also explore and find new stuff to use to listen to your speakers ;)
Does anyone know of a Wayland WM/compositor that does multi-screen like XMonad? Preferably out of the box but I'll take configurable.
For those unaware, though I doubt you're reading this thread if so, I want n desktops that are shared between all screens, not desktops _assigned_ to particular screens. If I summon a desktop on screen 1 and it's currently displayed on screen 2, they should swap.
Ideally it also does layouts kind of like xmonad, not "here's a tiling tree structure and a bunch of commands to manually manage it".
> If I summon a desktop on screen 1 and it's currently displayed on screen 2, they should swap.
At least i3's (and I imagine sway's) config is sufficiently flexible for that. Here's a shell function that brings the workspace you specify to your current output:
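A minimal sketch of such a function (the name `bring_ws` is mine, and it assumes `i3-msg` and `jq` are installed; for sway, substitute `swaymsg`). Note this brings the workspace over rather than doing a full xmonad-style swap — i3 will just show the next visible workspace on the output it left:

```shell
# Bring workspace $1 to whichever output currently has focus.
bring_ws() {
    # Remember the currently focused output before switching.
    out=$(i3-msg -t get_workspaces | jq -r '.[] | select(.focused) | .output')
    # Focus the target workspace (possibly on another output),
    # pull it over to the output we started on, and follow it.
    i3-msg "workspace $1; move workspace to output $out; workspace $1"
}
```

You'd typically bind this via a wrapper script, e.g. `bindsym $mod+1 exec bring_ws 1` for each workspace number.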
> If I summon a desktop on screen 1 and it's currently displayed on screen 2, they should swap.
This is due to a limitation of X11, where a window can't be in two places at once. In theory, Wayland compositors can duplicate the desktop (at the cost of, like, not letting applications know their own window coordinates, which seems pretty bad).
Yeah, I switched from XMonad (which I used for over a decade) to Sway a few years back. Spent some time trying to duplicate the XMonad behaviour but eventually just realized that spending a few hours getting used to the Sway approach and slightly changing my workflow was a lot easier.
I believe Synology runs btrfs on top of regular mdraid + lvm, possibly with patches to let btrfs checksum failures reach into the underlying layers to find the right data to recover.
The chin gives you a good touch-point for adjusting the angle of the display and the rotation angle of the entire base, without having to worry about touching the screen/screen bezel and getting fingerprints on it.
> The iMac is basically the same as the M4 iPad Pro, and the iPad Pro doesn't have a chin.
Cooling seems like it might be a factor here. The iMac's display is probably going to be run at a brighter (and thus hotter) setting AND it's more likely to be used to do things that require high load for extended periods of time, so putting it in its own space probably helps.
Yes, that's mostly the reason. But considering there are reports of display issues like we used to see on poorly cooled Intel iMacs (those things would get to 90 degrees at the PSU, while being over 50 degrees on the outside of the aluminum case), I would say this design is largely a failure.
They should just have put everything in the foot; that would have made sense.
Some sort of modern Sunflower iMac if you will.
But Apple is more obsessed with thinness than practical design, so we get an impossibly thin iMac with all the flaws that brings...
iMac has active cooling, more ports and more power available to it to drive those ports (though the PSU is external, it’s still gotta have the internal circuitry to deliver that).
For something that's literally designed to sit on a desk, yes... it's ridiculous to make it thinner in a dimension you never see vs one that you see all the time.
From the ifixit teardown of the previous M1 model [1], it seems that all the compute is going in the chin.
They can't put the compute in the back of the display itself while maintaining the same thickness as an iPad (which has the same CPU), because the room behind the display is dominated by the speaker system, which allows the iMac to have surprisingly good audio quality despite being so thin.
Someone got it into their mind that it was important for everything to be as thin as possible - hence the chin.
I miss the times when they used the form factor to actually make new shapes - both the sunflower and the cube look more futuristic than the 2024 iMac.
Rust is a step sideways if anything. Yeah, you don't have manual memory management headaches in .NET, but you also don't have Rust's fairly strong compile-time guarantees about memory sharing and thread safety.
Which enables stuff like rayon where you can basically blindly replace map with parallel map and if it compiles, it _should_ be safe to run.
(I'm not super familiar with the .NET ecosystem, so it's quite possible there's equivalent tooling for enforced thread safety. I haven't heard much about it though, if so.)
FWIW .NET has many alternatives to popular Rust packages. Features provided by Rayon come out of the box - it's your Parallel.For/Each/Async and arr.AsParallel().Select(...), etc. Their cost is going to be different, but I consistently find the TPL's underlying heuristics pretty fool-proof at ensuring even, optimal load distribution. Rayon is likely going to be cheaper on fine-grained work items, but for coarse-grained ones there will be little to no difference.
I think the main advantages of Rust are: its heavier reliance on static dispatch when e.g. writing iterator expressions (the underlying type system as of .NET 8 makes them equally possible - a guest language could choose to emit ref struct closures that reference values on the stack, but C# can never take such a change because it would be massively breaking; a mention goes to .NET monomorphizing struct generics in the exact same way it happens in Rust); fearless concurrency; access to the large set of tools that already serve C/C++, plus confidence that LLVM is going to be much more robust against complex code; and of course deterministic memory reclamation, which gives it its signature low memory footprint. Rust is systems-programming-first, while C# is a strong systems-programming second.
Other than that, C# has good support for features that allow you to write allocation-free code or code that relies on manual memory management. It also has a direct counterpart to Rust's slice - Span<T>, which transparently interoperates with both managed and unmanaged memory.
Unfortunately there is no short answer to this. The main gist is that improving this to take advantage of all the underlying type system and compiler features would require a new API for LINQ, improvements to generic signature inference in C# (and possibly Rust-like associated types support), and a similar new API to replace the regular delegates used by lambdas, anonymous functions, etc. with "value delegates" dispatched via a generic argument on the methods accepting them - possibly with the 'allows ref struct' restriction, a new feature that declares that a T may be a ref struct and must not be boxed, since it can contain stack references or references to a scope that would be violated by moving it to the heap.
There have been many projects to improve this, like https://github.com/dubiousconst282/DistIL, and community libraries that reimplement LINQ with structs and full monomorphization. But the nature of most projects written in C# means their developers usually aren't interested in, or don't need, zero-cost-like abstractions, which limits adoption. For C# itself, it would need to evolve and willingly accept a complete copy of the existing LINQ APIs with new semantics - which is considered (and I agree) a bad tradeoff, since the simpler cases can eventually be handled through compiler improvements, especially now that escape analysis is back on the menu.
Which is why, in order to "properly" provide a Rust-like cost model of abstractions as a first-class citizen, only a new language targeting .NET could do so. Alternatively, F# has more leeway in what it compiles its inferred types to, but it's a small team and, as a language, F# has different priorities as far as I know.
Yeah it was specifically the (presumed) lack of Rust's "fearless" concurrency that I was referring to... i.e. we can ram this data through a parallel map, but is it actually safe to do?
(And of course the flip side of Rust here is that you need to be able to figure out how to represent your code and data to make it happy, which provides new and interesting limitations to how you can write stuff without delving into "unsafe" territory... something something TANSTAAFL)
> we can ram this data through a parallel map, but is it actually safe to do?
Most of the time - it is, sort of. As in, accessing types that are not meant for concurrent access may lead to logic bugs, but the chance of this violating memory safety is almost nonexistent with the standard library and slim with community ones (for example, such library may use a native dependency which itself is not thread-safe, usually it's clear whether this is the case or not but the risk exists).
The common scenarios are well-known - use Interlocked.Add instead of +=, ConcurrentDictionary<K, V> instead of a plain one, etc. .AsParallel() itself is already able to collect the data in parallel - you just use .ToArray and call it a day.
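A minimal sketch of the "+= vs Interlocked.Add" point (the class name is mine): the racy version can silently lose increments, while the atomic one cannot - but note that, per the above, losing updates is a logic bug, not a memory-safety violation.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InterlockedDemo
{
    static void Main()
    {
        int racy = 0;
        int atomic = 0;

        // += compiles to a non-atomic read-modify-write,
        // so concurrent increments can be lost.
        Parallel.For(0, 1_000_000, _ => racy += 1);

        // Interlocked.Add performs the whole update atomically.
        Parallel.For(0, 1_000_000, _ => Interlocked.Add(ref atomic, 1));

        Console.WriteLine($"racy: {racy}, atomic: {atomic}");
        // atomic is always 1000000; racy is typically lower.
    }
}
```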
Other than that, most APIs that are expected to be used concurrently are thread-safe - off the top of my head: HttpClient, Socket, JsonSerializerOptions, Channel<T> and its Reader/Writer can be shared by many threads (unless you specify single reader/writer on construction to reduce synchronization). Task<T> can be awaited multiple times, by multiple threads too. A lot of C# code already assumes concurrent execution, and existing concurrency primitives usually reduce the need for explicit synchronization. Worst case someone just slaps lock (instance) { ... } on it and gets on with their life.
This is to say, Rust provides watertight guarantees in a way C# is simply unable to, and when you are writing low-level code, you are on your own in C#, whereas Rust has your back. But in other situations - C# is generally not known to suffer from race conditions, and async/await usually allows data to flow in a linear fashion in multi-tasking code, letting the underlying implementation do the synchronization for you.
As @neosunset says, we have a lot of good options, but I've not come across anything which strictly guarantees thread safety. In practice, issues are uncommon and easy to identify and fix.
Honestly I'm more interested in code contracts than Rust, as they allow you to make a set of statements about your system which can then be validated statically. I've had very good results using them (and am forever grateful to the colleague who introduced me to them)... and I am interested in Rust, having dabbled with it - I'm just yet to use it in a paying or production project.
So sideways, as you said - with C# you can usually rely on the GC for memory management, and you get fast compile times, what I feel is a more flexible model, top-tier tooling and a huge, wide range of libraries. Rust can be much more efficient and has much better language-level properties wrt thread safety, and a more mature story around native code.
I'd use Rust for system-level code or places where I'd otherwise think of using C++. I keep having discussions with people who want to use Rust for everything - CLI tools, web services (which are mostly just pumping data around), business logic and the like. It's getting tiring. The trade-offs are bad. C#, Go and Java are all far better suited and cover pretty much all those niches.
Does Rust have anything to spot resource leaks (i.e. the infamous IDisposable in C#)?
A local station had (I think past tense, though it made it a lot less likely for me to go there to check) their pumps playing ads in "attract mode" when nobody was using them. So going there late at night and filling up involved listening to a poorly-timed round of "BUY NOW" utterances from eight different sources (because of course they weren't synchronized). And you couldn't really mute it because it was all the other pumps.
Same, which is one reason I'm dreading the eventual Xpocalypse, as I have not found a drop-in replacement for Xmonad in the Wayland universe. (Particularly, the way Xmonad deals with multiple monitors and desktops just makes sense to me.)
(Agree on your other points for what it's worth.)