phs2501's comments

I highly doubt (and some quick checks seem to confirm it) that any of the tiny C compiler implementations support the cleanup extension that most of this post's magic hinges upon.

(Agree on your other points for what it's worth.)


TinyCC supports cleanup[1], onramp[2] supports C2y defer (which is a superset), slimcc[3] supports both.

[1] https://godbolt.org/z/hvj9vcncG

[2] https://github.com/ludocode/onramp

[3] https://github.com/fuhsnn/slimcc



The archive.org variation appears to be missing both samples and descriptions.

A ported version got made over at: https://ishkur.kenxaj.cyou/

From a cursory inspection it appears to have all the music files and descriptions, although it's missing Tools, Samples, and Sounds.

Gives credit to Ishkur, recommends checking v3, and was made using: https://github.com/igorbrigadir/ishkurs-guide-dataset/


That's what I get for stopping at the front page after it looked just like I remember. :/

Thanks for the other link.


Usually that means the record was mastered differently (because you literally physically can't make a record as "loud" as a CD).

It's not the CD's fault; it's the mastering engineers'.


Yeah it depends on where the producer expects the CD to be played.

99% of music is made to be played on the radio, in the car, etc.: noisy environments where you don't want to be adjusting the volume knob all the time. So the dynamics are stripped out in the mastering phase.

Music that gets pressed on vinyl isn't mastered for the car but for home stereo equipment, so it makes more sense to keep a larger dynamic range.

CDs have an objectively lower noise floor (less hiss) and more dynamic range (the difference between the loudest and quietest note), but it's the mastering that usually destroys the sound, and nothing can be done about it on the consumer end, except finding a less remastered version of the album in a thrift store that isn't scratched to oblivion.

There's really no reliable way to tell if a CD is going to have high dynamic range, except perhaps from niche audiophile studios like https://www.stockfisch-records.de/sf12_start_e.html. But https://dr.loudness-war.info/ has a fantastic list of records with their dynamic ranges, so you can check before you buy, and you can also explore and find new stuff to use to listen to your speakers ;)


Does anyone know of a Wayland WM/compositor that does multi-screen like XMonad? Preferably out of the box but I'll take configurable.

For those unaware, though I doubt you're reading this thread if so, I want n desktops that are shared between all screens, not desktops _assigned_ to particular screens. If I summon a desktop on screen 1 and it's currently displayed on screen 2, they should swap.

Ideally it also does layouts kind of like XMonad, not "here's a tiling tree structure and a bunch of commands to manually manage it".


Qtile. It has the corresponding layouts, too.

https://docs.qtile.org/en/stable/manual/ref/layouts.html#mon...


> If I summon a desktop on screen 1 and it's currently displayed on screen 2, they should swap.

At least i3's (and I imagine sway's) config is sufficiently flexible for that. Here's a shell function that brings the workspace you specify to your current output:

  # Bring the named i3 workspace to the currently focused output.
  i3_bring_workspace_to_focused_output() {
    local workspace="$1"
    # The $(...) runs first, capturing the focused output *before* we
    # switch workspaces; the ';' separates the two i3 commands.
    i3 "
      workspace $workspace;
      move workspace to output $(
        i3-msg -t get_workspaces |
        jq -r '.[]|select(.focused).output'
      )
    "
  }
You can turn that into an executable and have it be called through a keybinding.


> If I summon a desktop on screen 1 and it's currently displayed on screen 2, they should swap.

This is due to a limitation of X11 where a window can't be in two places at once. In theory, Wayland compositors can duplicate the desktop (at the cost of, like, not letting applications know their own window coordinates, which seems pretty bad).


> at the cost of, like, not letting applications know their own window coordinates which seems pretty bad

I'm pretty sure Wayland applications already don't know their coordinates.


Exactly.


That's... fine, and a cool trick I guess, but I don't actually want that behavior.


Sway can more or less do it, and I've switched from XMonad to Sway, configured similarly to XMonad's default method of operation.

I haven't managed to get it quite right, though. For example, Sway doesn't seem to be willing to move a workspace to a different monitor if it is empty.


Yeah, I switched from XMonad (which I used for over a decade) to Sway a few years back. Spent some time trying to duplicate the XMonad behaviour but eventually just realized that spending a few hours getting used to the Sway approach and slightly changing my workflow was a lot easier.


I just taught myself to look at the end of the UUID, rather than the beginning.


I believe Synology runs btrfs on top of regular mdraid + lvm, possibly with patches to let btrfs checksum failures reach into the underlying layers to find the right data to recover.

Related blog post: https://daltondur.st/syno_btrfs_1/


That was very interesting reading, thanks!


Why does this still have the ridiculous iMac chin? Surely they can fit everything behind the screen at this point.


The chin gives you a good touch-point for adjusting the angle of the display and the rotation angle of the entire base, without having to worry about touching the screen/screen bezel and getting fingerprints on it.

It's also a great place to tack post-it notes.


Sticking notes... Not everyone understands how necessary this is for some people.


No chin needed; the angle can be adjusted fine on basically any other display on the market today.


You can do that with a regular monitor too


Of course you can, but it's nicer with a larger surface area to lever on.

There's a reason ergotron puts handles on many of its monitor mounts.


I think they keep the chin because it's the only thing that visually indicates that this is an iMac and not a monitor, and thus worth more than $500.


It makes a lot more sense if you look at the iFixit teardown. https://www.ifixit.com/Teardown/iMac+M1+24-Inch+Teardown/142...


Does it though?

The iMac is basically the same as the M4 iPad Pro, and the iPad Pro doesn't have a chin.


> The iMac is basically the same as the M4 iPad Pro, and the iPad Pro doesn't have a chin.

Cooling seems like it might be a factor here. The iMac's display is probably going to be run at a brighter (and thus hotter) setting AND it's more likely to be used to do things that require high load for extended periods of time, so putting it in its own space probably helps.


Yes, that's mostly the reason. But considering there are reports of display issues like we used to see on poorly cooled Intel iMacs (those things would get to 90 degrees at the PSU and over 50 degrees on the aluminum case outside), I would say this design is largely a failure.

They should have just separated everything into the foot; that would have made sense. Some sort of modern Sunflower iMac, if you will. But Apple is more obsessed with thinness than practical design, so we get an impossibly thin iMac with all the flaws that brings...


iMac has active cooling, more ports and more power available to it to drive those ports (though the PSU is external, it’s still gotta have the internal circuitry to deliver that).

Those all do have to go somewhere.


They literally can't. They moved the headphone jack from the back to the side because it was too long.

Now you could argue whether it needs to be that thin, but in the current configuration there's nothing more you can cram behind the screen.


For something that's literally designed to sit on a desk, yes... it's ridiculous to make it thinner in a dimension you never see vs one that you see all the time.


Aesthetics are also for the environment the object sits in, not just the primary user. That's the reason the logo is on the back.


One more vote for aesthetics here. I put a lot of effort into making my home beautiful. iMacs respect/complement that effort for me.


Many of these are customer service desks which are visible from the side.


iMac has always been a device to be seen with, if not for the user then for the manufacturer.


From the iFixit teardown of the previous M1 model [1], it seems that all the compute goes in the chin.

They can't put the compute behind the display itself while maintaining the same thickness as an iPad (which has the same CPU), because the room behind the display is dominated by the speaker system, which is what lets the iMac have surprisingly good audio quality despite being so thin.

[1] https://www.ifixit.com/Teardown/iMac+M1+24-Inch+Teardown/142...


Surely we are beyond concern with bezels, chins, and other frivolous mobile phone aesthetics at this point.


Someone got it into their mind that it was important that everything be as thin as possible - hence the chin.

I miss the times when they used the form factor to actually make new shapes - both the Sunflower and the Cube look more futuristic than the 2024 iMac.


where do you put ur sticky notes?


These look hideous tbh. I'm waiting for the iMac to flip vertically and ask me to tip.


Rust is a step sideways if anything. Yeah, you don't have manual memory management headaches in .NET, but you also don't have Rust's fairly strong compile-time guarantees about memory sharing and thread safety.

Which enables stuff like rayon where you can basically blindly replace map with parallel map and if it compiles, it _should_ be safe to run.
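
A minimal sketch of that swap (assuming the rayon crate as a dependency; the function and names are just illustrative):

  use rayon::prelude::*;

  fn squares(xs: &[u64]) -> Vec<u64> {
      // Sequential version: xs.iter().map(|x| x * x).collect()
      // The parallel version changes only the iterator constructor;
      // rayon schedules the work across its thread pool.
      xs.par_iter().map(|x| x * x).collect()
  }

  fn main() {
      println!("{:?}", squares(&[1, 2, 3, 4]));
  }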

(I'm not super familiar with the .NET ecosystem, so it's quite possible there's equivalent tooling for enforced thread safety. I haven't heard much about it though, if so.)


FWIW .NET has many alternatives to popular Rust packages. The features provided by Rayon come out of the box - it's your Parallel.For/Each/Async and arr.AsParallel().Select(...), etc. Their costs differ, but I consistently find the TPL's underlying heuristics pretty foolproof at ensuring even and optimal load distribution. Rayon is likely going to be cheaper on fine-grained work items, but for coarse-grained ones there will be little to no difference.

I think the main advantages of Rust are its heavier reliance on static dispatch when e.g. writing iterator expressions (the underlying type system as of .NET 8 makes them equally possible - a guest language could choose to emit ref struct closures that reference values on the stack, but C# can never take such a change because it would be massively breaking; a mention goes to .NET monomorphizing struct generics in the exact same way it happens in Rust), fearless concurrency, access to a large set of tools that already serve C/C++, confidence that LLVM is going to be much more robust against complex code, and of course deterministic memory reclamation that gives it its signature low memory footprint. Rust is systems-programming-first, while C# is systems-programming-strong-second.

Other than that, C# has good support for features that let you write allocation-free code or code that relies on manual memory management. It also has a direct counterpart to Rust's slices - Span<T> - which transparently interoperates with both managed and unmanaged memory.


> but C# can never take such a change because it would be massively breaking

Out of interest, why?


Unfortunately there is no short answer to this. But the main gist is that improving this to take advantage of all the underlying type system and compiler features would require a new API for LINQ, improvements to generic signature inference in C# (and possibly Rust-like associated-types support), and a similar new API to replace the regular delegates used by lambdas, anonymous functions, etc. with "value delegates" dispatched by generic argument to the methods accepting them, possibly with an 'allows ref struct' restriction, a new feature that clarifies that a T may be a ref struct and is not allowed to be boxed, as it can contain stack references or references to a scope that would be violated by moving it to the heap.

There have been many projects to improve this, like https://github.com/dubiousconst282/DistIL, and community libraries that reimplement LINQ with structs and full monomorphization. But the nature of most projects written in C# means their developers usually aren't interested in, or don't need, zero-cost-like abstractions, which limits adoption. And C# itself would need to evolve and willingly accept a complete copy of the existing LINQ APIs with new semantics, which is considered (and I agree) a bad tradeoff, since the simpler cases can eventually be handled through compiler improvements, especially now that escape analysis is back on the menu.

Which is why, in order to "properly" provide a Rust-like cost model of abstractions as a first-class citizen, only a new language targeting .NET would be able to do so. Alternatively, F# has more leeway in what it compiles its inferred types to, but it's a small team, and as a language F# has different priorities as far as I know.


Thank you! Very interesting


Yeah it was specifically the (presumed) lack of Rust's "fearless" concurrency that I was referring to... i.e. we can ram this data through a parallel map, but is it actually safe to do?
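
(In Rust, the compiler answers that question for you: code that isn't safe to run in parallel simply doesn't compile. A small sketch of what that looks like, again assuming rayon, with illustrative names:

  use rayon::prelude::*;
  use std::sync::atomic::{AtomicU64, Ordering};

  fn main() {
      let data = vec![1u64, 2, 3];

      // Rejected at compile time: mutating a plain captured u64 makes
      // the closure FnMut, which for_each can't share across rayon's
      // worker threads.
      // let mut sum = 0u64;
      // data.par_iter().for_each(|x| sum += x);

      // Compiles: the shared state is explicitly thread-safe.
      let sum = AtomicU64::new(0);
      data.par_iter().for_each(|x| { sum.fetch_add(*x, Ordering::Relaxed); });
      println!("{}", sum.load(Ordering::Relaxed));
  }

)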

(And of course the flip side of Rust here is that you need to be able to figure out how to represent your code and data to make it happy, which provides new and interesting limitations to how you can write stuff without delving into "unsafe" territory... something something TANSTAAFL)

Good info though; thanks!


> we can ram this data through a parallel map, but is it actually safe to do?

Most of the time - it is, sort of. As in, accessing types that are not meant for concurrent access may lead to logic bugs, but the chance of this violating memory safety is almost nonexistent with the standard library and slim with community ones (for example, such a library may use a native dependency which itself is not thread-safe; usually it's clear whether this is the case, but the risk exists).

The common scenarios are well-known: use Interlocked.Add instead of +=, ConcurrentDictionary<K, V> instead of a plain one, etc. .AsParallel() itself is already able to collect the data in parallel - you just use .ToArray() and call it a day.

Other than that, most APIs that are expected to be used concurrently are thread-safe - off the top of my head: HttpClient, Socket, JsonSerializerOptions, Channel<T> and its Reader/Writer can all be shared by many threads (unless you specify single reader/writer on construction to reduce synchronization). Task<T> can be awaited multiple times, by multiple threads too. A lot of C# code already assumes concurrent execution, and the existing concurrency primitives usually reduce the need for explicit synchronization. Worst case, someone just slaps lock (instance) { ... } on it and gets on with their life.

This is to say, Rust provides watertight guarantees in a way C# is simply unable to, and when you are writing low-level code you are on your own in C#, whereas Rust has your back. But in other situations C# is generally not known to suffer from race conditions, and async/await usually lets data flow in a linear fashion through multi-tasking code, allowing the underlying implementation to do the synchronization for you.


As @neosunset says, we have a lot of good options, but I've not come across anything which strictly guarantees thread safety. In practice, issues are uncommon and easy to identify/fix.

Honestly I'm more interested in code contracts than Rust, as they allow you to make a set of statements about your system which can then be validated statically. I've had very good results using them (and am forever grateful to the colleague who introduced me to them)... and I am interested in Rust (having dabbled with it, though I'm yet to use it in a paying or production project).

So, sideways, as you said: with C# you can usually rely on the GC for memory management, and you get fast compile times, what I feel is a more flexible model, top-tier tooling, and a huge range of libraries. Rust can be much more efficient and has much better language-level properties wrt thread safety, plus a more mature story around native code.

I'd use Rust for system-level code or places where I'd otherwise think of using C++. I keep having discussions with people who want to use Rust for everything: CLI tools, web services (which are mostly just pumping data around), business logic and the like. It's getting tiring. The trade-offs are bad. C#, Go and Java are all far better suited and cover pretty much all those niches.

Does Rust have anything to spot resource leaks (i.e. the infamous IDisposable in C#)?


A local station had (past tense, I think, though it made me a lot less likely to go there and check) their pumps playing ads in "attract mode" when nobody was using them. So going there late at night to fill up involved listening to a poorly-timed round of "BUY NOW" utterances from eight different sources (because of course they weren't synchronized). And you couldn't really mute it, because it was all the other pumps.

It was horrible.


Same, which is one reason I'm dreading the eventual Xpocalypse, as I have not found a drop-in replacement for Xmonad in the Wayland universe. (Particularly, the way Xmonad deals with multiple monitors and desktops just makes sense to me.)


You can get very close to the Xmonad experience with Qtile. It's the most flexible tiling WM in terms of tiling modes, I believe.

Python config and runs under either X11 or Wayland.

https://docs.qtile.org/en/latest/manual/ref/layouts.html#mon...

