> Funnily enough, everything ran at about the same speed as it does now.
Actually, where I was sitting on a decent PC with broadband Internet at the time, everything was much, much faster. I remember seeing a video on here where someone actually booted up a computer from the 2000's and showed how snappy everything was, including Visual Studio, but when I search YouTube for it, it ignores most of my keywords and returns a bunch of "how to speed up your computer" spam. And I can't find it in my bookmarks. Oh well.
Wow that was a great read, thank you. It's funny that it is already starting to break due to all of the links and ad tracking, which is another kind of rot.
Use Linux/KDE. None of the gains from the switch to SSDs have been lost. Everything is instant even on an n100. You only need something more powerful for compilation, gaming, or heavy multimedia (like the 200 Mbps 4k60 video my camera produces, which isn't accelerated by most processors because it's using 4:2:2 chroma subsampling).
Xfce or LXQt are also great alternatives, blazing fast even on 15-year-old hardware. Old hardware can be slow for basic web browsing and multimedia (e.g. watching videos with modern codecs), but lighter-weight uses are absolutely fine.
> "I remember seeing a video on here where someone actually booted up a computer from the 2000's and showed how snappy everything was, including Visual Studio"
Was it this one? Casey Muratori ranting about Visual Studio Debugger slowness, and he shows a video of Visual Studio opening and debugging faster on a single core Pentium 4 from 20 years earlier - https://youtu.be/GC-0tCy4P1U?t=2160
Or this one? Roger Clark developing Notepad in C++ on Windows 2000 and commenting how fast Visual Studio opens: https://youtu.be/Z2b-a4r7hro?t=491
Yeah, that's the one! And hmm, it's interesting that there's more than one video out there going "look how fast old computers were from a user standpoint". They've really been boiling the frog when it comes to terrible UX over the past 20 years.
Given the refinements to the hardware, the modern scale of manufacturing and accessible market, and the sheer amount of engineering manpower a tech company can bring to bear nowadays, you'd think standards would have risen into the stratosphere, but instead the tech consumer is cowed into accepting slow, buggy, abusive, invasive trash.
These days to get a snappy experience, one has to aggressively block everything and only selectively unblock the bare minimum, so that one doesn't get tons of bloat thrown in the direction of one's browser. Oh and forget about running JavaScript, because it _will_ be abused by websites. And then sites have the audacity to claim one is a bot.
Many websites are so shitty that they can't even display static text without making you download tons of their JS BS.
I can't recall desktop application times, but I remember using the web in the 00s. Websites took a noticeable blip to load. Pictures took seconds to minutes to load, top to bottom, on my connection. Runescape took an hour to update on dial-up.
I do remember the UI of applications like Microsoft Word constantly freezing, though.
You can view shorts on desktop. In a browser on desktop, when you see one on your subscriptions tab (or the front page if you still use that), you can right-click and open in new tab. You can replace "shorts/" with "watch?v=" in the URL and load the same video in the regular UI. You don't have to scroll down.
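The URL rewrite described above is purely mechanical; a minimal Python sketch (the video ID below is just a well-known example, not from this thread):

```python
# Rewrite a YouTube Shorts URL into the regular watch-page form,
# as described above: the "/shorts/ID" path becomes "/watch?v=ID".
def shorts_to_watch(url: str) -> str:
    return url.replace("/shorts/", "/watch?v=", 1)

print(shorts_to_watch("https://www.youtube.com/shorts/dQw4w9WgXcQ"))
# → https://www.youtube.com/watch?v=dQw4w9WgXcQ
```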
Ah, I was wondering about that. Flagging didn't seem like quite the right thing to do, but at the same time I don't see a reason to leave bots hanging around.
Then you want something that directly copies Windows. Gnome doesn't do that; it very much does its own thing, which confuses regular people who typically just want an unenshittified Windows.
I don't agree that you have to copy Windows. I agree it's nice to copy it to lure people who have touched no other system and know only Windows. But really, a simple, usable interface is enough, and people will learn it no problem; there isn't much to learn, after all. Gnome does that. It's also very good as a simple macOS replacement, even simpler than macOS itself. When I install Gnome, I advertise it as 'very similar to macOS' (good) 'but without the need to buy their expensive hardware' (also good), and sometimes I add that it's simpler too (also good). Yes, there are situations where I'd like a visually 1-to-1 copy, for when I want people not to notice they aren't using Windows. But having something with just a simple interface is more valuable. For a Windows copycat I use KDE, as it's reasonably similar. I know there are 'more Windows-like shells', but I just don't like them personally. Other systems change a lot anyway, including Windows itself, so that's not an issue in my opinion.
> what's so wrong with X11 so people need to replace it
1. Security - Any program using X11 can read keystrokes, passwords, or the contents of any other window. Fixing this would break all existing X11 applications.
2. Performance - X11's client-server model doesn't work with hardware-accelerated graphics, requiring hacks to work around it. X11 is basically stuck with this legacy.
The ground-up re-design of X11 to fix those two issues is Wayland.
I 100% agree with the performance improvement goals, but I think the security claims are overblown and overly cautious. I honestly don't understand the point of implementing the security boundary in the display server. It solves one class of security issues while breaking a lot of accessibility and automation. The display server just shouldn't be enforcing rigid per-process security controls; that's better done further down the stack. Or at a minimum, security controls should respect user freedom enough to let a user access normally restricted features without an all-or-nothing elevation to root. There's a middle ground here where we don't break the world and they still get their shiny security policies.
I think the fact that people e.g. run the ydotool service as root is an example of this. It's like making a safe so hard to open that people just drill a hole in the bottom; you end up with something less secure than a safe that was easier to open.
Where in the stack should it be enforced that my cute desktop clock doesn't pull a Copilot and take a screenshot of the entire desktop every 15 seconds to send to a remote service?
With a defense-in-depth approach, obviously. Run less, use vetted sources, and when running suspect software, execute it in a properly sandboxed context. Seriously, what's the point of securing against screenshots and keyloggers if a malicious process has full access to the user's home directory, audio stack, webcam, and network?
If you can't trust the process don't run it. If you have to run it, isolate all of it.
Wayland gives you neither the freedom to safely tailor your security policy, nor the security guarantees to warrant its inflexibility.
If your system is already running malware, why wouldn't the malware use a privilege escalation exploit (which are relatively numerous on Linux) to access your data, rather than some X11 flaw that depends on the user starting its code?
Because it's not an X11 "flaw" or exploit; it's just how X works. I also just don't buy the whole "well, other stuff has exploits too" mentality.
I mean, yeah, it does, maybe. So why bother creating a password for a service if their database is probably running on Linux anyway and the RDBMS is probably compromised, and yadda yadda yadda. It's the kind of argument you can make for anything.
Also, no: privilege escalation exploits are not "numerous" on Linux. They're very difficult to pull off in practice. It's only really a problem on systems built on old kernels that refuse to update, but those will always be insecure, just like running Windows 7 will be insecure.
The fact that desktop Linux is all-or-nothing in terms of privilege escalation is a design issue; however, Wayland arguably gives us the tools to be more granular. Android has a permission system that makes sense, and its display stack is definitely closer in design to Wayland than to X11.
Android's sandboxing is doing work for the benefit of the user. This is similar to how your web browser sandboxes JavaScript. Not every app needs access to my location, and granting that access shouldn't require root. The Linux ecosystem understands this, and it's why there is a large push for sandboxing models in software such as flatpak. Even if you disagree with Android at some level, it's hard to deny that users benefit from being able to control what the software they run is capable of doing. Otherwise we wouldn't have filesystem permissions to begin with, in the name of "freedom".
But then it's a question of how trustworthy an app is. Wouldn't it be better for software installed from your own distro's repository to be fully trusted and require few or no security popups? After all, it's vetted to a much, much higher standard than any app store. Meanwhile, flatpak apps and random binaries you've downloaded get the full security isolation, because you can't trust third-party devs.
That's not a scalable solution as not every piece of software can pay the packaging cost for every Linux distro. Maybe it's fine for core system software, but it's too difficult to expect that model to work for all software. Imagine if every website you interacted with needed to ship new website updates by packaging it and getting it vetted.
I think you still need a centralized distribution model even for things like flatpak to ensure some level of centralized auditing and revocation for software that has access to sensitive capabilities. However this doesn't necessarily need to be as large of a barrier for shipping updates as trying to package your software for a distro (and playing the game of trying to get your shared library versions aligned).
So we get neither the security benefits nor the accessibility. I'm not sure what is being solved. I'm all for a modern display system; I'm just not convinced the security claims are in any way justified.
> 1. Security - Any program using X11 can read keystrokes, passwords, or the contents of any other window. Fixing this would break all existing X11 applications.
This is a feature not an issue. And it's how every computer that hasn't tried to take away control from its users has worked. I WANT my programs to be empowered to act on my behalf. If you want a gimped platform to run untrusted apps go buy a phone.
> 2. Performance - X11's client-server model doesn't work with hardware accelerated graphics, requiring hacks to get around. X11 is basically stuck with this legacy.
It works very well, in fact. What you call hacks is something you probably cannot comprehend: actual long-term backwards compatibility. All programs should strive for more of that, especially central ones like the display server.
> The ground-up re-design of X11 to fix those two issues is Wayland.
It does limit what programs can do and breaks backwards compatibility, yes. That's the problem.
While Wayland has kind of been a disaster I disagree with your premise.
> 1. Security
Linux (and Unix before it) has always had security mechanisms built in: file permissions, setuid bits, namespaces. Any old program shouldn't be able to read /etc/shadow, and likewise the `<input type="password">` in my browser should be protected. Wayland's problem isn't that it tried to add security, but that the design and development process went horrifically wrong, so that 17 years after the first release, people still have trouble with screen sharing.
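The Unix mechanisms mentioned above are easy to poke at directly; here's a small Python sketch using a throwaway temp file (not a real system file) to show the owner-only permission bits that keep files like /etc/shadow unreadable to other users:

```python
import os, stat, tempfile

# Create a throwaway file and restrict it to owner read/write only,
# the same rwx-bit mechanism used on sensitive system files.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                    # → 0o600
print(bool(mode & stat.S_IROTH))    # world-readable? → False
os.unlink(path)
```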
> 2. Performance
The client-server model is obsolete and unsupported by modern applications (and nowadays there are easier ways to do remote GUI access) and keeping it was just a big pile of tech debt.
The problem wasn't that Wayland tried to fix things, it's that the process took 17 years and still isn't finished or particularly successful. My uneducated guess at why Wayland failed to succeed is that it went for extreme modularity and refused to say (back in like 2009) "Here's the security mechanism everyone has to use to take screenshots, etc. If this breaks your spacebar heater, sucks to be you." Rather than just define a single API where 99% of graphical applications and desktop utilities can do the sorts of things they already do on Windows and MacOS, and call it 1.0, instead they built a sprawling monument to bikeshedding and over-engineering.
> My uneducated guess at why Wayland failed to succeed is that it went for extreme modularity and refused to say (back in like 2009) "Here's the security mechanism everyone has to use to take screenshots, etc. If this breaks your spacebar heater, sucks to be you."
I tend to agree. I have long attributed the mess that is wayland-protocols in large part to the fact that they didn’t define a security mechanism or permission model in place from the start.
They seemed to assume, at first, that it was reasonable to prevent all programs from doing what any program could abuse. Had they instead acknowledged that some programs need to be granted the ability to take actions that otherwise risk insecurity, they wouldn’t have needed to try to distort the protocols to fit the lacking security model of Wayland (or, in some cases, wouldn’t have needed to circumvent Wayland entirely to achieve their ends).
I generously assume they simply considered a mechanism for granting permissions to be out of scope for the original spec, which would be a merely horrific error and disastrous design flaw. If they were instead completely ignorant of the existence of screen recording, password managers, screen readers, and so on... inconceivable idiocy. Either way, as long as something like Wayland can happen, Windows has nothing to fear from Linux.
On the other hand, #1 makes it extremely difficult (if not outright impossible) to do decent UI automation on Wayland. Sure, you can still manage if you never leave the terminal or a web browser, but anything else (including Electron apps) is a no-go. All the existing tools are written for X11.
The last time I looked into it, I found out I would have to deal with each compositor separately. On top of that, the target apps would have to be written with the new API in mind.
> 1. Security - Any program using X11 can read keystrokes ...
IIRC, this isn't quite correct: there was an extension called XACE that could block this. Xorg probably didn't implement it, and desktops didn't have support for it, though.
> X11's client-server model doesn't work with hardware accelerated graphics
I really wish they would figure this out. It just feels like with Threadripper etc the time is perfect for a return of thin-clients/not-so-dummy terminals running X11-like applications over the network on a server. Especially for development where many of us are running under-powered laptops and could use the boost to compilation from a beefy machine.
There is nothing to figure out, unfortunately. The design of X11 precludes hardware acceleration; the hardware acceleration you see on an X11 desktop works by using extensions to route around the X11 client-server model entirely. To make it work, they'd need to rethink the design. And they did: that's how we got Wayland.
If you want thin client like behavior, you should look at stuff like Waypipe or just outright using RDP directly, both of which are much better at the job of "display remote graphical application on my computer" than X11's client-server design ever managed to accomplish.
If anything, the variety of alternative solutions for that today -- everything from single-app to high-res full-desktop game streaming -- are much more robust and viable on modern networks than the X approach ever was, even if it was a neat-o "freebie" thing that fell out of its design. You get what you pay for, I guess.
It's not really something to figure out; the X protocol was simply designed for a future where thin clients running over the network were assumed to be the norm, and that future never arrived. Unfortunately, unlike on Mac or Windows, the migration to a better protocol has been ugly.
When you start to consider things such as HDR, hardware planes (important if you want energy-efficient video decoding), etc., the protocol just doesn't make that kind of thing easy, compared to Wayland, which does through its use of surfaces.
It has already been figured out long ago, which is how we have hardware acceleration on X today. Wayland fanboys calling it a "workaround" does not change that it works.
There is the Security extension from 1996, which has a section on keyboard security.
And it's crazy to me that anyone can claim X11 can't be offloaded, which it has been doing for decades: from all the crazy blit/pattern hardware acceleration, to GL/Vulkan implementations, to the fact that the entire server can sit on the other side of a network pipe, meaning it could be anywhere, including entirely encapsulated on a graphics card/smart NIC/etc.
And if you're talking about the Xlib serialization, that was largely fixed with XCB.
I think of anarchy as a theoretical end state, where power is perfectly distributed among each individual, but that this is less of an actually achievable condition and more of a direction to head in (and away from monarchy, where power is completely centralized).
Another nice thing about the Pi is that you know for sure there won't be any major Linux issues; the official distro is tested for that hardware and that hardware alone. I'm assuming the same will apply for Linux on the Steam Machine, whereas most of the time when I install Linux on a random PC, I have to debug some issue with audio/networking/video (which is less common these days, but I guess I'm unlucky).
Speaking of which, I recently bought a Ryzen Framework laptop assuming the recommended Linux distro would run smoothly, but unfortunately I hit a few glitches, including a really annoying amdgpu bug that keeps making the screen flicker. I might have to mess with kernel boot parameters. Disappointing.
Since the Steam Machine is meant as a consumer product, hopefully it will run Linux solidly, and that's a big plus for me. I wouldn't touch Windows with a 10 foot pole these days.
I have a Framework Ryzen AI 300 series. Had the screen flickering after a kernel update several weeks ago. Fix was to add "amdgpu.dcdebugmask=0x2" to the grub kernel cmdline. Running Fedora 43, fully up to date as of yesterday. I sadly can't find the official forum thread about it. Hope it helps though.
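For anyone hitting the same flicker, the workaround above amounts to a one-line GRUB edit. A sketch, demonstrated on a throwaway copy so you can inspect the change before touching the real file (paths and the `grub2-mkconfig` invocation assume Fedora; adjust for your distro):

```shell
# Append amdgpu.dcdebugmask=0x2 to the kernel command line.
# Demonstrated on a demo file; apply the same sed to /etc/default/grub.
printf 'GRUB_CMDLINE_LINUX="rhgb quiet"\n' > /tmp/grub.demo
sed -i 's/^\(GRUB_CMDLINE_LINUX="[^"]*\)"/\1 amdgpu.dcdebugmask=0x2"/' /tmp/grub.demo
cat /tmp/grub.demo
# → GRUB_CMDLINE_LINUX="rhgb quiet amdgpu.dcdebugmask=0x2"
# Then regenerate the real config:
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora
```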
Doesn't Linux make similar demands of the compiler, just not for bitfields? And I seem to recall Linus having some choice words for the C Standard's tendency over the years to expand the domain of undefined behavior. I don't think the Linux devs have much patience for C thinking it can weasel out of doing what it's told due to some small print in the Standard.