Deporting criminals is in no way a new concept. In fact, it used to be commonplace to deport your own citizens, not just foreigners; the modern nation of Australia exists as a result of such a policy! Japan will certainly deport you for drug-related offenses or violent crimes, but like most places, white-collar crime is not treated as "real crime", even though its impact is usually more severe than that of simple shoplifting or robbery.
Incidentally, if Germany had deported a foreigner who led an attempted coup d'etat, perhaps it would have saved tens of millions of lives. The things people get away with a slap on the wrist for...
It was literally branded Mt. Gox. In the logo and everything. Also, he had already shuttered the MTG project and simply re-used the dormant mtgox domain.
Interesting article which reinforces my decision to never engage with web development in any manner other than throwing WASM paint on a Canvas.
> But a purpose-built game framework like Unity would have polyfill to protect you from more of these layout and audio problems
Unity doesn't polyfill; it just relies on WASM for everything, which results in significantly more consistent behaviour, provided your browser supports WASM to begin with.
> Unity and Godot might be better choices, but I have no experience with them and I assume they only make sense for games.
Unity has been used for non-game purposes successfully, and there are also other WASM-compatible frameworks specifically targeting non-game GUI use cases.
Unity in the browser fails basic interactions like copy and paste. All WASM-paint-on-a-Canvas apps in the browser have similar issues with non-English input, accessibility, and integration with things like dictionaries, password managers, etc...
To be clear, I am not suggesting the WASM canvas approach for ordinary web pages. WASM is for things that are unto themselves full-fledged applications, with the convenience of instant access to running them in a sandbox on any platform. The game described in the article certainly makes more sense with WASM than HTML5, as it uses web APIs for doing something that isn't displaying a standard web page but instead needs to conform to a specific set of characteristics to provide a consistent and polished user experience.
Also, while Unity doesn't, features like canvas copy-pasting can be implemented manually or by another framework. Non-English input works fine with WASM; if it doesn't render, it's because the application developer didn't include a font that supports those characters, since there's no fallback-font kind of thing going on. But this stuff is exactly the same as developing any kind of non-web application; if the framework doesn't provide a feature, you have to provide it yourself. It needs to be approached from an application development perspective rather than a web development perspective; you don't get the freebies of web, both the good and the bad, but this gives you much more control and capability.
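For a sense of what "implemented manually" means here: a canvas app can hook the browser's native copy/paste events and bridge them to its own internal state. A minimal sketch, where getSelectedText and insertText are hypothetical stand-ins for whatever selection/input plumbing the app (or its WASM side) actually exposes:

    // Hypothetical stand-ins for the canvas app's real selection/input state.
    let appSelection = "";
    function getSelectedText(): string { return appSelection; }
    function insertText(text: string): void { console.log("pasted into app:", text); }

    // Copy: put the app's own selection on the system clipboard.
    document.addEventListener("copy", (e: ClipboardEvent) => {
      const selected = getSelectedText();
      if (selected) {
        e.clipboardData?.setData("text/plain", selected);
        e.preventDefault(); // suppress the browser's default (empty) copy
      }
    });

    // Paste: forward clipboard text into the app's input handling.
    document.addEventListener("paste", (e: ClipboardEvent) => {
      const text = e.clipboardData?.getData("text/plain");
      if (text) {
        insertText(text);
        e.preventDefault();
      }
    });

The events fire even when the page is just a canvas; the app only has to define what selection and insertion mean internally.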
The only reason to use the web is that it reaches everywhere. If you then hobble it by making it not work in two-thirds of the world, you might as well have just shipped a native app.
Also, I think you're underestimating the amount of work required. No one wants your custom solutions; they want the OS solution. They want the OS IME they use in every app, not the one you built from scratch for your "blat pixels to the page" app. They want their passkeys from their OS, which are only available via web features, not from "blat pixels to the page" apps. They want their autocorrect to use all the words in their local OS dictionary, but that isn't available to "blat pixels to the page" apps either. I could list several more issues like this.
You appear to be misinformed. OS-level features like IMEs and dictionaries work in WASM as they do with any native application. There is nothing special about HTML elements that enables them in a way that doesn't work with WASM; they are, after all, OS features, not HTML features, and the browser is just another native application. If somebody chooses not to integrate OS features in their application, that's a reason not to use their application, but it really has nothing to do with WASM in any way whatsoever.
> The only reason to use the web is that it reaches everywhere. If you then hobble it by making it not work in two-thirds of the world, you might as well have just shipped a native app.
I also think this is a ridiculously reductionist take. I do consider language support a priority myself (and incidentally happen to believe that most Linux distros have long kneecapped the spread of Linux by not being accessible to non-Latin users; conversely, I believe WASM does not hobble language support, otherwise I would not be interested in it), but that is not even close to "the only reason" to use the web. There are plenty of people in the world who host websites in only their native language. Even if you don't care to support foreign languages, deploying to the web gives you (a) instant cross-platform support with comparatively low effort and (b) zero friction for users, who can instantly use your application without an installation process, which itself entails security risks. Would the use of WASM for a niche English text adventure put off Chinese users who might otherwise have played the game if only it were written in HTML5? Probably not; they were never going to play it in the first place.
It has been a very long time (>15 years) since I tried GIMP, so I don't remember everything I found wanting, but as I recall, GIMP lacked both macros and batch editing: the former lets you record a set of actions to a hotkey so you don't have to repeat them by hand all the time, and the latter lets you apply a set of actions to hundreds or thousands of images at once (the sort of thing sketched below). I would literally have to spend hundreds of hours to do things in GIMP that can be done with no effort in Photoshop, to the point where, if Photoshop didn't exist, it would actually be easier to program something myself from scratch than to use GIMP.
I see that GIMP has since gotten a UI revamp, but the multiple-window UI from the time I used it was also unbearably bad and is one of the main things that sticks out in my memory.
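To make the batch-editing point concrete, here is roughly what "program something myself from scratch" looks like with a Node image library such as sharp. This is just a sketch; the folder names and the resize/sharpen pipeline are placeholder "actions":

    import { mkdir, readdir } from "node:fs/promises";
    import path from "node:path";
    import sharp from "sharp";

    const inDir = "./photos";      // hypothetical input folder
    const outDir = "./processed";  // hypothetical output folder
    await mkdir(outDir, { recursive: true });

    // Apply the same set of "actions" to every image in the folder.
    for (const name of await readdir(inDir)) {
      if (!/\.(jpe?g|png)$/i.test(name)) continue;
      await sharp(path.join(inDir, name))
        .resize({ width: 1920 })   // action 1: downscale
        .sharpen()                 // action 2: sharpen
        .toFile(path.join(outDir, name));
    }

The point of batch editing in an image editor is that a non-programmer gets this for free by recording the actions once and pointing them at a folder.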
Have you looked into Script-Fu? It would probably be a very steep learning curve... but there is an opportunity to do something impossible 10 years ago: use AI and an external application. BATCH-FU is one such attempt, but it seems to be a "select an action from a menu" thing.
As for the GIMP developers: implementing batch in one go is a big ask, I know, but a great first step might be to create a channel in GIMP where correct Script-Fu is emitted for operations in progress. Being able to connect to that from outside would allow third-party projects to assemble "record by doing" macros that could be turned into a Photoshop-like batch capability.
Macros are on the roadmap (https://developer.gimp.org/core/roadmap/#macros-script-recor...), and in fact we did a lot of prepwork for them during 3.0's development (internally, several features like filters and plug-ins now have configs that store settings, which will be used by macros in the future to repeat operations).
Having everybody connected leads to homogenization of culture in some ways.
The internet may hypothetically homogenize culture relative to a society that has no mass communication at all, but relative to the world it was actually introduced into, the internet has completely balkanised culture. Prior to the internet, we had television, cinema, literature, radio, and newspapers, which were all centralised and controlled enough that they created a shared monoculture within nations. A significant portion of a country's population would watch, read, and listen to the same media. The internet bucked that trend, allowing all kinds of new subcultures to pop up and to cross national boundaries more easily.
Yeah, back in the day, the morning after a show that everyone watched aired a new episode in the prime-time slot on the primary TV channel, you'd go to school and discuss what happened in that episode, or pick up some references or new jokes. It created a common culture.
If your mission is serious and you're seriously asking people to donate €80,000, surely you could be bothered to take it seriously enough to write the appeal for funding yourself rather than outsourcing it to an LLM?
> This isn't about charity — it's about building kinship-based infrastructure.
A few people have called us on this, but it's simply not true. The copy was all written by hand; the only machine assistance here is XCompose (you, too, can type an em-dash!).
It turns out that when everybody involved is some level of techie and you're targeting a sort of "corporate bland" tone, so that you look professional and trustworthy by modeling your communications on other successful Kickstarters, it comes out looking similar to other marketing copy, which is precisely what all of the LLMs were trained on.
The sad thing is, this is one of the rhetorical flourishes that gets drilled into you in debate and comms education. ChatGPT picked it up because it's extremely common.
Danbooru[1] and Danbooru-derived image boards handle this perfectly, and they are a genuine pleasure to browse relative to the awful experience that is Pinterest. There is empty space between images, and that is fine. You don't need to occupy every pixel on the screen to begin with; that's why we have these magical things called "margins". Elements need room to breathe.
[1] https://safebooru.donmai.us/ (note: this is a "safe" subset of Danbooru, linked for reference, but it is still not safe for work)
How is that better? It's still a more or less rectangular grid of images. I'm thinking of a dynamic grid with a mix of sizes of horizontal and vertical images.
The point being raised is that dynamic image grids don't actually make for a good UX. They might look more visually interesting at a superficial glance, but when you're actually using the interface to browse images, predictability wins out. Even having mixed-orientation images, where there is some degree of extra whitespace between images, does not change this. It is way easier to digest the content when your eyes can reliably scan one line at a time without having to bounce around everywhere to track the flow of the dynamic grid.
What is it with commenters in this thread and wanting to "reliably scan one line at a time"? When users use image galleries, they generally do jump around, because they're looking at all the options on screen at once. The eyes absorb everything, and then they pinpoint what looks good. I've never seen or heard of anyone going line by line in an image gallery or a newspaper layout, and I'd find doing so highly abnormal for average users.
I suspect that if data from eye-tracking tests were available, there would be an extremely clear revealed preference from users. I read image galleries the exact same way I skim text: in an ordered fashion that allows me to "read" every image without reading an image twice, stopping if my attention is caught by something in particular. Splattering garbage over the screen haphazardly makes it blend together annoyingly and results in my eyes traversing the same areas multiple times, both to pick out details and to keep my place in what I have and haven't skimmed yet. It is a layout that itself demands my attention, rather than letting my attention be absorbed naturally by the actual images.
From actual eye-tracking data via Hotjar and similar tools, people do skip around the page. Those who scan linearly are in the minority, but are probably more highly represented on HN, just as a matter of course.
You mentioned a used model that is over 5 years old as an example of "a new computer", and "1k" as "not expensive for consumers". It is honestly impressive how well you undermined your own point.
> If enough consumers aren't able to use the website, then business wouldn't use it.
I sincerely doubt any business owner would approve of losing even 10% of their potential users/customers if they knew that was the trade-off for their web developer choosing to use this feature. But there are disconnects in communication about these kinds of things -- that is, if the web developer even knows about the compatibility issues themselves. You would expect that from any competent web developer, but there are a whole lot of incompetent web developers in the wild who won't even think about things like this.
Most web devs get screamed at (by their peer reviewers or [preferably] static analysis tools) if they use a feature that has less than ~98% support without graceful degradation, and rightfully so.
But your GP is in a massive minority; if every developer catered to 11-year-old browsers, we would be wasting a lot of developer time on inferior designs, with more hacks that break the web for even more users.
I don't know about "most". For various reasons, I use a 2-year-old browser on a daily basis (alongside an up-to-date browser), and I routinely run into websites that are completely broken on it. Unrelated to outdatedness, I recently ran into a local government website that e-mailed me my password in plaintext upon account creation. I have no way of accurately quantifying whether "most" web developers fall into the competent or incompetent bucket, but whichever there are more of, there is a significant number of incompetent ones.
I think a very common browserslist target is "last 2 versions, not dead, > 0.2%". So if you have a 2-year-old browser, you are probably dozens of versions behind and very likely in the 2% of users that developers simply ignore.
Going back 2 versions, only ~50% of Chrome users are on v140 or newer. Going back another 2 versions increases that to around 66%, and another 2 versions after that only gets you to 68%, with no huge gains from each further 2-version jump. That you think your target gives you 98% coverage is concerning for the state of web development, to say the least.
After checking further, almost 20% of Chrome users are on a 2+ year old version. If you handle that gracefully by polyfilling etc., fine. If you "simply ignore" and shut out 20% of users (or 50% of users, per your own stated support target), as I have encountered in the wild countless times, you are actively detrimental to your business and would probably be fired if the people in charge of your salary knew what you were doing, especially since these new browser features are very rarely mission-critical.
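For what it's worth, handling it gracefully is usually cheap: feature-detect, then fall back. A minimal sketch using structuredClone (a relatively recent browser addition) with a cruder fallback for older browsers; the trade-off is noted in the comment:

    // Prefer the modern API when present; degrade instead of crashing on
    // browsers that predate it.
    function deepCopy<T>(value: T): T {
      if (typeof structuredClone === "function") {
        return structuredClone(value);
      }
      // Fallback: JSON round-trip (loses Dates, Maps, functions, etc.)
      return JSON.parse(JSON.stringify(value));
    }

Wrap the handful of newer calls you actually use this way, and that 20% never gets shut out.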
Note that the commas in browserslist queries are OR: if any given browser version still has > 0.2% usage, it is included. That covers Chrome 109, which is three years old, meaning developers with this browserslist target would fail their static analysis / peer review (in fact, even a more reasonable > 0.5% still fails on Chrome 109) if they used a feature Chrome 109 doesn't support without graceful degradation or a polyfill. The sketch at the end of this comment makes the OR semantics concrete.
Furthermore, the "baseline widely available" target (which IMO is a much better target and will probably become the recommendation pretty soon) includes versions of the popular browsers going back 30 months, meaning a competent team of web devs with a qualified QA process should not deliver software that won't work on your 2-year-old browser.
I can't speak for the developers of the websites that break on your 2-year-old browser... Maybe they don't have a good QA process. Or maybe you were visiting somebody's hobby project (personally, I only target "baseline newly available" in my own hobby projects, as I am coding mostly for my own amusement). But I think it is a reasonable assumption that users tend to update their browsers every 30 months, and you won't lose too many customers if you occasionally break things for the users who don't.
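To make the OR semantics concrete, you can ask the browserslist package itself what a query matches. A minimal sketch (the exact version list and coverage number depend on the caniuse-lite usage data installed with the package):

    import browserslist from "browserslist";

    // Commas in a query are OR: a browser version is included if ANY clause
    // matches it, so "> 0.2%" pulls old-but-still-popular versions back in
    // even though they fail "last 2 versions".
    const browsers = browserslist("last 2 versions, not dead, > 0.2%");
    console.log(browsers.filter((b) => b.startsWith("chrome")));

    // Estimated share of global users covered by the resulting list.
    console.log(browserslist.coverage(browsers).toFixed(1) + "% coverage");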
I was specifically referencing desktop Chrome, not including Chrome for Android, but other than that, if there are discrepancies, I'm not sure what the cause is.
The Timo Tijhof data is based on Wikipedia visits, and shouldn't be affected by adblockers.
Meanwhile, StatCounter is based on sites that use its analytics, and on users not using adblockers that might block it. The CanIUse table makes clear there's a long tail of outdated Chrome versions that each individually have tiny usage, but they seem to add up.
It's fascinating that they're so wildly different. I'm inclined to think Wikipedia, being the #9 site on the web [1], produces a more accurate distribution of users overall. I can't help but wonder whether StatCounter is used by a ton of relatively low-traffic sites, and the long tail of outdated Chrome is actually headless Chrome crawlers, which would make up a large proportion relative to actual user traffic since they're not pushed to update the way consumers are, especially with ad-blocking real users excluded too.
Anecdotally, in web development I just haven't seen users complain about sites not working in Chrome where the culprit turns out to be outdated Chrome. That's in contrast to complaints about e.g. sites not working in Firefox, which happen all the time, or cases where a site breaks in Chrome but it turns out an extension is interfering.
As someone on the browsing end, I love Anubis. I've only seen it a couple of times, but it sparks joy. It's rather refreshing compared to Cloudflare, which usually makes me immediately close the page and not bother with whatever content was behind it.
Same here, really. That's why I started using it. I'd seen it pop up for a moment on a few sites I'd visited, and it was so quirky and completely not disruptive that I didn't mind routing my legit users through it.
Quite possibly. Or, in my case, I think it's more quirky and fun than weird. It's non-zero amounts of weird, sure, but far below my threshold of troublesome. I probably wouldn't put my business behind it. I'm A-OK with using it on personal and hobby projects.
Frankly, anyone so delicate that they freak out at the utterly anodyne imagery is someone I don't want to deal with in my personal time. I can only abide so much pearl clutching when I'm not getting paid for it.
The Digital Research Alliance of Canada (the main organization unifying and handling all the main HPC compute clusters in Canada) now uses Anubis for their wiki. Granted this is not a business, but still!
It's a feature of the paid version, or I guess you could recompile it if you didn't want to pay (but my guess is that if you want to change the logo, you can probably pay).
As someone on the hosting end: Anubis has unfortunately been overused, and scrapers, especially Huawei's, now bypass it. I've gone with go-away instead, which is similar but more configurable in its challenges.
My experience with it is that it somehow took 20 seconds to load (the site might've been hn-hugged at the time), only to "protect" some fucking static page instead of just serving that shit in the first place, rather than wasting CPU on... whatever it was doing to cause the delay.