Hacker News | temac's comments

Is this an admission that you're willing to implement complete garbage?


Oh dear God in heaven and Ada Lovelace forgive me for the horrors I have wrought.


If I don’t, there are 100 other people who would do it


Tell your product owners that they should actually use the product they’re owning. And not just use it, but be a power user of that tool. Not a professional user, not a casual user; use the tool at least six hours a day.

I use YouTube 6+ hours a day and I have for probably 10 years, and I don’t even work there. (I have a few annoying personality limitations which make it so that I usually work better with YouTube on in the background, and NOT on autoplay, autoplay always chooses something I don’t want to see/hear; I know that because I use the tool a lot.)

I can tell you that it has steadily and continually gotten worse over those 10 years. "I have to come up with stories or I won't have a job": no, you don't, but even if you did, there are so many things YouTube needs more than enlarged thumbnails with visible compression artifacts.


>Tell your product owners that they should actually use the product they’re owning

I did. Not that anyone listened tho.


What shocked me in the aughts was how bad Lotus Notes was. I was pretty sure that the average IBM executive wasn't using the average version of it.

Using the most commonly used version of the product, on the most commonly used hardware, at least 2 days a week should be a prerequisite for every product owner.


>Using the most commonly used version of the product, on the most commonly used hardware, at least 2 days a week should be a prerequisite for every product owner.

I am a firm believer that the software should also be developed on commonly used hardware.

Your average user isn't going to have a top-of-the-line MacBook Pro, and your program isn't going to be the only thing running on it.

It may run fine on your beefed up monstrosity, and you'll not feel the need to care about performance (worse: you may justify laggy performance with "it runs fine on my machine"). And your users will pay the price for the bloat, which becomes an externality.

Same for websites. Yes, you are going to have a hundred tabs open while working on your web app, but guess what - so will your users.

Performance isn't really the product's domain, in the sense that they would always be happier with things being snappier; they have to rely on the developer's word as to what's reasonable to expect.

And the expectation becomes that the software can and should only run well on whatever hardware the developer has, taking all the resources available, and that any optimization beyond that is costly and unnecessary.

Giving the devs more modest hardware to develop on (limited traffic/cloud compute/CPU time/...) solves this problem preemptively: the developers themselves feel the discomfort of the product being slow, and thus have the motivation to improve performance without the product demanding it.

The product, of course, should also have the same modest hardware — otherwise, they'll deprioritize performance improvements.

----

TL;DR: overpowered dev machines turn bloat into an externality.

Make devs use 5+-year-old commodity hardware again.


"The Microsoft Store" app is such a strong example of what happens when nobody cares about performance. It misses UI events most of the time, regardless of what hardware it's running on. Although, in this case, I don't think a Pentium 166MHz would help. The UI event processing is just fundamentally flawed.

<flame=ON>

Usually, but not always, it ignores scroll events while an animation is playing…and hovering over a tile in the list causes a pointless zoom-in animation (the result of which occludes parts of adjacent tiles). Sometimes, the animation won't start immediately, but will still play. To prevent the cannot-scroll-while-animating problem, the only safe place for the mouse pointer is over the scrollbar.

Clicking the (completely invisible) track of the scrollbar has random multi-second delays.

Most of the search filters are hidden by default…and can't be shown without waiting for a slow animation. You can click the show-filters widget over 30 times if you're in a hurry, and still the animation hasn't even drawn the first frame. That delay before it starts means that even if you try to wait, you might click one extra time, and then see both the show-filters animation and then the hide-filters animation…all while none of the rest of the UI responds. …And then you might realise you want to refine your search terms…which will reset all filters and re-hide the filter options.

Once you find a tile you want to click, be prepared for another two animation delays: one if the tile isn't already zoomed in, and another while the app mysteriously animates a slew of placeholders instead of just dumping the item's information directly into view. It's slow like a 33.6 modem on a noisy phone line, but now you finally have details about the item you clicked on maybe 7 to 40 seconds ago.

Now maybe you click a screenshot to enlarge, and decide it wasn't the app for you. You hit your mouse's 'back' button or click the app's strangely tiny (given how freaking huge most of the UI is) back button. Nothing happens. You try again, potentially numerous times…because the app ignores those inputs while a screenshot is enlarged. The app's so unresponsive, it at first doesn't occur to you that no amount of waiting or retrying will help. No, you have to click the little close widget on the opposite side of the window, or 'back' will never mean 'back' again.

You try to go back to your search results. The app eventually responds, but has decided to discard that data for some reason and has to play more placeholder animations while reloading it and rediscovering your scroll position.

Then you go into another search result and decide the sidebar of other apps people viewed has some interesting items. These don't have animations on the tiles or any details, so you have to click each one of interest, waiting for more placeholders while imagining modem noises and being outpaced by a Colorado glacier that's crossing the road. And when you page back, the item you just came from does /more/ animations while reloading everything via IP Over Avian Carrier With Quality Of Service.

But when burrowing through the people-also-viewed sidebars, don't go too many layers deep, or when you return to your search results, it will have forgotten your scroll position and turned off your search filters. Ah, time for more UI-blocking animations.

But that's okay, right? Nobody ever made an app that responds in milliseconds to every user input, right? And we all know that doing long, blocking operations on the UI thread is right and holy, right? Even routine single-threaded apps never need to yield to other code blocks or process interrupts, …right?

<flame=OFF>

<meta-flame>

Yes, I have reported this to MS via Feedback Assistant. A few times. No, I don't know why they don't appear to have done anything about this unshippable pile of random bits that somehow slopped out of the Bit Bucket.

"Rectify?" No, the only answer is “Games."

</meta-flame>


I have never experienced any of this. It's not a great app, but I've never had any problems like you're describing. Or… somehow I don't remember them, but that seems unlikely; I'm always willing to dogpile on a shitty application, but I have to experience the things first.


Thank you for your service o7

May your screams into the void be heard by the stakeholders, and not just people.


Those were literally the words of Japanese kamikazes in WW2: if not me, then someone else will die anyway.


It would be funny to compare suicide bombing to a dev implementing features their team is working on (even if those features don't sound good to that particular dev), if it weren't so sad and offensive.


Also those of German Nazis doing their thing. Terrible excuse.


I'm sorry, but this cop-out really pisses me off. It is far too common and frankly, unacceptable. It really is insulting that you'd expect others to accept this as a justification. It's a lazy dismissal and not even a proper excuse.

Your excuse for doing something shitty is... that someone else will? What does another person even have to do with it?! Seriously, let them have the blood on their hands. You can't even assume that someone else will! If you do it, you guarantee that it happens. Even if it is likely that someone else would, there's a big difference between a likelihood and a certainty. This is literally what creates enshittification.

Plus, the logic is pretty slippery. Certainly you're not going to commit crimes or acts of genocide! You were "just following orders"[0], right? Or parents often say to their children "if everyone jumped off a cliff, would you?" Certainly the line is drawn somewhere, but frankly, it is the same "excuse" given when that extreme shit happened, so no, I won't accept it.

You have autonomy[1], and that makes you accountable. You aren't just some mindless automaton. You may not be the root cause, but at best you enable it. You can't ignore that you play a role.

And consider the dual: if you don't make it better, who will?

I believe you have the power to make change; do you? Maybe not big, but hey, every big thing is composed of many smaller things, right? So the question is which big thing you want to contribute to.

[0] https://en.wikipedia.org/wiki/Superior_orders

[1] https://talyarkoni.org/blog/2018/10/02/no-its-not-the-incent...


Unsure why this was downvoted. Too many tech folks just shrug and go with the flow even if it's a bad flow _and they know it_.


I suspect for the same reason comments like the one I responded to get made: liking my comment means accepting that you are a willing participant in creating shit/harm.

But I still stand by it: you aren't a mindless automaton, and your actions matter.


I get paid a lot of money for it, so yes.


RT threads can be preempted by higher-priority RT threads, and IIRC some kernel threads run at the highest priority. Plus you can be preempted by an SMI, a hypervisor, etc.
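To make that concrete, here is a minimal sketch (Linux-only, needs root or CAP_SYS_NICE; the priority value 50 is just an illustrative choice) of asking for an RT scheduling class from Python. Even then, anything at priority 51-99, the kernel's own highest-priority threads, an SMI, or a hypervisor can still preempt you:

    import os

    # Sketch only: assumes Linux and sufficient privileges (CAP_SYS_NICE/root).
    # SCHED_FIFO priorities range from 1 to 99; 50 is an arbitrary example.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))  # 0 = this process

    # Even as an RT task, higher-priority SCHED_FIFO/SCHED_RR tasks (including
    # some kernel threads), SMIs, and a hypervisor can still preempt us.
    print("running SCHED_FIFO at priority",
          os.sched_getparam(0).sched_priority)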


I don't know if the Python in Excel architecture has changed, but last time I saw it, it was insane and unusable for me (data is sent to MS servers, where a Linux container executes the Python: you need both a subscription and for the data in question not to be regulated).


The USA spies on everybody all the time, so by your own definition it isn't really an ally of anybody.


I don't think the US really has first-tier peer allies at all; friendly nations all sit on a spectrum from assets we utilize to cultural vassals that operate as reputationally independent extensions of the American empire.


I'm using OVH, and the notion of it "just working" is all relative. It's tolerable, but certainly a bit buggy, and with far fewer services than AWS and co. It is also cheap, but given the limitations I doubt they can increase the prices much...


The Linux kernel already works perfectly fine with various base page sizes.
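If you just want to see which base page size a given kernel/arch combination is using, a trivial check (Unix-only sketch) is:

    import resource

    # Reports the base page size of the running kernel/arch, e.g. 4096 on most
    # x86-64 systems, 16384 or 65536 on some ARM64 configurations.
    print("base page size:", resource.getpagesize(), "bytes")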


I've been trying to use auditing rules for a usage that seems completely in scope and obvious to prioritize from a security point of view (tracing access to EFS files and/or the keys allowing that access), and my conclusion was that you basically can't: the docs are garbage, the implementation is probably ad hoc with lots of holes, and MS probably hasn't prioritised maintaining this feature in decades (too busy adding ads to the Start menu, I guess).

NT security descriptors are also so complex that they are probably somewhat useless in practice, because they're too hard to use correctly. On top of that, the associated Win32 API is also hard enough to use correctly that I found an important bug in the usage model described on MSDN, meaning that the doc writer did not know how the function actually works (in tons of cases you probably don't hit it, but if you start digging into all the internal and external callers, who knows what you'd find...).

NT was full of good ideas but the execution is often quite poor.


From an NTFS auditing perspective, there's no difference between auditing a non-EFS file and an EFS file. Having done it many times, I know file auditing works just fine; what makes you say it doesn't?


Rust exists and proves that you don't need "optimizations" to optimize; optimizations are actually possible without them. Now, that's kind of irrelevant to most of the article, which focuses on constant versus variable time (not really an "optimization" problem, but already an optimization one), but setting that aside, Rust proves that a language doesn't need to allow nasal demons to get good performance. You just apply the techniques when you actually know they are correct, not when you speculate about the existence of the mythical perfect programmer (a hypothesis that has actually been disproven by studies on the subject).


I specifically addressed the claim that compiler optimizations are worthless. I did not address the other claims in the article.

In particular, however, Rust relies a lot on Undefined Behavior to optimize well. It manages to (mostly) hide it in the surface language, but in the IR it is necessary to perform well.


Probably when writing "hate AI" here, the meaning was hating the often-useless text chatbots. Google Photos face recognition was there before the new hype and is probably not primarily thought of as "AI" by the general public.


People also like privacy, and most of the time "AI" means "your stuff is going to our servers".

I don't need or want that from most applications, especially a photo viewer.


> most of the time "AI" means "your stuff is going to our servers".

It almost always does, but only because it is offered that way; it is not a hard technical requirement. There is nothing that would prevent offering a standalone version running locally for customers whose hardware is powerful enough.


Most everyone I know in real life does not care about that kind of privacy. They happily upload anything to anywhere on the internet. The people who do care tend to fall into one of two categories: the more tech-inclined (programmers), or the same kind of person who doesn't use online banking because they are afraid of losing their money.


Yes, but it is. It’s a well-known problem.

https://en.wikipedia.org/wiki/AI_effect

Again, you’ve gotta remember that originally, playing chess was considered an unfathomable expression of machine thinking. Now it’s a problem trivially solved in CS112 class.

It’s a lot less magic when you see behind the curtain.


> that's one of the (inadequate) methods that the CA/BF permits for verification at issuance

Why inadequate (in the absolute)? This can be automated, and Let's Encrypt allows verification through DNS; moreover, this allows verification for wildcard certificates.

Now, in this particular case maybe they should have gone through HTTP, and even automated it with ACME. But there is nothing inadequate in the absolute about DNS verification. Besides allowing wildcards, it also allows verification when you don't control the web server(s), when you don't even have a web server at all, when the standard ports are occupied by something else, etc.
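To show how mechanical the DNS route is, here is a rough sketch (my reading of RFC 8555's DNS-01 challenge; the token and account-key thumbprint below are made-up values, not from any real CA) of how the TXT record published at _acme-challenge.<domain> is derived:

    import base64
    import hashlib

    def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
        # RFC 8555: key authorization = token "." base64url(JWK thumbprint);
        # the DNS-01 TXT value is base64url(SHA-256(key authorization)), unpadded.
        key_authorization = f"{token}.{account_key_thumbprint}"
        digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
        return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

    # Made-up values, purely for illustration:
    print(dns01_txt_value("some-challenge-token", "some-account-key-thumbprint"))

The CA then queries that record itself, which is why the method composes naturally with wildcards and with hosts that don't run a web server at all.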


> Why inadequate (in the absolute)?

The point of X.509 certificates is that you can't rely on information you get either from the DNS or from the HTTP server. If you could, you wouldn't need the whole mess in the first place.

Sure, the verification helps, because you have to successfully fool both the client and the CA. But if you can fool one, there's a strong chance that you can fool the other. In the end, the CA is still relying on exactly the same information that the client isn't supposed to have to rely on.

The original idea behind X.509 was that verification would be "out of band", but that turned out to be expensive and non-scalable, so the X.509 world, including the CA/BF, resorted to this very weak kind of verification. They try to backstop it with stuff like certificate transparency, but it's just adding epicycles that aren't particularly reassuring.

If everybody used DNSSEC, then DNS-based verification would be OK. But at that point you ought to just distribute key hashes through the DNS and dispense with X.509 entirely. That's actually what should have happened, and probably what would have happened if X.509 hadn't still been such a cash cow at the times when the various standards solidified beyond all chance of improvement. Because of that "cash cow" status, there was a lot of obvious sabotage aimed at entrenching X.509 and fighting any attempt to improve the situation. And now we're stuck with it.
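As a sketch of what "key hashes through the DNS" already looks like where it exists, DANE's TLSA records carry exactly that kind of pin. Assuming the third-party dnspython package (the name _443._tcp.example.org is a placeholder and almost certainly publishes no TLSA record, so expect NoAnswer there):

    import dns.resolver  # third-party "dnspython" package

    # Fetch the TLSA record(s) a DANE-aware client would check for HTTPS on
    # example.org, and print the pinned usage/selector/matching-type/digest.
    for rr in dns.resolver.resolve("_443._tcp.example.org", "TLSA"):
        print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())

Verification is then just comparing that digest against the server's certificate or SPKI hash, with DNSSEC vouching for the integrity of the answer.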


The Chrome team was pretty averse to implementing DANE as an alternative to the web PKI. I don't think I understood why, but their reluctance seemed to be what stopped its momentum (maybe outside of SMTP?).


'agl wrote a blog post about it. There were two big problems, one in principle and one practical.

The practical: you can't reliably run DNSSEC everywhere Chrome runs. Networks get really fucky with any even slightly unusual DNS messages.

The principle: because you can't realistically ever declare a "flag day" and deprecate the X.509 WebPKI, you have to support both systems, so DANE doesn't collapse your trust anchors down to a smaller set; it actually adds to the number of things you have to trust.


These are strong arguments.

It's really tragic that the Internet is so ossified. (Not just in this regard, but in many others.)


I'm more thinking about the pre-Chrome era. The DANE drafts weren't even written, let alone standardized, until it was already hard to move and really hard to get people to move. That slowness had a whole lot to do with obstructionism, FUD, and influence campaigning, from commercial interests, specifically Verisign, and perhaps from some anti-cryptography "national security" interests as well.

Admittedly DNSSEC is an essential prerequisite and wasn't really baked, but it was being delayed by essentially the same FUD. And DNSSEC got quite usable, from a pure technology readiness point of view, pretty fast once people started putting a little urgency behind it later on. It's still relatively hard to use, but that would evaporate in six months if there were some impetus for it to get more users.

Not that the browsers helped, mind you (Mozilla wasn't really any better). A switch to DNS as root of trust still could have been done when DANE was finished if there'd been any real will. Much less will than it takes to establish this or that bad-idea standard that's of no value to anybody but advertisers. Or for that matter to set up weird, mutant, off-the-wall, who-asked-for-that, solving-the-wrong-problem-in-the-wrong-way hackery like DNS over HTTP.

Still, by that time, the browsers could point not only at an entrenched system of practice, but also at a ton of broken, clearly-non-standards-compliant middleboxes on the network screwing up DNS, especially in "The Enterprise(TM)". Personally, I had, and still have, zero sympathy for the people who put those boxes there or put themselves behind them. I would have given them exactly zero consideration. But a lot of people somehow seem to think that one system's bad behavior obliges everybody else to bend over backwards to accommodate that system forever after.

There is actually one legitimate knock against DNSSEC: it makes replies bigger, which makes DNS a worse DoS amplifier. I think it's worth it, and if it weren't, my response would probably have just been to start moving DNS over TCP. But at least there's a somewhat respectable argument there. Strangely, it was never the main argument you heard.
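To put rough numbers on the amplification point (these sizes are illustrative guesses, not measurements; real figures depend on the query type and the zone):

    # Made-up sizes for illustration; real values vary widely.
    query_bytes = 70            # small spoofed UDP query
    plain_reply_bytes = 500     # typical unsigned response
    dnssec_reply_bytes = 3500   # response carrying DNSKEY/RRSIG material

    print(f"plain DNS amplification : ~{plain_reply_bytes / query_bytes:.0f}x")
    print(f"DNSSEC amplification    : ~{dnssec_reply_bytes / query_bytes:.0f}x")

Moving to TCP blunts the reflection angle because a spoofed source can't complete the handshake, which is roughly the trade-off described above.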


Ironically, DoS amplification is the one argument against DNSSEC I don't buy; you can already use DNS quite effectively as an amplifier, along with other protocols.

The fundamental problem with this whole line of argument, though, is that even if things worked well (they did not; ask Slack), you're still just trading the WebPKI, with all its warts, for a system that is even less transparent and that is de jure operated by world governments. There will, for instance, never be a "DANE Transparency" log; not only because DANE will never be deployed for real, but also because the market forces that coerced CAs into adopting CT don't exist in the DNS.

