One example that springs to mind was a vendor antivirus system that unpacked email attachments into /tmp - generally, when it died on its arse, the only way to figure out why was to dig into /tmp, look at what it had left, and try to infer backwards from there to the culprit email.
Yes, the problem isn't disk usage. The problem is that if journald's writes get too slow, a watchdog timeout will cause systemd to assume it has crashed and shoot it in the head, which leaves the journal partly written - and on restart the new journald process throws the old journal away as corrupt.
(this may have been fixed in the last couple of years, but it leaves me somewhat distrustful of journald when it comes to actually being able to read my logs)
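For what it's worth, the timeout in question is just the normal service watchdog on systemd-journald.service, so it can be inspected and lengthened like any other unit's. A rough sketch - the 3min value and the override path are assumptions, your distro's defaults may differ:

    # See what watchdog interval journald is currently running with
    systemctl show -p WatchdogUSec systemd-journald

    # Lengthen it via a drop-in (opens an editor, writes
    # /etc/systemd/system/systemd-journald.service.d/override.conf)
    sudo systemctl edit systemd-journald
    #   [Service]
    #   WatchdogSec=3min

    # Check whether any on-disk journal files have already been
    # flagged as corrupt
    journalctl --verify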
If I'm not mistaken, journald periodically deletes old entries to keep its disk usage under control.
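Right - that pruning is driven by the retention settings in journald.conf. Roughly (the option names are real journald options, but the values here are only illustrative, not what any particular distro ships):

    # /etc/systemd/journald.conf
    [Journal]
    SystemMaxUse=500M        # cap total disk usage of the persistent journal
    MaxRetentionSec=1month   # drop entries older than this

    # One-off trimming works too:
    #   journalctl --vacuum-size=500M
    #   journalctl --vacuum-time=1month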