My experience is that if AI creates the mess, AI should clean it up, and it usually can, if you put it in a suitable agent loop: one that reviews the output, hands small, well-defined cleanup steps back to an agent, and runs the test suites.
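Very roughly, that loop looks something like the sketch below. It's a minimal illustration under stated assumptions, not a definitive implementation: call_model is a hypothetical stand-in for whichever LLM API you actually use, and pytest is assumed as the test runner purely for the example.

```python
import subprocess

MAX_ITERATIONS = 5  # arbitrary cap so the loop always terminates


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM API you use (hosted or local)."""
    raise NotImplementedError("wire this up to your model of choice")


def run_tests() -> tuple[bool, str]:
    """Run the project's test suite; pytest is assumed here purely for illustration."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def cleanup_loop(initial_diff: str) -> str:
    """Iteratively review and refine AI-generated code before any human looks at it."""
    diff = initial_diff
    for _ in range(MAX_ITERATIONS):
        passed, test_output = run_tests()
        # Ask a reviewer model for small, well-defined cleanup steps.
        review = call_model(
            "Review this change. List small, well-defined cleanup steps, "
            "or reply DONE if no further refinement is needed.\n\n"
            f"Test results:\n{test_output}\n\nDiff:\n{diff}"
        )
        if passed and review.strip() == "DONE":
            break  # hand off to the human reviewer at this point
        # Hand each batch of cleanup steps back to the coding agent as its own small task.
        diff = call_model(
            "Apply these cleanup steps and return the updated diff:\n"
            f"{review}\n\nCurrent diff:\n{diff}"
        )
    return diff  # still needs a final human review, but hopefully less of one
```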
If you review the first-stage output from the AI manually, you're wasting time.
You still need to review the final outputs, but reviewing the initial output is like demanding that a developer hand over code they've only just gotten working and then pointing out all of its issues before they've had a chance to clean it up. It's not helpful to anyone unless your time costs the business less than the AI's time does.
That's categorically not true, as long as there's a human reviewer at the end of the chain. It can usually continue to deliver actual improvements over several iterations (just like a human would).
That doesn't mean you can get away with not reviewing it. But you can, with substantial benefit, defer your review until an AI reviewer decides the code doesn't need further refinement. It probably still needs refinement despite the AI's say-so (and sometimes it needs throwing away), but in my experience it's also highly likely to need less, and to take less time to review.