> but I even remember some IHV saying that this level of control isn’t always a good thing.
Because that control is only as good as your ability to master it, and not all game developers do well on that front. Just look at enhanced barriers in DX12 and all of the rules around them. You almost need to train as a lawyer to digest that clusterfuck.
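To give a flavor of those rules, here's a minimal sketch of a single texture transition under enhanced barriers (assuming a valid `ID3D12GraphicsCommandList7` and a texture that was just rendered to; the function name is just illustrative). Every sync/access/layout pairing has its own compatibility rules in the spec, and getting one wrong means validation errors or undefined behavior:

```cpp
#include <d3d12.h>

// Sketch: transition one texture from render-target writes to
// pixel-shader reads using enhanced barriers. `cmdList` and `tex`
// are assumed to already exist; error handling omitted.
void TransitionRtToSrv(ID3D12GraphicsCommandList7* cmdList, ID3D12Resource* tex)
{
    D3D12_TEXTURE_BARRIER barrier = {};
    barrier.SyncBefore   = D3D12_BARRIER_SYNC_RENDER_TARGET;     // stage that wrote
    barrier.SyncAfter    = D3D12_BARRIER_SYNC_PIXEL_SHADING;     // stage that will read
    barrier.AccessBefore = D3D12_BARRIER_ACCESS_RENDER_TARGET;   // must be legal with SyncBefore
    barrier.AccessAfter  = D3D12_BARRIER_ACCESS_SHADER_RESOURCE; // must be legal with SyncAfter
    barrier.LayoutBefore = D3D12_BARRIER_LAYOUT_RENDER_TARGET;
    barrier.LayoutAfter  = D3D12_BARRIER_LAYOUT_SHADER_RESOURCE;
    barrier.pResource    = tex;
    barrier.Subresources.IndexOrFirstMipLevel = 0xffffffff;      // shorthand for "all subresources"
    barrier.Flags        = D3D12_TEXTURE_BARRIER_FLAG_NONE;

    D3D12_BARRIER_GROUP group = {};
    group.Type             = D3D12_BARRIER_TYPE_TEXTURE;
    group.NumBarriers      = 1;
    group.pTextureBarriers = &barrier;

    cmdList->Barrier(1, &group);
}
```

And that's the simple single-queue case; cross-queue work, aliasing, and discard flows each pile on more constraints.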
> The hardware (and its driver) could then decide what’s optimal and how to turn that into pixels on the screen.
We should go in the other direction: define a goddamn ISA you can target across architectures, an x86 for GPUs (though ideally not as license-encumbered as x86), and let people write code against it. Get rid of the whole proprietary driver stack while you're at it.
The problem with DX12/Vulkan isn’t just that “low-level control is hard”; it’s that many performance-critical decisions are now exposed at a level where they’re extremely GPU- and generation-specific. The same synchronization strategy, command ordering, or memory usage can work great on one GPU and badly on another.
A GPU ISA wouldn’t fix that; it would push even more of those decisions onto the developer.
An ISA only really helps if the underlying execution and memory model is reasonably stable and uniform. That’s true for CPUs, which is why x86 works. GPUs are the opposite: wave sizes, scheduling models, cache behavior, tiling, and memory hierarchies all differ between vendors and change from generation to generation. If a GPU ISA is abstract enough to survive that, it’s no longer a useful performance target; if it’s concrete enough to matter for performance, it becomes brittle and quickly outdated.
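Wave size alone makes the point: it isn't even a compile-time constant today. Here's a sketch of the runtime query D3D12 exposes for it (assuming a valid `ID3D12Device`; the function name is illustrative):

```cpp
#include <cstdio>
#include <d3d12.h>

// Sketch: the SIMD wave width is a per-device runtime property in D3D12,
// not a constant a portable ISA could bake in.
void PrintWaveWidth(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS1 opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS1, &opts, sizeof(opts))))
    {
        // Typically 32 on NVIDIA and Intel, 32 or 64 on AMD depending on
        // generation; code tuned for one width can degrade on the other.
        std::printf("wave lanes: min %u, max %u\n",
                    opts.WaveLaneCountMin, opts.WaveLaneCountMax);
    }
}
```

A fixed ISA would have to either pin this down (and mismatch someone's hardware) or leave it variable (and stop being a stable target).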
DX12 already moved the abstraction line downward. A GPU ISA would move it even further down. The issues being discussed here are largely a consequence of that shift, not something solved by continuing it.
What the blog post is really arguing for is the opposite direction: higher-level, more declarative APIs, where you describe what you want rendered and let the driver/hardware decide how to execute it efficiently on a given GPU. That’s exactly what drivers are good at, and it’s what made older APIs more robust across vendors in the first place.
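For contrast, here's the same render-then-sample pattern as the barrier example above, sketched against legacy OpenGL (`drawScene` and `drawFullscreenPass` are hypothetical helpers; a real program also needs a GL loader for `glBindFramebuffer`):

```cpp
#include <GL/gl.h>

void drawScene();           // hypothetical: writes into tex via fbo
void drawFullscreenPass();  // hypothetical: samples tex

// No sync, access, or layout flags anywhere: the driver sees the
// read-after-write hazard on `tex` and schedules whatever its
// particular hardware needs.
void RenderThenSample(GLuint fbo, GLuint tex)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    drawScene();
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    glBindTexture(GL_TEXTURE_2D, tex);  // driver handles the hazard
    drawFullscreenPass();
}
```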
So while a GPU ISA is an interesting idea in general, it doesn’t really address the problem being discussed here.