What you generally want, rather than lossy compression on top of dithering, is to apply lossy compression to the higher-bit-depth data and then let the decoder add the dithering to represent the extra bits.
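As a rough illustration (not from either post, just a minimal sketch of the decoder-side idea): suppose the codec carries 10-bit samples and the display only takes 8 bits. The decoder can quantize down with TPDF (triangular) dither so the extra two bits show up as fine noise rather than banding. The function name and the 10-to-8-bit case are assumptions for the example.

```python
import numpy as np

def dither_to_8bit(samples_10bit: np.ndarray, rng=None) -> np.ndarray:
    """Quantize 10-bit samples (0..1023) to 8-bit (0..255) with TPDF dither."""
    rng = rng or np.random.default_rng()
    scale = 1023.0 / 255.0  # one 8-bit step, measured in 10-bit units
    # TPDF dither: sum of two uniform variates, peak amplitude of one 8-bit step
    noise = (rng.uniform(-0.5, 0.5, samples_10bit.shape)
             + rng.uniform(-0.5, 0.5, samples_10bit.shape))
    out = np.round(samples_10bit / scale + noise)
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: a smooth 10-bit ramp that would visibly band if truncated to 8 bits
ramp = np.linspace(0, 1023, 4096)
display = dither_to_8bit(ramp)
```

The point of the sketch is just that the dither is added after decoding, so the compressed stream never has to carry the dither noise itself.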
Sure, of course, technically, but that circles back to @quietbritishjim’s point: if you have the bandwidth for, say, 8 bits per channel, then just display it without dithering; dithering would reduce the quality. If you don’t have the bandwidth for 8 bits per channel, then dithering won’t help either: the color resolution is already too low, and dithering will lower it further. In other words, dithering always reduces the color resolution of the signal, so once a compression constraint is involved it’s hard to find a scenario where dithering makes sense. That’s why dithering isn’t used for streaming; it’s mainly used to meet the input color resolution constraints of various media or devices, or for stylistic reasons.