I can't say whether I'm the exception or the norm, but I do get to use some of the models from those papers from time to time. Over the past two years, I worked on a few proof-of-concept projects, so I got to try some less common stuff. From the GAN Papers to Read list, I played with #1, #2, #3 (Pix2Pix), #6, #7, #8, and #10 (and tried quite a few of the losses listed there).
One occasion where I put all this research to use was when I wrote my paper on GANs for pixel art. However, not everything I coded paid off. For instance, Perceptual Loss and Spectral Norm only marginally improved my results (likely because of the pixel-art aspect), so I didn't include them in the paper.
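To give an idea of what a tweak like that looks like in code, here is a minimal sketch (assuming PyTorch) of spectral normalization applied to a small discriminator. The layer sizes and architecture are purely illustrative, not taken from my paper:

```python
import torch
import torch.nn as nn

def sn_conv(in_ch, out_ch):
    # spectral_norm re-normalizes the weight before each forward pass,
    # bounding the layer's Lipschitz constant to stabilize GAN training
    return nn.utils.spectral_norm(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)
    )

# toy discriminator: two spectrally normalized conv blocks + a linear head
discriminator = nn.Sequential(
    sn_conv(3, 64), nn.LeakyReLU(0.2),
    sn_conv(64, 128), nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.LazyLinear(1),  # real/fake logit
)

scores = discriminator(torch.randn(8, 3, 64, 64))  # shape (8, 1)
```

The change is literally one wrapper per layer, which is why it's cheap to try, and also why it's easy to drop when it doesn't move the needle.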
Still, although I get to mess with a lot of research content, I have to say that, in practice, you almost always end up using something rather common, such as ResNet, U-Net, or BERT. Most of the time, the fancy stuff only gets you an extra 1% improvement but gives you 300% more headaches (slower training, more code to test).
Also, quite a few papers don't work all that well in practice. For instance, there is a lot of literature on high-resolution GANs, yet most day-to-day problems that could benefit from GANs are low resolution, so the changes people propose for high-res generation often don't make much practical difference.
Here is a link to the story I wrote about my paper on GANs and pixel art, in case you're interested: https://towardsdatascience.com/painting-pixel-art-with-machine-learning-5d21b260486