Now, the following may sound silly, but I want to do it to better understand the performance and implementation side of GPU inference for a class of deep learning problems.
What I want to do is replace a surface texture of a 3D model with a neural network that stores the texture data in some form and lets me infer the RGB color of an arbitrary texel from its UV coordinates. So basically it should offer the same functionality as the texture itself.
A regular texture lookup takes a UV coordinate and returns the (possibly filtered) RGB color at those texture coordinates.
So I want to train a network that takes two floats in the [0,1] range as input (the UV coordinates) and outputs three floats (the RGB color).
I then want to train that network to store my 4096x4096 texture, so the training data available to me consists of 4096*4096 = 16,777,216 <float2, float3> pairs.
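To make this concrete, here is a minimal training sketch of the kind of thing I have in mind, assuming a small fully connected MLP in PyTorch; the layer sizes, optimizer, and hyperparameters are placeholder assumptions on my part, not known-good choices:

```python
# Minimal sketch: train a small MLP to map UV -> RGB.
# Layer width, learning rate, batch size, and step count are arbitrary assumptions.
import torch
import torch.nn as nn

class TextureMLP(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB clamped to [0,1]
        )

    def forward(self, uv):
        return self.net(uv)

# texture: float tensor of shape (4096, 4096, 3) with values in [0,1]
def train(texture, steps=10000, batch=65536, lr=1e-3, device="cuda"):
    model = TextureMLP().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    h, w, _ = texture.shape
    tex = texture.reshape(-1, 3).to(device)
    for _ in range(steps):
        # Sample a random batch of texels and recover their UVs in [0,1].
        idx = torch.randint(0, h * w, (batch,), device=device)
        uv = torch.stack(((idx % w).float() / (w - 1),
                          (idx // w).float() / (h - 1)), dim=1)
        loss = nn.functional.mse_loss(model(uv), tex[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

The Sigmoid on the output keeps predictions in the [0,1] color range, and MSE against the stored texels seems like the obvious reconstruction loss.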
Finally, I want to evaluate the trained network in my (OpenGL 4 or DirectX 11) pixel shader, feeding it, for every rendered pixel, the interpolated UV coordinates at that pixel and retrieving the RGB value from the network.
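Inference for such an MLP is just a handful of matrix-vector products, so here is a numpy sketch of the computation the pixel shader would have to perform for each fragment (the weights W1, b1, ... are the trained parameters from the sketch above; in GLSL/HLSL they would be uploaded as uniforms or baked in as constants):

```python
# Sketch of the per-pixel computation the shader would perform.
# In GLSL/HLSL each line becomes a loop of dot products (or mat-vec ops)
# over the hidden width.
import numpy as np

def infer_rgb(uv, W1, b1, W2, b2, W3, b3):
    """uv: array of shape (2,) in [0,1]; returns RGB of shape (3,)."""
    h = np.maximum(W1 @ uv + b1, 0.0)   # Linear + ReLU
    h = np.maximum(W2 @ h + b2, 0.0)    # Linear + ReLU
    o = W3 @ h + b3
    return 1.0 / (1.0 + np.exp(-o))     # Sigmoid -> RGB in [0,1]
```

In the shader, the per-pixel cost scales directly with the number of weights, which is why I care about keeping the network small.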
It's clear that this will
- have lower fidelity than just using the texture directly
- likely use more memory than just using the texture directly
- be slower than using the texture directly
and as such may be silly to do, but I'd still like to try to do this somewhat optimally, especially in terms of inference performance (I'd like to be able to run it at interactive framerates at 1080p resolution).
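To get a feel for whether interactive framerates are even plausible, here is a back-of-the-envelope count for the 2 -> 64 -> 64 -> 3 architecture assumed above:

```python
# Rough per-frame cost estimate for the assumed 2->64->64->3 MLP.
pixels = 1920 * 1080                    # ~2.07M fragments at 1080p
macs_per_pixel = 2*64 + 64*64 + 64*3   # = 4416 multiply-accumulates
print(pixels * macs_per_pixel / 1e9)   # ~9.2 GMACs per frame
```

At 60 fps that's roughly 550 GMACs/s, which naively looks affordable on a modern GPU, though real shader throughput will depend heavily on how the weights are stored and fetched.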
Can someone point me to a class of networks or relevant articles, or describe a model and training algorithm, that would be well suited to this task (especially with regard to implementing inference in the pixel shader)?