Given an input image and an angle, I want the output to be the image rotated by that angle.
So I want to train a neural network to do this from scratch.
What sort of architecture do you think would work for this if I want it to be lossless?
I'm thinking of this architecture:
256x256 image
--> convolutions to 64x64 image with 4 channels
--> convolutions to 32x32 image with 16 channels and so on
until reaching a 1x1 image with 256x256 channels.
Then combine this with the input angle, followed by a series of deconvolutions back up to 256x256.
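As a rough shape-bookkeeping sketch of the encoder side (this is my own arithmetic, assuming each stage halves the spatial side, and noting that keeping the total value count constant, a prerequisite for losslessness, requires quadrupling the channels at every halving):

```python
def shape_progression(side=256, channels=1):
    """Return (side, channels, total_values) per stage, halving the side
    and quadrupling the channels so the total value count stays fixed."""
    stages = []
    while side >= 1:
        stages.append((side, channels, side * side * channels))
        side //= 2
        channels *= 4
    return stages

stages = shape_progression()
# Every stage carries the same number of values as the input: 256*256 = 65536.
assert all(total == 256 * 256 for _, _, total in stages)
# The bottleneck is a 1x1 "image" with 65536 channels.
assert stages[-1][:2] == (1, 65536)
```

Under this bookkeeping, a 64x64 stage would need 16 channels (not 4) and a 32x32 stage would need 64 channels to avoid discarding information before the bottleneck.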
Do you think this would work? Could this be trained as a general rotation machine? Or is there a better architecture?
I would also like to train the same architecture to do other transforms.