
I'm using a style-transfer deep learning approach based on VGG (a convolutional neural network). It works well with small images (512×512 pixels), but it produces distorted results when the input images are large (longest side > 1500 px). The author of the approach suggested splitting a large input image into portions, performing style transfer on each portion separately, and then concatenating the stylized portions into one final large image, since VGG was designed for small inputs. The problem with this method is that the resulting image has inconsistent regions along the seams where the portions were "glued" together. How can I correct these areas (I sketched one blending idea below)? Is there an alternative to this dividing method? Thanks.

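One way to hide the seams, as far as I understand, is to make the portions overlap and cross-fade them instead of butting them edge to edge. Below is a minimal sketch of that idea in Python/NumPy. `stylize` is a placeholder for whatever per-tile style-transfer call is already available; the tile size, the overlap width, and the assumption that `stylize` accepts tiles of arbitrary shape are all mine, not part of the original approach:

    import numpy as np

    def stylize_tiled(image, stylize, tile=512, overlap=64):
        """Split `image` (H, W, C float array) into overlapping tiles,
        stylize each tile, and cross-fade the overlaps with linear ramps
        so there is no hard cut where tiles meet."""
        h, w, c = image.shape
        out = np.zeros((h, w, c), dtype=np.float64)
        weight = np.zeros((h, w, 1), dtype=np.float64)
        step = tile - overlap  # tiles advance by less than their size, so they overlap
        for y in range(0, h, step):
            for x in range(0, w, step):
                y1, x1 = min(y + tile, h), min(x + tile, w)
                patch = stylize(image[y:y1, x:x1])
                # Ramp weights: small at tile borders, large in the interior,
                # so overlapping tiles blend instead of hard-cutting.
                wy = np.minimum(np.arange(y1 - y) + 1, np.arange(y1 - y)[::-1] + 1)
                wx = np.minimum(np.arange(x1 - x) + 1, np.arange(x1 - x)[::-1] + 1)
                mask = np.minimum.outer(wy, wx).astype(np.float64)[..., None]
                out[y:y1, x:x1] += patch * mask
                weight[y:y1, x:x1] += mask
        # Normalizing by the accumulated weight restores full intensity
        # everywhere, including the image borders covered by a single tile.
        return out / np.maximum(weight, 1e-8)

With an overlap of around 64 px this hides the hard local seams, though tiles stylized independently can still drift in overall color statistics, so some global inconsistency may remain.
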
  • You could add more layers at each end of the network to downscale and upscale the images, but then you would have to retrain it. Alternatively, you could downscale your images, run them through the network, and then use another network or a simple method to upscale the result, if that is what you want (see the sketch after these comments). – Al rl Sep 11 '20 at 13:55
  • I would like to use the same network if possible – jeanluc Sep 12 '20 at 21:01
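For reference, a minimal sketch of the downscale-then-upscale idea from the comment above, using Pillow for resampling. `stylize` is again a placeholder for the existing style-transfer call, the input is assumed to be a uint8 H×W×3 array, and `max_side=512` is an assumed working size rather than anything from the original approach:

    import numpy as np
    from PIL import Image

    def stylize_downscaled(image, stylize, max_side=512):
        """Downscale the input so its longer side is `max_side`, run the
        style-transfer network at the size it handles well, then resize
        the result back to the original resolution."""
        h, w = image.shape[:2]
        scale = max_side / max(h, w)
        small = Image.fromarray(image).resize(
            (round(w * scale), round(h * scale)), Image.LANCZOS)
        styled = stylize(np.asarray(small))
        # Upscale the stylized result back to the original size; a learned
        # super-resolution model could replace LANCZOS here for sharper output.
        big = Image.fromarray(
            np.clip(styled, 0, 255).astype(np.uint8)).resize((w, h), Image.LANCZOS)
        return np.asarray(big)

Plain Lanczos upscaling will soften fine texture, which is why the comment suggests "another network" (e.g. a super-resolution model) for the final upscaling step.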

0 Answers