How does one handle images that are too large to fit in GPU memory for ML image analysis?
I am interested in detecting small structures in images that are themselves many gigabytes in size. Beyond simple downsampling and patch-based analysis, what other modern techniques do people use to analyze such images in machine-learning pipelines?
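For context, by patch-based analysis I mean something roughly like the following (a minimal NumPy sketch with illustrative tile sizes; in practice the array would be memory-mapped, e.g. via `np.memmap` or a chunked format like Zarr, so tiles load lazily instead of reading the whole image into RAM):

```python
import numpy as np

def iter_patches(image, patch_size=512, stride=448):
    """Yield (y, x, patch) tiles covering a large 2D image array.

    A stride smaller than patch_size gives overlapping tiles, which
    helps avoid cutting small structures at tile boundaries.
    Sizes here are illustrative, not tuned.
    """
    h, w = image.shape[:2]
    for y in range(0, max(h - patch_size, 0) + 1, stride):
        for x in range(0, max(w - patch_size, 0) + 1, stride):
            yield y, x, image[y:y + patch_size, x:x + patch_size]

# Toy stand-in for a huge on-disk image; each patch would then be
# batched and sent to the GPU independently.
img = np.zeros((1024, 1024), dtype=np.uint8)
patches = list(iter_patches(img))
```

The downside, and the reason I am asking, is that each patch only sees local context, so structures larger than a tile (or detections that depend on global context) are handled poorly without extra machinery for stitching or multi-scale fusion.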