I have a dataset in which the images don't all have the same width and height. How do I perform image classification with such images? I want to avoid resizing as much as possible, because in my case it discards important detail and gives poor results.
-
Check (and if helpful, upvote) these answers on [fully convolutional networks](https://ai.stackexchange.com/questions/21810/what-is-a-fully-convolution-network) and [pyramid pooling](https://ai.stackexchange.com/questions/32300/best-practice-for-handling-letterboxed-images-for-non-fully-convolutional-deep-l) – Edoardo Guerriero Mar 23 '22 at 12:07
-
@EdoardoGuerriero I checked them out, but to no avail. Any ideas or help specific to my question? – skinnybb Mar 23 '22 at 16:06
-
Could you explain the issue in more detail? Both fully convolutional networks and pyramid pooling were designed precisely to tackle variable input sizes in neural networks, so it's not clear to me what the problem is beyond feeding variable-sized images to a model. – Edoardo Guerriero Mar 23 '22 at 16:35
-
@EdoardoGuerriero I will need a fully connected layer in my network, since it has to output probabilities for a fixed number of labels, so I thought I would need to resize. But what are the alternatives to resizing, given that resizing can cause loss of data? – skinnybb Mar 27 '22 at 19:26
-
Then open and read the pyramid pooling paper in the linked answer again: it allows mapping an image of arbitrary size **without resizing or reshaping** onto a dense layer of a specified, fixed size. – Edoardo Guerriero Mar 28 '22 at 06:40
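To make the point in the comments concrete, here is a minimal PyTorch sketch of spatial pyramid pooling feeding a fully connected classifier head. The architecture (channel counts, pyramid levels, class count) is illustrative, not from the thread: the key idea is that `adaptive_max_pool2d` at each pyramid level produces a fixed-size grid regardless of the input's height and width, so the flattened, concatenated vector has a constant length and the `Linear` layer can output label probabilities without any resizing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPClassifier(nn.Module):
    """Sketch: small conv backbone + spatial pyramid pooling + dense head.
    Accepts images of any H x W without resizing (assumed toy sizes/levels)."""

    def __init__(self, num_classes=5, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # The pooled vector length is fixed: 64 channels * (1 + 4 + 16) bins,
        # independent of the input image size.
        self.fc = nn.Linear(64 * sum(n * n for n in levels), num_classes)

    def forward(self, x):
        x = self.features(x)
        # Each level pools the feature map to an n x n grid per channel,
        # then flattens; concatenation yields a constant-length vector.
        pooled = [F.adaptive_max_pool2d(x, n).flatten(1) for n in self.levels]
        return self.fc(torch.cat(pooled, dim=1))

model = SPPClassifier(num_classes=5)
# Two inputs with different spatial sizes map to the same output shape.
out_a = model(torch.randn(1, 3, 100, 150))
out_b = model(torch.randn(1, 3, 224, 97))
assert out_a.shape == out_b.shape == (1, 5)
```

Note that because image sizes differ, batching requires either batch size 1 or padding/bucketing images of similar sizes; the pooling itself handles any single input size.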