Classification: binary
Model: CNN (ResNet50V2)
During our research we worked with 91x109x91 images (3-dimensional). We used a 2D CNN to train, evaluate, and make predictions on labelled cases, so we had to convert the 3D volumes into 2D slices like this (shape: (n*x, y, z, #color_channels)):
maindata = maindata.reshape(n * 91, 109, 91, 3)  # (n * x, y, z, channels)
What this effectively does is merge the n volumes along the x dimension (91 in our case), so the model was trained on 2D slices of each 3D image rather than on the volumes themselves. So far everything has worked fine and we've received great results.
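For context, here is a minimal sketch of the shapes involved, assuming maindata starts as an array of shape (n, 91, 109, 91, 3) with the channel axis already replicated to 3 for ResNet50V2; n and the random values are purely illustrative:

import numpy as np

n = 4  # hypothetical number of 3D images, for illustration only
maindata = np.random.rand(n, 91, 109, 91, 3).astype("float32")

# Collapse the volume axis and the slice (x) axis into one batch axis:
# (n, 91, 109, 91, 3) -> (n * 91, 109, 91, 3), i.e. n * 91 2D slices.
slices = maindata.reshape(n * 91, 109, 91, 3)

print(slices.shape)  # (364, 109, 91, 3) for n = 4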
However, now we need prediction probabilities for the n 3D images, not for the n * 91 2D slices. Can we take the slice-level prediction probabilities we've received and average every group of 91, or is there a better way to interpret the 2D results at the level of the 3D images?
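To make the question concrete, this is the kind of slice-to-volume averaging we have in mind (a sketch only; slice_probs and the random values stand in for the actual model.predict output, and n is hypothetical):

import numpy as np

n = 4  # hypothetical number of 3D images, matching the sketch above

# Placeholder for the slice-level output, e.g. from model.predict(slices):
# one sigmoid probability per 2D slice, shape (n * 91, 1).
slice_probs = np.random.rand(n * 91, 1)

# Regroup the 91 slice probabilities belonging to each volume and
# average them, giving one probability per 3D image.
volume_probs = slice_probs.reshape(n, 91).mean(axis=1)

print(volume_probs.shape)  # (4,)

This regrouping only matches slices to the right volumes if model.predict was run on the slices in the same order the reshape produced them.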
Both the train (labelled) and test (unlabelled) data are shaped the same way, so the only missing part is the interpretation back to 3D.