I am currently using an Nvidia GTX 1050 with 640 CUDA cores and 2 GB of GDDR5 for deep neural network training. I want to buy a new GPU for training, but I am not sure how much performance improvement I can expect.
Is there a way to roughly estimate the training performance improvement just by comparing the GPUs' specifications?
Assuming all training parameters stay the same, can I roughly assume the training performance improves X times when the CUDA core count and memory size increase X times?
For example, is an RTX 2070 with 2304 CUDA cores and 8 GB of GDDR6 roughly 4 times faster than the GTX 1050? And is an RTX 2080 Ti with 4352 CUDA cores and 11 GB of GDDR6 roughly 7 times faster than the GTX 1050?
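To make the question concrete, here is the naive calculation I have in mind. It is only a back-of-the-envelope sketch that assumes speedup scales linearly with raw CUDA core count and memory size; I realize real training speed also depends on clock speed, memory bandwidth, and architecture, which is exactly what I am unsure about:

```python
# Naive spec-ratio estimate: assumes training speedup scales linearly with
# CUDA core count (and that memory size matters in the same way), which is
# the assumption I am asking about. Specs are taken from the cards above.

gpus = {
    "GTX 1050":    {"cuda_cores": 640,  "memory_gb": 2},
    "RTX 2070":    {"cuda_cores": 2304, "memory_gb": 8},
    "RTX 2080 Ti": {"cuda_cores": 4352, "memory_gb": 11},
}

baseline = gpus["GTX 1050"]

for name, spec in gpus.items():
    core_ratio = spec["cuda_cores"] / baseline["cuda_cores"]
    mem_ratio = spec["memory_gb"] / baseline["memory_gb"]
    print(f"{name}: {core_ratio:.1f}x the CUDA cores, {mem_ratio:.1f}x the memory")
```

This prints about 3.6x the cores / 4x the memory for the RTX 2070 and about 6.8x the cores / 5.5x the memory for the RTX 2080 Ti, which is where my "roughly 4 times" and "roughly 7 times" figures come from. Is this kind of linear scaling a reasonable first approximation?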
Thanks.