
Can someone explain why training a CNN model (in my case DenseNet201) on the same data, with the same data-processing pipeline, can be slower on a better GPU (an RTX 3090) than on a worse one (an RTX 3060), with all other parameters identical (exactly the same PC, just with the new GPU)?

In both cases I used the same batch size and the same settings. The only way to make training faster on the 3090 was to increase the batch size, which was too big for the 3060. But I still don't understand why the same training parameters wouldn't at least train faster.

Even though a big part of the training is reading data from disk and data augmentation (albumentations in my case), the setup is otherwise identical, so even if the GPU work is only a smaller part of an epoch, each epoch should still be at least a bit faster, right?
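
For reference, a minimal sketch of how one could split each training step into time spent waiting on the data pipeline versus time spent on the GPU (assuming a standard PyTorch loop; `model`, `loader`, `criterion`, and `optimizer` are placeholders, not my actual code). If the data time dominates, a faster GPU can barely change the epoch time:

    import time
    import torch

    def profile_epoch(model, loader, criterion, optimizer, device="cuda"):
        # Split each step into "waiting on the DataLoader" vs. "GPU work".
        data_time = gpu_time = 0.0
        model.train()
        end = time.perf_counter()
        for images, targets in loader:
            data_time += time.perf_counter() - end  # time blocked on disk I/O + augmentation

            images, targets = images.to(device), targets.to(device)
            start = time.perf_counter()
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
            torch.cuda.synchronize()  # CUDA calls are asynchronous; wait for the GPU to finish
            gpu_time += time.perf_counter() - start

            end = time.perf_counter()
        print(f"data: {data_time:.1f} s, gpu: {gpu_time:.1f} s")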

GKozinski

1 Answer


Every training run will be slightly different because of the stochastic nature of neural network training.

The first question is: how big is the difference in your case?

Also, newer does not necessarily mean better. A newer card has more raw hardware and computational power, but that says nothing about how well that power is actually used for a specific task.

Maybe the 3090 simply can't leverage its extra power properly in this specific task.
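
If you want to take the data pipeline out of the equation, a rough sketch like this (assuming PyTorch and torchvision; the batch size and class count are placeholders) benchmarks pure forward/backward throughput on synthetic data. Run it on both cards: if the 3090 wins here but loses on your real training, the bottleneck is the disk/augmentation pipeline, not the GPU:

    import time
    import torch
    from torchvision.models import densenet201

    # Synthetic batch: no disk I/O, no augmentation, GPU work only.
    model = densenet201(num_classes=10).cuda().train()
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(16, 3, 224, 224, device="cuda")
    y = torch.randint(0, 10, (16,), device="cuda")

    for _ in range(5):  # warm-up iterations (kernel selection, caching)
        criterion(model(x), y).backward()

    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(50):
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    torch.cuda.synchronize()
    print(f"{50 * x.size(0) / (time.perf_counter() - start):.1f} images/s")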

Skobo Do
  • But isn't image classification with a CNN model a pretty standard task in DL? From your answer I gather that if a person or a company wants to increase their computational power to speed up their work, they would have to buy a couple of different GPUs and test each task's model on each of them? Is this really how it works? Am I supposed to have three different PCs and for each task always find the best one? Buying a card with more computational power isn't enough anymore? – GKozinski Oct 20 '22 at 14:05
  • @GKozinski I can't really say more. I have no insight into your setup etc. I just gave you some general hints. Maybe check your drivers, or the task is so trivial that the higher power of the 3090 becomes irrelevant. – Skobo Do Oct 20 '22 at 18:56