Both are transfer learning approaches, which this PyTorch tutorial explains very well:
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
In practice, very few people train an entire Convolutional Network
from scratch (with random initialization), because it is relatively
rare to have a dataset of sufficient size. Instead, it is common to
pretrain a ConvNet on a very large dataset (e.g. ImageNet, which
contains 1.2 million images with 1000 categories), and then use the
ConvNet either as an initialization or a fixed feature extractor for
the task of interest.
These two major transfer learning scenarios look as follows:
Finetuning the convnet: Instead of random initialization, we initialize the network with a pretrained network, such as one trained on the ImageNet 1000-class dataset. The rest of the training proceeds as usual, updating all of the weights.
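As an illustration, here is a minimal PyTorch sketch of the finetuning scenario, along the lines of the linked tutorial. It assumes torchvision's ResNet-18 and a placeholder num_classes; every parameter stays trainable.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet.
# (On older torchvision versions, use models.resnet18(pretrained=True) instead.)
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the final fully connected layer to match our own task.
# num_classes = 2 is just a placeholder for your dataset.
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

# All parameters keep requires_grad=True, so the whole network is finetuned.
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
```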
ConvNet as fixed feature extractor: Here, we freeze the weights of the entire network except those of the final fully connected layer. This last layer is replaced with a new one with random weights, and only this layer is trained.
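And a corresponding sketch of the fixed-feature-extractor scenario, again assuming ResNet-18 and a placeholder num_classes. The pretrained weights are frozen, and only the new final layer is passed to the optimizer.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze every pretrained parameter so backprop does not update it.
for param in model.parameters():
    param.requires_grad = False

# The newly created final layer has requires_grad=True by default,
# so it is the only part of the network that gets trained.
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
# Only the parameters of the new layer are given to the optimizer.
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
```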
In summary, layer freezing (the fixed feature extractor) trains faster, but after enough training it is typically less accurate than finetuning.