Yes, reshaping operations aren't very interesting theoretically; they just make the data compatible with the operations that follow.
For example, if you have a 1D array of pixels and you want to do a 2D convolution, you can reshape that array into 2D (not with unsqueeze specifically, but with a general reshape operation) so the 2D convolution code knows where the rows and columns are. You could write 2D convolution code that works on a 1D array of pixels, or you could just make it 2D and then use the normal 2D convolution code.
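As a quick sketch of that idea (I'm using NumPy here, whose `reshape` behaves the same way as the tensor libraries' version):

```python
import numpy as np

# 12 pixel values stored as one flat 1D array.
pixels = np.arange(12)

# Reinterpret them as a 3x4 image so 2D code can find rows and columns.
image = pixels.reshape(3, 4)

print(image.shape)   # (3, 4)
print(image[1, 2])   # row 1, col 2 is the flat element at index 1*4 + 2 = 6
```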
Same with unsqueeze. Perhaps you want to feed a 2D array into a convolution function that expects the last dimension to be channels. You can add a last dimension of 1 and now that code can see there's 1 channel. Or you want to pass one data sample through a function that takes batches. You can add a first dimension of 1 meaning there's only one item in the batch. Adding or removing dimensions of size 1 is free, since it doesn't change the data, only the interpretation of the data.
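Here's a sketch of both cases. I'm using NumPy's `expand_dims`, which is the analogue of `unsqueeze`, so the shapes are easy to check:

```python
import numpy as np

img = np.zeros((4, 5))             # a 4x5 single-channel image

# Add a trailing channels dimension of size 1
# (torch equivalent: img.unsqueeze(-1)).
with_channel = np.expand_dims(img, -1)
print(with_channel.shape)          # (4, 5, 1)

# Add a leading batch dimension of size 1
# (torch equivalent: img.unsqueeze(0)).
batch = np.expand_dims(img, 0)
print(batch.shape)                 # (1, 4, 5)

# The underlying data is untouched: same elements, same order.
assert batch.ravel().tolist() == img.ravel().tolist()
```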
If you wanted to convert a greyscale image to RGB (but still grey) you might use unsqueeze followed by repeat_interleave to duplicate that 1 channel into 3.
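In NumPy terms (where `np.repeat` plays the role of `repeat_interleave`), that greyscale-to-RGB trick looks something like:

```python
import numpy as np

grey = np.arange(6).reshape(2, 3)   # a 2x3 greyscale image

# Add a channel dimension of size 1, then duplicate it 3 times.
# torch equivalent: grey.unsqueeze(0).repeat_interleave(3, dim=0)
rgb = np.repeat(np.expand_dims(grey, 0), 3, axis=0)

print(rgb.shape)                    # (3, 2, 3): channels, rows, cols

# All three channels are identical copies of the grey values.
assert all((rgb[c] == grey).all() for c in range(3))
```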
It may be worth noting that TensorFlow has a very generic "reshape" operation which lets you convert any shape of tensor into any other shape of tensor, as long as the total number of elements is the same.
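NumPy's reshape has the same contract, so here's a sketch of the "any shape to any shape, same element count" rule:

```python
import numpy as np

t = np.arange(24)

a = t.reshape(2, 3, 4)    # 2*3*4 = 24 elements: fine
b = t.reshape(6, 4)       # 6*4 = 24 elements: also fine
print(a.shape, b.shape)   # (2, 3, 4) (6, 4)

# A shape with a different element count is rejected.
try:
    t.reshape(5, 5)       # 25 != 24
except ValueError as e:
    print("rejected:", e)
```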