
According to the U-Net architecture figure on the second page of the research paper (https://arxiv.org/pdf/1505.04597.pdf):

How does the skip connection match its dimension to the same layer in the expansive path?

  • The image makes it look like the layer sizes (width and height) are the same. Do you mean the feature dimension? – Cloud Cho Dec 15 '21 at 01:23
  • Can you please provide more details about your problem? I am familiar with U-net, but maybe not all people, so you may want to _briefly_ describe U-net. What is the expansive path? What do you understand about "skip connections" in this specific context? How are they defined? Have you read the paper? Why do you think those skip connections may not match the dimension of the layer in the expansive path? These are all questions that you should have answered in order to provide us more context about what your problem really is. See https://ai.stackexchange.com/help/how-to-ask – nbro Dec 15 '21 at 09:13

1 Answer


The output of each layer in the upscaling (expansive) block has the same spatial size as the feature maps of the corresponding convolution layer in the downscaling (contracting) block, but only after those feature maps have been cropped.

This is how the network is defined. Each conv layer in the downscaling block has a corresponding layer in the upscaling block to which a skip connection is made, except for the layer in the middle (sometimes called the latent or bottleneck layer). This is the layer that separates the downscaling block from the upscaling block, as seen in the original paper.

So, in short, it is just the way the network is designed: it doesn't use the whole feature map from the corresponding layer in the downsampling block, only a center crop that matches the size of the upsampled feature maps. For example, in Fig. 1 of the paper, the 64×64 feature maps from the contracting path are cropped to 56×56 before being concatenated with the upsampled 56×56 maps.
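Here is a minimal PyTorch sketch of this copy-and-crop step. The `center_crop` helper and the use of PyTorch are my illustration, not code from the paper; the tensor shapes are taken from Fig. 1.

```python
import torch


def center_crop(enc_feat, target_hw):
    """Center-crop encoder feature maps (N, C, H, W) to a target (H, W)."""
    _, _, h, w = enc_feat.shape
    th, tw = target_hw
    top = (h - th) // 2
    left = (w - tw) // 2
    return enc_feat[:, :, top:top + th, left:left + tw]


# Shapes from Fig. 1 of the paper: the 64x64 contracting-path maps are
# cropped to match the 56x56 output of the up-convolution.
enc = torch.randn(1, 512, 64, 64)   # contracting-path feature maps
dec = torch.randn(1, 512, 56, 56)   # upsampled feature maps (expansive path)

skip = center_crop(enc, dec.shape[-2:])  # 64x64 -> 56x56
merged = torch.cat([skip, dec], dim=1)   # concatenate along the channel axis
print(merged.shape)                      # torch.Size([1, 1024, 56, 56])
```

The concatenated 1024-channel tensor is then fed to the next pair of convolutions in the expansive path, exactly as drawn in the figure.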

For a reference, you can look at TernausNet, where they had to crop randomly to support the VGG encoder in the U-Net structure.