
Many deep learning researchers come up with new CNN architectures.

These architectures are often just combinations of a few existing layers.

Along with their mathematical intuition, do they, in general, visualize intermediate steps by running the model, and then use trial and error (brute force) to arrive at state-of-the-art architectures?

By "visualizing intermediate steps" I mean printing outputs in a suitable format so they can be analyzed. Intermediate steps may refer to feature maps in CNNs, hidden states in RNNs, outputs of hidden layers in MLPs, etc.
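As a minimal illustration of what I mean by printing an intermediate output (a sketch in plain NumPy rather than a deep learning framework, with an invented vertical-edge kernel): apply a single convolution to a toy image and inspect the resulting feature map.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: one channel in, one feature map out."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 input with a vertical edge between columns 2 and 3.
image = np.array([
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
], dtype=float)

# Hypothetical vertical-edge kernel (Sobel-like, for illustration only).
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

feature_map = conv2d(image, kernel)
print(feature_map)  # the "intermediate step" one would inspect:
                    # nonzero responses mark where the edge sits
```

In a real framework one would pull these tensors out of the trained network instead (e.g. via layer hooks or an auxiliary model that exposes a hidden layer's output), but the idea is the same: look at the intermediate array itself, not just the final prediction.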

hanugm
  • This question seems to be related/similar to [this](https://ai.stackexchange.com/q/28059/2444), [this](https://ai.stackexchange.com/q/17055/2444) and [this](https://ai.stackexchange.com/q/6836/2444). – nbro Jul 30 '21 at 12:19
  • Moreover, note that the term "feature map" typically applies only to convolutional neural networks. If you're referring only to CNNs, it's better that you edit your post to clarify that. Also, what do you mean by "visualize intermediate steps" — do you mean the outputs of a hidden layer? I also roughly understand what you mean by brute force, but it may be better to give an example to clarify what you mean in that case too. – nbro Jul 30 '21 at 12:23
  • Okay @nbro. I will improve the question. – hanugm Jul 30 '21 at 12:29

0 Answers