
I am trying to reproduce the paper Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search. In the paper, the authors reduce an MLP trained on MNIST (2 layers, 100 neurons) to a motif network initialized from it (2 layers, 1 neuron each) by extracting the sigmoid activation. I have searched a lot, but I have not found an answer to how someone can extract an 'architectural motif' from a trained neural network.
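For reference, here is a minimal sketch (in PyTorch) of my current understanding: the 'motif' is the scaled-sigmoid activation (characterized by its slope value), so 'extracting' it would mean instantiating the tiny petri-dish network with the same activation rather than copying trained weights. The class and function names below are my own guesses, not taken from the paper's code, so please correct me if this is wrong.

```python
import torch
import torch.nn as nn

class ScaledSigmoid(nn.Module):
    """Sigmoid with a slope hyperparameter c: f(x) = 1 / (1 + exp(-c * x))."""
    def __init__(self, slope: float):
        super().__init__()
        self.slope = slope

    def forward(self, x):
        return torch.sigmoid(self.slope * x)

def mnist_mlp(slope: float) -> nn.Sequential:
    # Ground-truth network: MLP on MNIST with sigmoid activations of a given slope.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 100), ScaledSigmoid(slope),
        nn.Linear(100, 100), ScaledSigmoid(slope),
        nn.Linear(100, 10),
    )

def motif_network(slope: float) -> nn.Sequential:
    # Petri-dish network: 2 layers, 1 neuron each, reusing the same sigmoid motif.
    # Here "extracting" the motif only means carrying over the activation (the
    # slope value), not any weights learned on MNIST.
    return nn.Sequential(
        nn.Linear(1, 1), ScaledSigmoid(slope),
        nn.Linear(1, 1), ScaledSigmoid(slope),
    )

# A motif training point would then be the pair
# (slope, validation accuracy of mnist_mlp(slope) after training on MNIST).
```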

Perl Del Rey
  • Hi. Welcome to AI SE. Have you already looked into https://github.com/uber-research/Synthetic-Petri-Dish? If you find the answer in this Github repo or somewhere else, feel free to write a formal answer below to your own question ;) – nbro Jan 13 '21 at 17:04
  • @nbro Hello, thanks for welcoming me. Indeed I did, but unfortunately they did not include the motif extraction part in their code. All they do is display the motif training points, which they define in their paper as tuples of slope values and validation accuracies. – Perl Del Rey Jan 13 '21 at 17:10
  • Maybe the best thing to do is open an issue in the issue tracker of that repo, if nobody provides an answer meanwhile. – nbro Jan 13 '21 at 17:12

0 Answers