In short
I mentioned in another post how artificial neural network (ANN) weights are a relatively crude abstraction of the connections between neurons in the brain. Similarly, the random weight initialization step in ANNs is a simple procedure that abstracts away the complexity of central nervous system development and synaptogenesis.
A bit more detail
The neocortex (one of its columns, more specifically) is a region of the brain that somewhat resembles an ANN. It has a laminar structure, with some layers receiving axons from, and sending axons to, other brain regions. Those layers can be viewed as the "input" and "output" layers of an ANN (axons "send" signals, dendrites "receive" them). The other layers perform intermediate processing and can be viewed as the ANN's "hidden" layers.
When building an ANN, the programmer sets the number of layers and the number of units in each layer. In the neocortex, the number of layers and the cell counts within each layer are determined mostly by genes (however, see: Human echolocation for an example of post-birth brain plasticity). Chemical cues guide the positions of the cell bodies and create the laminar structure; they also seem to guide the long-range axonal connections between distant brain regions. The cells then sprout dendrites in characteristic "tree-like" patterns (see: NeuroMorpho.org for examples). The dendrites then form synapses with the axons or cell bodies they encounter along the way, largely based on the type of cell encountered.
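To make the ANN side of this analogy concrete, here is a minimal Python sketch of what the programmer fixes up front: the architecture (layer and unit counts) exists before any weight values do. The specific layer sizes are arbitrary placeholders, not taken from anywhere in particular.

```python
# Hypothetical layer sizes: the programmer fixes the architecture up front,
# much as genes fix the neocortex's layer count and per-layer cell counts.
layer_sizes = [784, 256, 10]  # "input", "hidden", "output"

# One weight matrix per pair of adjacent layers. Only the shapes are fixed
# here; the values come later (see the initialization sketch below).
weight_shapes = [(m, n) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
print(weight_shapes)  # [(784, 256), (256, 10)]
```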
This last phase is probably the most analogous to the idea of random weight initialization in ANNs. Given a cell's position and type, which other neurons it encounters is somewhat random, and so are the connections it forms with them. These connections are probably not very strong initially, but they have room to strengthen during learning (roughly analogous to initial random weights between 0 and ~0.1, with 1 being the strongest possible connection). Furthermore, most cells are either inhibitory or excitatory (analogous to negative and positive weights).
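As a rough illustration, here is a minimal NumPy sketch of an initialization with those two properties: small random magnitudes, and a fixed sign per presynaptic cell. The layer sizes, the 0–0.1 range, and the 80/20 excitatory/inhibitory split are illustrative assumptions, not any standard library's initialization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 256, 128  # hypothetical layer sizes

# Small initial magnitudes: weak nascent synapses, roughly 0 to ~0.1
# on a scale where 1 is the strongest possible connection.
magnitudes = rng.uniform(0.0, 0.1, size=(n_pre, n_post))

# Each presynaptic "cell" is excitatory (+1) or inhibitory (-1), so all of
# its outgoing weights share one sign (assumed ~80/20 excitatory/inhibitory).
cell_sign = rng.choice([1.0, -1.0], size=(n_pre, 1), p=[0.8, 0.2])

weights = cell_sign * magnitudes  # signed, weak, random initial weights
```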
Keep in mind that this randomization process has a heavy spatial component in real brains. Neurons are small, so they make these connections to nearby neurons roughly 10-200 microns away. The long-distance connections between brain regions are mostly "programmed in" via genes. In most ANNs, there is no distance-based aspect to the initialization of connection weights (although convolutional ANNs implicitly perform something like distance-based wiring via their sliding window).
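For contrast, here is a toy sketch of what a distance-aware initialization might look like; this is not something standard ANN initializers do. The 2D layout, the exponential falloff, and the 100-micron length scale are all made-up parameters chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scatter "neurons" across a 1 mm x 1 mm patch (positions in microns).
n = 500
positions = rng.uniform(0.0, 1000.0, size=(n, 2))

# Pairwise Euclidean distances between all neurons.
dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

# Connection probability decays with distance (assumed 100-micron scale),
# so most synapses land on neighbors tens to hundreds of microns away.
p_connect = np.exp(-dist / 100.0)
np.fill_diagonal(p_connect, 0.0)  # no self-connections

connected = rng.random((n, n)) < p_connect
weights = np.where(connected, rng.uniform(0.0, 0.1, size=(n, n)), 0.0)
```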
There is also the phenomenon of synaptic pruning, which might be analogous to creating many low-weight connections in an ANN initially (birth), training it for some number of epochs (adolescence), and then removing most of the low-weight connections (consolidation in adulthood).
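A toy NumPy sketch of that three-phase analogy; the "training" step here is just a stand-in that strengthens a random subset of weights, and the 80% pruning fraction is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Birth": many weak connections.
weights = rng.uniform(0.0, 0.1, size=(256, 128))

# "Adolescence": stand-in for training; ~10% of connections strengthen.
strengthened = rng.random(weights.shape) < 0.1
weights += strengthened * rng.exponential(0.05, size=weights.shape)

# "Consolidation": prune the weakest 80% of connections by magnitude.
threshold = np.quantile(np.abs(weights), 0.8)
weights = np.where(np.abs(weights) >= threshold, weights, 0.0)

print(f"kept {np.count_nonzero(weights) / weights.size:.0%} of connections")
```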