In the paper Exploiting Open-Endedness to Solve Problems Through the Search for Novelty (2008), in which Joel Lehman and Kenneth O. Stanley introduced the novelty search approach, the authors write
Thus this paper introduces the novelty search algorithm, which searches with no objective other than continually finding novel behaviors in the search space.
and
instead of searching for a final objective, the learning method is rewarded for finding any instance whose functionality is significantly different from what has been discovered before
and
The novelty of a newly generated individual is computed with respect to the behaviors (i.e. not the genotypes) of an archive of past individuals whose behaviors were highly novel when they originated
Therefore, the goal of novelty search is to find novel behaviors, not necessarily novel chromosomes (or genotypes).
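Concretely, the paper scores a new individual by the sparseness of its behavior: roughly, the average distance from its behavior to the k nearest behaviors among the current population and the archive. A minimal sketch (the function names are my own, and k is a tunable parameter):

```python
import math

def behavior_distance(a, b):
    # Euclidean distance between two behavior characterizations
    return math.dist(a, b)

def sparseness(behavior, others, k=15):
    # Average distance from `behavior` to its k nearest neighbors among
    # `others` (the current population plus the archive of past novel
    # behaviors). A high value means the behavior lies in a sparse,
    # little-explored region of behavior space.
    dists = sorted(behavior_distance(behavior, o) for o in others)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)
```

Individuals whose sparseness exceeds some threshold are then added to the archive, so the measure of "what has been discovered before" grows over time.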
In the experiments reported in the novelty search paper, the authors use neural networks to represent the policy that controls a robot navigating a maze, and they evolve these networks with NEAT (a neuroevolution method) guided by a novelty metric rather than the fitness metric used in the original NEAT. In the same experiments section, Lehman and Stanley write
Thus, for the maze domain, the behavior of a navigator is defined as its ending position. The novelty metric is then the Euclidean distance between the ending positions of two individuals. For example, two robots stuck in the same corner appear similar, while one robot that simply sits at the start position looks very different from one that reaches the goal, though they are both equally viable to the novelty metric.
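For the maze domain, then, the behavior characterization is just an (x, y) ending position, so the novelty metric reduces to a plain Euclidean distance. A small illustration (the coordinates are made up):

```python
import math

def novelty_distance(end_a, end_b):
    # Euclidean distance between two navigators' ending positions
    return math.dist(end_a, end_b)

corner_a, corner_b = (1.1, 1.0), (1.0, 1.1)  # two robots stuck in the same corner
start, goal = (0.0, 0.0), (9.0, 9.0)         # hypothetical start and goal cells

# The two stuck robots appear similar (small distance); a robot sitting at the
# start and one reaching the goal appear very different (large distance),
# regardless of which of them is "better" at solving the maze.
print(novelty_distance(corner_a, corner_b))
print(novelty_distance(start, goal))
```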
Therefore, the evolution of the neural networks, which represent the controllers, is not necessarily guided by the novelty of (the architecture of) the neural networks but by the novelty of the behavior those networks generate, even though novel neural networks might correspond to, or lead to, novel behaviors.
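One generation of this scheme might be sketched as follows. This is my own simplified illustration, not the paper's NEAT implementation: tuples of numbers stand in for network genomes, behavior_of stands in for actually running the controller in the maze, and the archive threshold is arbitrary.

```python
import math

def behavior_of(genome):
    # Toy stand-in for running a controller in the maze: here the "behavior"
    # is just a 2-D point derived from the genome. In the paper it would be
    # the ending position produced by a NEAT-evolved network controller.
    return (genome[0] % 10.0, genome[1] % 10.0)

def sparseness(b, reference, k=3):
    # Average distance to the k nearest neighbors; `reference` includes b
    # itself, so skip the zero self-distance at index 0.
    dists = sorted(math.dist(b, other) for other in reference)
    return sum(dists[1:k + 1]) / k

def novelty_search_step(population, archive, k=3, add_threshold=2.0):
    behaviors = [behavior_of(g) for g in population]
    reference = behaviors + archive
    scores = [sparseness(b, reference, k) for b in behaviors]
    # Sufficiently novel behaviors (not genotypes) enter the archive
    for b, s in zip(behaviors, scores):
        if s > add_threshold:
            archive.append(b)
    # Select parents by novelty score instead of objective fitness
    ranked = sorted(zip(scores, population), reverse=True)
    parents = [g for _, g in ranked[: len(population) // 2]]
    return parents, archive
```

Note that only behaviors are stored in the archive and only novelty scores drive selection; the genotypes are never compared to each other directly.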