A robust ML model is one that captures patterns that generalize well in the face of the kinds of small changes that humans expect to see in the real world.
In the narrowest sense, robustness means generalizing well from a training set to a test or validation set, but the term also gets used for models that generalize to, e.g., changes in the lighting of a photograph, the rotation of objects, or the introduction of small amounts of random noise.
Adversarial machine learning is the process of finding examples that break an otherwise reasonable-looking model. A simple example: if I give you a dataset of cat and dog photos in which cats are always wearing bright red bow ties, your model may learn to associate bow ties with cats. If I then give it a picture of a dog with a bow tie, your model may label it as a cat. Adversarial machine learning also often involves finding specific perturbations (carefully chosen noise) that can be added to inputs to confound a model.
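As a concrete sketch of how such perturbations are found, here is the standard fast gradient sign method (one common attack, not the only one), written for a PyTorch classifier; `model`, `x`, and `y` are placeholders for your own network, input batch, and labels:

```python
import torch

def fgsm_example(model, x, y, epsilon=0.03):
    """Craft an adversarial version of input x using the fast gradient
    sign method: nudge every pixel slightly in the direction that
    increases the model's loss on the true label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # A small step along the sign of the loss gradient is often enough to
    # flip the prediction while the image looks unchanged to a human.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()
```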
Saying a model is robust, then, essentially means it is difficult to find adversarial examples for it. Usually this is because the model has learned desirable correlations (e.g., cats have a different muzzle shape than dogs) rather than spurious ones (cats wear bow ties; pictures containing cats are 0.025% more blue than those containing dogs; dog pictures more often contain humans; etc.).
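One crude empirical probe of this (far weaker than a targeted attack, but cheap) is to check whether predictions stay stable under small random perturbations. A minimal sketch, again assuming a PyTorch classifier `model` and input batch `x`:

```python
import torch

def prediction_stability(model, x, noise_std=0.05, trials=20):
    """Fraction of small random perturbations of x that leave the model's
    predicted labels unchanged. Random noise is much weaker than an
    adversary, so this is a sanity check, not a robustness guarantee."""
    with torch.no_grad():
        base = model(x).argmax(dim=-1)
        same = 0
        for _ in range(trials):
            noisy = x + noise_std * torch.randn_like(x)
            same += int((model(noisy).argmax(dim=-1) == base).all())
    return same / trials
```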
Approaches like GANs exploit this idea directly by training the model on both real data and data crafted by an adversary (the generator) to resemble the real data. In this sense, GANs are an attempt to create a robust discriminator.
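A minimal sketch of the discriminator's side of that training loop, assuming a PyTorch discriminator `disc`, generator `gen`, and optimizer `opt` (all placeholders for your own models):

```python
import torch

def discriminator_step(disc, gen, real_batch, opt, latent_dim=100):
    """One discriminator update: reward it for scoring real data high
    and generator-produced (adversarial) data low."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    # Detach the fakes so this step only updates the discriminator.
    fake_batch = gen(torch.randn(real_batch.size(0), latent_dim)).detach()
    loss = (bce(disc(real_batch), torch.ones(real_batch.size(0), 1))
            + bce(disc(fake_batch), torch.zeros(real_batch.size(0), 1)))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```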