To check whether the visitor of a page is a human and not an AI, many websites and applications use a verification challenge known as a CAPTCHA. These tasks are intended to be simple for people but unsolvable for machines.
In practice, however, some of the text- and image-recognition challenges are difficult even for humans, such as deciphering badly distorted, overlapping digits, or deciding whether a bus is present in an image tile.
As far as I understand, robustness against adversarial attacks remains an unsolved problem. Moreover, adversarial perturbations generalize well and transfer across different architectures (according to https://youtu.be/CIfsB_EYsVI?t=3226). This phenomenon affects not only deep neural networks but also simpler linear models.
Given this state of affairs, it seems like a good idea to build CAPTCHAs from such adversarial examples: the classification task would remain simple for a human, without the need for several attempts to pass the test, but hard for an AI.
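To make the idea concrete, here is a minimal sketch of how one might perturb a clean image before serving it as a CAPTCHA, using the Fast Gradient Sign Method (FGSM). The choice of model (a pretrained ResNet-18), the epsilon value, and the placeholder inputs are my own assumptions for illustration, not part of any existing CAPTCHA system:

```python
import torch
import torch.nn as nn
import torchvision

# Any pretrained classifier would do; ResNet-18 is used here only as an example.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_captcha(image, label, epsilon=0.03):
    """Perturb `image` (1x3xHxW, values in [0, 1]) so a classifier is likely
    to misclassify it, while the change stays nearly invisible to a human."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss (FGSM).
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: take a clean image with a known label and serve the
# perturbed version as the CAPTCHA image.
clean = torch.rand(1, 3, 224, 224)  # placeholder for a real image
target = torch.tensor([0])          # placeholder for its true label
captcha_image = fgsm_captcha(clean, target)
```

Because such perturbations tend to transfer across architectures, the hope would be that the served image fools not just this particular model but also whatever model an attacker's solver uses.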
There is some research in this area and a few proposed solutions, but they do not seem to be widely adopted.
Are there other problems with this approach, or do website (application) owners simply prefer not to rely on it?