I was asked an interesting question today by a student in a cybersecurity and information assurance program who is getting spammed by chatbots on Snapchat. He's tried many conventional means of blocking them, but he's still getting overwhelmed:
- Theoretically, are there lines of code that could disrupt processing, such as commands or syntactic symbols?
My sense is no: the functions should be partitioned so that linguistic data never executes (see the sketch after this list). But who knows.
- Many programmers are sloppy.
- I've had friends in video game QA produce controller inputs that programmers claimed were impossible, until demonstrated.
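
To make that first question concrete, here's a minimal sketch in Python of the data/code partition my answer assumes, next to the sloppy handler the question worries about. Everything here is invented for illustration; no real chatbot is claimed to work this way.

```python
def sloppy_handle(message: str) -> str:
    # BAD: treats linguistic input as executable code, so a
    # message like "__import__('os').getcwd()" runs as Python.
    return str(eval(message))  # never do this with user input

def partitioned_handle(message: str) -> str:
    # Partitioned: the message stays inert data. The program
    # only inspects the string, never executes it, so no command
    # or syntactic symbol in the text can reach the interpreter.
    if "price" in message.lower():
        return "Our plans start at $10/month."
    return "Sorry, I didn't catch that."

if __name__ == "__main__":
    payload = "__import__('os').getcwd()"
    print(partitioned_handle(payload))  # handled as plain text
    # sloppy_handle(payload) would actually execute the payload
```

In the partitioned version there is simply no path from the user's string to anything that executes, which is why my guess is no; the QA anecdote above is the caveat.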
- Theoretically, is it possible to "break" a chatbot in the sense of the Voight-Kampff test thought experiment?
This was, of course, popularized by one of the most famous films on AI, Blade Runner, adapted from one of the most famous books on the subject, Do Androids Dream of Electric Sheep?, and extended more recently by Westworld. In these contexts, it's a psychological test designed to send the automata into loops or errors.
My question here is not about "psychology" as in those popular media treatments, but about linguistics:
- Theoretically, are there linguistic inputs that could send an NLP algorithm into an infinite loop or produce errors that halt computation?
My guess is no, all the way around, but it still seems a question worth asking (one classical edge case is sketched below).
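
For what it's worth, classical text processing does have one well-documented case where a purely textual input can stall computation: catastrophic backtracking in regular expressions, often called ReDoS. Below is a minimal sketch, assuming a hypothetical pipeline that runs user text through Python's backtracking `re` engine with a carelessly written pattern; the pattern and the timing loop are mine, not anything known to be in a real chatbot.

```python
import re
import time

# A textbook catastrophic-backtracking pattern: the nested
# quantifiers in "(a+)+$" give the engine exponentially many
# ways to split a run of 'a's between the inner and outer
# group once the overall match is doomed to fail.
EVIL = re.compile(r"(a+)+$")

for n in range(16, 27, 2):
    text = "a" * n + "!"             # almost matches, then fails
    start = time.perf_counter()
    EVIL.match(text)                 # returns None, but slowly
    print(f"n={n:2d}  {time.perf_counter() - start:.3f}s")

# Each +2 on n roughly quadruples the runtime; a handful more
# characters and the call effectively never returns. A purely
# "linguistic" input has halted useful computation.
```

This is exactly why linear-time engines like RE2 (and Go's regexp package) refuse to support backtracking constructs. Whether any given chatbot's pipeline is exposed this way is anyone's guess, but it at least shows the question isn't empty.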