
I have tried two chatbots, Cleverbot and Alan, and I got disappointing results.

me: Socrates is a man

bot: blah blah (common bot nonsense instead of an "ok")

me: Who is a man?

alan1: The people that write my answers haven't provided an answer for this.

Another example, from the mediocre Cleverbot:

me: Socrates is the name of my dog.

Cleverbot: I don't know!

me: What is the name of my dog?

Cleverbot: That's a nice name.

Another example.

me: Socrates is a man.

Cleverbot: When does the narwhal bacon?

me: Who is a man?

Cleverbot: Men are man.

And they dare name this thing "clever"...

So, is there any chatbot that can actually answer this straightforward question?

nbro

2 Answers


I think the issue here is that the chatbots you're using aren't very good at "short-term memory". What I mean by that is that these bots construct responses that are slowly and incrementally tuned according to the overall usage of the chatbot, across every user. The bot responds to each message the way a brand-new user would expect it to. Cleverbot's "Men are man" reply illustrates this: it's making that response based solely on your single most recent message.

Instead, you are looking for a bot that focuses more on persistent memory of the individual conversation. The problem is that you're now almost asking for a natural-language parser, a big problem many people are working on and something that's years away from existing as robustly as you suggest.

The chatbot not only has to recognize the words 'Socrates', 'name', and 'dog', but also that in this sentence it's the dog's name that is Socrates. That's a lot of information to extract beyond the words themselves. It's also why, from a server/implementation standpoint, the stateless approach above is much easier to program: every message is just a query to the server, with no need to maintain state, that is, a memory of the conversation.
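To make the stateless-vs-stateful distinction concrete, here is a minimal sketch of a "stateful" bot that keeps a per-conversation memory of "X is a Y" facts and answers "Who is a Y?" from it. This is purely illustrative: the class and regex patterns are my own invention, not how Cleverbot or Alan actually work, and a real system would need far more than pattern matching.

```python
import re

class StatefulBot:
    """Toy bot that remembers 'X is a Y' facts within one conversation."""

    def __init__(self):
        # The conversation's memory: category -> subject (e.g. "man" -> "Socrates").
        self.facts = {}

    def reply(self, message):
        text = message.strip()
        told = re.match(r"(\w+) is a (\w+)\.?$", text, re.IGNORECASE)
        asked = re.match(r"who is a (\w+)\?$", text, re.IGNORECASE)
        if told:
            subject, category = told.groups()
            self.facts[category.lower()] = subject
            return "OK."
        if asked:
            subject = self.facts.get(asked.group(1).lower())
            return f"{subject} is." if subject else "I don't know."
        return "I don't understand."

bot = StatefulBot()
print(bot.reply("Socrates is a man"))  # OK.
print(bot.reply("Who is a man?"))      # Socrates is.
```

A stateless bot, by contrast, would see only "Who is a man?" with no `facts` dictionary to consult, which is exactly the failure mode in the transcripts above. Of course, this sketch falls over the moment the phrasing changes (e.g. "Socrates is the name of my dog"), which is where real natural-language parsing becomes necessary.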

The chatbots can't possibly get enough information from one person to learn how to speak and respond, so they 'crowd-source' that training data from all their users. But that also means Cleverbot (or any chatbot of similar caliber) won't respond by parsing the meaning of what you're asking.

Taking this even further, one can consider whether such a program would be Turing-complete. Supposing we had a chatbot like the one you're suggesting, we could perhaps show equivalence to a Turing machine, or even show that it could decide something like the halting problem. Off the top of my head, I imagine the argument would roughly be showing that you could decide halting given initial conditions. E.g. given "Socrates is a man" and "All men die", can we decide whether the chatbot will ever deduce that Socrates dies?

I'll work on a formal proof of the latter and post it if it works out.

Avik Mohan
  • My opinion is that this type of algorithm cannot even be called "artificial stupidity", because stupidity requires a tiny amount of intelligence to be present, and there is none in them. –  Mar 31 '17 at 22:31
  • Um, I think I agree, but are you sure this is the right thread, @johnAm? – Avik Mohan Mar 31 '17 at 22:48
  • I'm not sure I understand your last comment. Anyway I'm reading now the essay "Understanding natural language" by Terry Winograd. It's a great work... Thanks for your answer. –  Mar 31 '17 at 23:44

Just to draw your attention to a specific problem. In my opinion, all your "Who is a man?" questions (aside from the "Socrates is my dog's name" one) are ill-posed; at first glance, not just a bot but even a human shouldn't answer them, in spite of what you told it before ("Socrates is a man")... Yes, it might know that Socrates, John Kennedy, or Leonardo DiCaprio is a man, but you simply "expected" it to give the answer you wanted because you mentioned it earlier.

During a conversation, humans can catch and understand grammatical, semantic, or logical (the most important kind) mistakes in the other person's questions, and still give correct answers in return; bots, however, depend mostly on the questions having correct grammar and correct semantics.

Some bots may appear that can cope with slightly incorrect grammar, but nowadays even Google Search's AI struggles to correct a wrong input beyond basic grammar or basic semantics. Yet you supposed that chatbots could "understand" a semantically incorrect question, and you rant at them heavily for no reason and call it stupidity or whatever.

T.Todua