24

On the Wikipedia page about AI, we can read:

Optical character recognition is no longer perceived as an exemplar of "artificial intelligence", having become a routine technology.

On the other hand, the MNIST database of handwritten digits is specifically designed for training and testing classifiers, such as neural networks, and for comparing their error rates (see: Classifiers).
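For context, here is a minimal sketch of the kind of train/test/error-rate workflow MNIST is built for. It assumes scikit-learn is installed and that the dataset can be downloaded via openml.org; the logistic-regression baseline is only illustrative:

```python
# Minimal sketch: train a simple classifier on MNIST and report its error rate.
# Assumes scikit-learn is available; fetch_openml downloads the dataset.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel intensities to [0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10000, random_state=0)

clf = LogisticRegression(max_iter=100)  # a quick baseline, not state of the art
clf.fit(X_train, y_train)

error_rate = 1.0 - clf.score(X_test, y_test)
print(f"Test error rate: {error_rate:.3f}")
```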

So, why does the above quote state that OCR is no longer an example of AI?

nbro
kenorb

3 Answers

25

Whenever a problem becomes solvable by a computer, people start arguing that it does not require intelligence. John McCarthy is often quoted: "As soon as it works, no one calls it AI anymore" (Referenced in CACM).

One of my teachers in college said that in the 1950s, a professor was asked what he thought would count as intelligence in a machine. The professor reputedly answered that if a vending machine gave him the right change, that would be intelligent.

Later, playing chess was considered intelligent. However, computers can now defeat grandmasters at chess, and people are no longer saying that it is a form of intelligence.

Now we have OCR. It's already stated in another answer that our methods do not have the recognition facilities of a 5-year-old. As soon as this is achieved, people will say "meh, that's not intelligence, a 5-year-old can do that!"

A psychological bias, a need to assert that we are somehow superior to machines, underlies this.

16

Although OCR is now a mainstream technology, it remains true that none of our methods genuinely have the recognition facilities of a 5-year-old (claimed success with CAPTCHAs notwithstanding). We don't know how to achieve this using well-understood techniques, so OCR should still rightfully be considered an AI problem.

To see why this might be so, it is illuminating to read the essay "On seeing A's and seeing AS" by Douglas Hofstadter.

With respect to a point made in another answer, the agent framing is a useful one insofar as it motivates success in increasingly complex environments. However, there are many hard problems (e.g. Bongard problems) that don't need to be stated in such a fashion.

NietzscheanAI
4

I'm not sure whether predicting MNIST labels can really be considered an AI task. AI problems can usually be framed in the context of an agent acting in an environment. Neural nets, and machine learning techniques in general, do not have to deal with this framing; classifiers, for example, learn a mapping between two spaces. Though one could argue that you can frame OCR/image classification as an AI problem - the classifier is the agent, each prediction it makes is an action, and it receives a reward based on its classification accuracy (see the sketch below) - this is rather unnatural and different from problems that are commonly considered AI problems.
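To make the contrast concrete, here is a hypothetical sketch of what that agent framing would look like in code. The ClassificationEnv class, the predict interface, the 0/1 reward, and the episode structure are all assumptions added for illustration, not part of any standard formulation:

```python
import random

# Hypothetical agent-environment framing of digit classification.
# "dataset" stands in for labelled MNIST examples; the classifier plays
# the role of the agent, each prediction is an action, and the reward is
# 1 for a correct label and 0 otherwise.
class ClassificationEnv:
    def __init__(self, dataset):
        self.dataset = dataset  # list of (image, label) pairs

    def step(self, classifier):
        image, label = random.choice(self.dataset)
        action = classifier.predict(image)   # the "action" is a predicted label
        return 1.0 if action == label else 0.0

def evaluate(classifier, env, episodes=1000):
    # The average reward is just classification accuracy in disguise,
    # which is why this framing feels unnatural for OCR.
    return sum(env.step(classifier) for _ in range(episodes)) / episodes
```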

Jacky