14

Currently, within the AI development field, the main focus seems to be on pattern recognition and machine learning, where learning means adjusting internal parameters based on a feedback loop.

Maslow's hierarchy of needs is a theory in psychology, proposed by Abraham Maslow, which claims that individuals' most basic needs must be met before they become motivated to pursue higher-level needs.

  • What could possibly motivate a machine to act?
  • Should a machine have some sort of DNA-like structure that would describe its hierarchy of needs (similar to Maslow's theory)?
  • What could be the fundamental needs of a machine?
Archana David
Aleksei Maide
    Interesting question, and welcome to AI! (I have a few thoughts on the subject, related to game theory, and other contributors have talked about [goal oriented learning](http://cognet.mit.edu/book/goal-driven-learning) in relation to algorithms.) – DukeZhou Aug 27 '17 at 20:25

6 Answers

5

The current method of implementing motivation is some kind of artificial reward. DeepMind's DQN, for example, is driven by the score of the game: the higher the score, the better. The AI learns to adjust its actions to earn the most points and therefore the most reward. This is called reinforcement learning. The reward motivates the AI to adapt its actions, so to speak.

In more technical terms, the AI wants to maximize utility, which depends on the implemented utility function. In the case of DQN, this means maximizing the score in the game.
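
As a rough sketch of what that looks like in practice (this is not DeepMind's actual DQN; it is a minimal tabular Q-learning toy, and the one-dimensional "game" is invented for illustration), reward-driven learning can be as simple as:

```python
import random

# A minimal sketch of reward-driven learning (tabular Q-learning).
# The toy "game" is hypothetical: states 0..4 on a line; reaching the
# rightmost state yields the only reward.

N_STATES = 5
ACTIONS = [-1, +1]          # move left / move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

# Q[state][action_index] estimates expected future reward ("utility").
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: reward 1.0 only when the goal (last state) is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current utility estimates.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        next_state, reward = step(state, ACTIONS[a])
        # The reward is the only "motivation": it pulls Q-values upward.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, "move right" dominates in every non-terminal state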

The human brain functions in a similar fashion, although a little more complex and often not as straightforward. We humans usually try to adjust our actions to produce a high output of dopamine and serotonin, which is in a way similar to the reward used to control AIs during reinforcement learning. The human brain learns which actions produce the largest amounts of those substances and finds strategies to maximize the output. This is, of course, a simplification of a complex process, but you get the picture.

When you talk about motivation, please don't mix it up with consciousness or qualia. Those are not required for motivation at all. If you want to discuss consciousness and qualia in AI, that's a totally different ball game.

A child isn't curious for the sake of curiosity. It gets positive reinforcement when exploring because the utility function of the child's brain rewards exploration by releasing rewarding neurotransmitters. So the mechanism is the same. Applying this to AI means defining a utility function that rewards new experiences. There is no inner drive without some kind of reinforcing reward.
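
One hedged way to express such a utility function in code (the count-based novelty bonus and its constants below are illustrative, not a standard implementation) is to pay the agent for visiting states it has rarely seen:

```python
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def reward_with_curiosity(state, extrinsic_reward, bonus_scale=0.5):
    """Utility = task reward + a novelty bonus that decays as a state grows familiar."""
    visit_counts[state] += 1
    novelty_bonus = bonus_scale / math.sqrt(visit_counts[state])
    return extrinsic_reward + novelty_bonus

# The first visit to a state pays a large bonus; the hundredth pays almost nothing.
print(reward_with_curiosity("room_A", 0.0))   # ~0.5
for _ in range(98):
    reward_with_curiosity("room_A", 0.0)
print(reward_with_curiosity("room_A", 0.0))   # ~0.05
```

With such a bonus, "exploring" is rewarding in itself, which mirrors the claim above: the inner drive is still a reinforcing reward, just an intrinsic one.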

nbro
Demento
  • In regard to the edit, I think a good example of "a utility function that rewards new experience" would be the novelty-search fitness functions proposed by Ken Stanley for use with his NEAT algorithm. – nickw Oct 07 '19 at 20:18
5

This is an interesting question actually.

There's a quite plausible idea about where curiosity can originate in the book "On Intelligence" by Jeff Hawkins and Sandra Blakeslee.

It's based on the following claims:

  • The mind creates its own model of the world it exists in.

  • It makes predictions about everything all the time (actually Jeff Hawkins states that this is the main characteristic of intelligence).

  • When a prediction is not borne out by the corresponding behavior of the world, that thing becomes very interesting to the mind (the model is wrong and should be corrected) and demands more attention.

For example, when you look at a person's left eye, your brain predicts that it belongs to a human face and that there should be a second eye to its right. You look to the right and see a... nose! What a surprise! It now takes all your attention, and you are motivated to make more observations of this strange thing that did not fit into your model.

So I'd say that an AI might act deterministically according to its model, or behave randomly, as long as the predictions it makes about the world hold. But once some prediction is broken, the AI gains the motivation to error-correct its model.

In the simplest case, a machine starts in total randomness, just doing everything it can with its outputs. While it has no model (or a random one), whenever it detects some kind of order or a repeated pattern it becomes "interested" and adds it to the model. After a while, the model becomes more sophisticated, making more complex predictions and detecting higher-level mistakes. Slowly it learns what to do to observe something interesting to it, instead of just remembering everything.
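
As a hedged toy sketch of that idea (the repeating sequence and the one-step transition "model" below are invented for illustration, not from the book), prediction error can serve directly as the interest signal:

```python
# A toy sketch of prediction-driven curiosity: a running model predicts the next
# observation, and surprise (prediction error) both directs "attention" and
# triggers a model update. The world here is a hypothetical repeating sequence.
world = [1, 2, 3, 1, 2, 3, 1, 2, 9, 1, 2, 3]   # a "nose" hides at index 8

model = {}            # learned transitions: last observation -> predicted next
last = None
for t, obs in enumerate(world):
    if last is not None:
        predicted = model.get(last)
        if predicted is not None and predicted != obs:
            # Broken prediction: this is where the motivation to learn appears.
            print(f"t={t}: expected {predicted}, saw {obs} -- interesting! updating model")
        model[last] = obs   # error-correct the model toward what was observed
    last = obs
```

While the sequence repeats, nothing is printed; only the anomaly (and the later correction back to the old pattern) draws "attention".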

Ivan Bogush
  • Thank you for the contribution! I have come to basically the same conclusions... now thinking of a way to implement it :) – Aleksei Maide Sep 15 '17 at 09:38
  • This answer makes an important point. Error correction on prediction models would provide a great incentive for an intelligent AI to learn and act in a curious manner. – Seth Simba Jan 16 '18 at 09:21
3

I asked Professor Richard Sutton a similar question in the first lecture of the reinforcement learning course. It seems that there are different ways to motivate a machine. In fact, machine motivation seems to me like a dedicated field of research.

Typically, machines are motivated by what we call an objective function, a cost function, or a loss function. These are different names for the same concept. Sometimes, it is denoted by

$$L(a)$$

The goal of the machine is then to solve either a minimization problem, $\min_a L(a)$, or a maximization problem, $\max_a L(a)$, depending on the definition of $L$.
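
As a minimal numeric sketch, assuming an illustrative loss $L(a) = (a - 3)^2$ (not any particular system's objective), the machine can solve the minimization problem by gradient descent:

```python
# "Motivation as minimization": gradient descent on an illustrative
# loss L(a) = (a - 3)^2, whose minimum is at a = 3.
def L(a):
    return (a - 3.0) ** 2

def dL_da(a):          # derivative of the loss with respect to the action
    return 2.0 * (a - 3.0)

a, lr = 0.0, 0.1       # arbitrary starting action and learning rate
for _ in range(100):
    a -= lr * dL_da(a) # move against the gradient to reduce L(a)

print(a, L(a))         # a ≈ 3, L(a) ≈ 0
```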

nbro
A.Rashad
1

I've spent some time thinking about this in the context of games.

The problem with reward functions is that they generally involve weighting nodes, which is useful but ultimately carries no material meaning.

Here are two materially meaningful rewards:

COMPUTATIONAL RESOURCES

Consider a game where an AI is competing not for points, but for processor time and memory.

The better the algorithm performs at the game, the more memory and processing it has access to. This has a practical effect: the more resources available to the automaton, the stronger its capabilities (i.e., its rationality is less bounded in terms of the time and space available to make a decision). Thus the algorithm would be "motivated" to prevail in such a contest.
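
A hedged sketch of the idea (the game and the budget rules below are entirely hypothetical): two players whose search budgets, i.e. their compute, grow with wins and shrink with losses, so stronger play literally buys more capability:

```python
import random

# Hypothetical "compute as the prize" tournament: a player's search budget
# (the number of candidate moves it may evaluate) is the reward itself.
def play_round(budget):
    """Toy game: evaluate `budget` random candidates, keep the best score."""
    return max(random.random() for _ in range(budget))

budget_a, budget_b = 4, 4
for round_no in range(20):
    if play_round(budget_a) > play_round(budget_b):
        budget_a, budget_b = budget_a + 1, max(1, budget_b - 1)
    else:
        budget_b, budget_a = budget_b + 1, max(1, budget_a - 1)

print(budget_a, budget_b)  # the winner's budget, and thus its strength, compounds
```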

ENERGY

Any automaton with a sufficient degree of "self-awareness", here specifically referring to the knowledge that it requires energy to process, would be motivated to self-optimize its own code to eliminate unnecessary flipping of bits (unnecessary energy consumption).

Such an algorithm would also be motivated to ensure its power supply so that it can continue to function.
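
A minimal sketch, assuming a hypothetical cost per operation (the task, target, and constants below are invented for illustration), of how an energy penalty could be folded into a utility function:

```python
# Energy as a motivation: add an energy penalty to the objective,
# so the agent prefers solutions that compute less.
def task_score(solution):
    """Illustrative task quality: how close the solution is to a target."""
    return -abs(solution - 42)

def utility(solution, ops_used, energy_cost_per_op=0.01):
    # The agent is "motivated" to conserve energy: wasted operations hurt utility.
    return task_score(solution) - energy_cost_per_op * ops_used

print(utility(42, ops_used=10))    # exact answer, cheap:     -0.1
print(utility(42, ops_used=5000))  # exact answer, wasteful: -50.0
```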

DukeZhou
1

I think we give ourselves too much credit by already referring to our algorithms and machines as actually thinking and acting on motivations. In my opinion, we still have a way to go before we can refer to a human creation as thinking, or as having motivations beyond basic physical ones.

By that I would say that a machine's or an AI algorithm's motivations are similar to a car engine's. Simple and basic: the "motivations" of a car engine to run are just the first and second laws of thermodynamics, namely the conservation of energy, the exchange between energy types, and the ever-increasing entropy of a closed system.

By having a very specific design, we can insert fuel into the system and create a lot of potential energy, which will "motivate" the engine to transform it into other types of energy (heat, sound, etc.).

An AI algorithm is exactly the same; it's just that now we're playing with electricity, through multiple levels of abstraction, from electrons moving through wires all the way up to your Python deep learning algorithm training to recognize images of dogs. The concept is similar, in my opinion; for now, we do not have machines complex enough to have higher-level motivations, or to develop them by themselves.

As the other answers pointed out, specific algorithms, namely reinforcement learning, try to emulate those "needs" and "motivations", but in the end, in my opinion, they are for now still just emulations. As with other deep learning algorithms, the same basic concept described at the beginning applies: trying to minimize an error while emulating concepts we know, such as conservation of energy, following the path of least resistance, and obeying the laws of entropy.

0

Good versus bad? From an imaging standpoint, humans and other biological beings tend to notice details that were not "imagined". A "good" feedback learning loop might refine the correctness of what is imagined: directing effort toward reducing the number and size of discrepancies between the imagined (or predicted) environment and perceived reality, as measured against "beneficial" outcomes.

I'm not sure what an AI would regard as good or bad in the abstract. In my limited consideration of imagination in humans, it seems most useful for recognizing things that carry more meaning than the general background. As the imagination becomes better at predicting a more "real" outcome, it becomes more useful to the individual, both in predicting correct outcomes and in flagging things that diverge from the expected. Somewhat simplistic, I realize, but intelligence might be as simple as refining the ability to imagine a result and judging which of the discrepancies are most meaningful.

Chad Neff
  • Good/bad are entirely human concepts; humanity lived for millions of years without them, so they may not be useful words in evaluating... an "urge for action". These concepts are a higher level of abstraction. – Aleksei Maide Sep 16 '22 at 12:13