
I'm a little bit stuck:

I implemented an AI with GOAP (Goal-Oriented Action Planning, http://alumni.media.mit.edu/~jorkin/gdc2006_orkin_jeff_fear.pdf) for a simulation game. That works fine.

Now I want the agents to be able to cooperate (e.g. perform actions together). What is the best AI design in this case so that the GoapActions stay loosely coupled?

Should they plan together (and what would the "world state" be in that case)? Or should they share their plans?

Example:

Agent1:
  World state: isLonely = true
  Goal: isLonely = false
  Plan: AskAgent2ToTalk -> TalkToAgent2

Agent2:
  World state: hasWood = false
  Goal: hasWood = true
  Plan: GetAxe -> ChopWood -> BringWoodToSupply

How do I get to this constellation?

Agent1 Plan: TalkToAgent2
Agent2 Plan: TalkToAgent1 -> GetAxe -> ChopWood -> BringWoodToSupply
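
To make the target constellation concrete, here is a minimal sketch of the two agents as plain data (the dictionary layout and key names are my own illustration; a real GOAP planner would produce the plans rather than have them hard-coded):

    # Illustrative data only; in a real implementation the plans
    # would come out of the GOAP planner, not be written by hand.
    agent1 = {
        "world_state": {"isLonely": True},
        "goal":        {"isLonely": False},
        "plan":        ["AskAgent2ToTalk", "TalkToAgent2"],
    }

    agent2 = {
        "world_state": {"hasWood": False},
        "goal":        {"hasWood": True},
        "plan":        ["GetAxe", "ChopWood", "BringWoodToSupply"],
    }

    # Desired outcome after the agents coordinate:
    #   agent1["plan"] == ["TalkToAgent2"]
    #   agent2["plan"] == ["TalkToAgent1", "GetAxe", "ChopWood", "BringWoodToSupply"]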

Also, if they are talking and one of the agents is interrupted (e.g. by an attacking enemy), the other agent must know that its TalkToAgent2 action has ended.

james
  • What do you mean by "How do I get to this constellation?" Have you set up the environment and the actions? – k.c. sayz 'k.c sayz' Aug 19 '17 at 12:11
  • Yes, the environment and the actions are set up (the simulation works with multiple agents, but they do not cooperate at the moment). The question is how agent1 should alter agent2's plan so that agent2 gets TalkToAgent1 as its current action (if and only if agent2 wants to talk), and after that how agent1 knows that agent2 wants to talk with agent1 (i.e. has TalkToAgent1 as its current action). I hope my explanation is clear now. – james Aug 19 '17 at 12:57
  • You shouldn't expect to be able to explicitly program sequences of actions that achieve some goal state. The point of this method of AI programming is that the agent "discovers" its own actions based on how it is embedded in the environment. If you really want a specific sequence of actions, you should fine-tune the environment around it to induce that behavior. Recall from the paper that cooperative action (flanking) is emergent behavior, not explicitly programmed into the game. tl;dr: I think the method of implementation needs to change, but I'm not sure in what way. – k.c. sayz 'k.c sayz' Aug 20 '17 at 21:07
  • I agree with you (see my detailed answer below). – james Aug 20 '17 at 22:42

1 Answer


This evening I got inspired by this paper: http://www.dphrygian.com/bin/David-Pittman-GOAP-Masters-Thesis.doc

(GOAP paired with the Command and Control Pattern)

What do you think about this solution?

Each goal has a relevance value (which depends on the agent's needs).

While agent1 is working on the AskAgent2ToTalk action, it only sends a goal recommendation to agent2. Concretely, that means: the agent sends the recommended goal together with a list of relevance modifiers, one per goal (in this case it could be a bonus for the SocialInteraction goal, where the value depends on the relationship between the two agents).

If agent2's most relevant goal is then the recommended goal, agent2 replans.
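
A minimal sketch of how such a recommendation could look in code (all class, attribute, and goal names here are my own assumptions, not taken from the thesis):

    from dataclasses import dataclass, field

    @dataclass
    class GoalRecommendation:
        # Message from agent1 to agent2: relevance bonuses plus the suggested goal.
        relevance_modifiers: dict  # e.g. {"SocialInteraction": 0.3}
        recommended_goal: str

    @dataclass
    class Agent:
        name: str
        goal_relevance: dict = field(default_factory=dict)  # base relevance per goal

        def best_goal(self, modifiers=None):
            # Pick the goal with the highest (base relevance + modifier).
            modifiers = modifiers or {}
            return max(self.goal_relevance,
                       key=lambda g: self.goal_relevance[g] + modifiers.get(g, 0.0))

        def receive_recommendation(self, rec):
            # Replan only if the recommendation wins against the agent's own goals.
            if self.best_goal(rec.relevance_modifiers) == rec.recommended_goal:
                self.replan(rec.recommended_goal)

        def replan(self, goal):
            print(f"{self.name} replans for goal: {goal}")

    # agent2 values wood slightly more than talking, but the relationship
    # bonus sent by agent1 tips the balance toward SocialInteraction.
    agent2 = Agent("agent2", {"GetWood": 0.6, "SocialInteraction": 0.5})
    agent2.receive_recommendation(
        GoalRecommendation({"SocialInteraction": 0.3}, "SocialInteraction"))

The appealing property of this design is that agent1 never edits agent2's plan directly; it only nudges agent2's goal relevances, which keeps the GoapActions loosely coupled as asked in the question.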

james