A Flexible Behavioral Planner in Real-Time

Etienne de Sevin
University of Paris 8 / INRIA
INRIA Paris-Rocquencourt, Mirages, BP 105, 78153 Le Chesnay Cedex, France
[email protected]

Abstract. Real-time behavioral planners often cannot adapt to changes in the environment or be interrupted when necessary. Although hierarchy is needed to obtain complex and proactive behaviors, additional mechanisms are required to give these hierarchies flexibility. This paper describes our motivational model of action, which integrates a hierarchical and flexible behavioral planner.

Keywords: behavioral planning, flexibility, hierarchy, real-time.

1 Introduction

To bridge the gap between reflective and embodied cognition and action, efficient action selection architectures should combine hierarchical and reactive systems. Our action selection model for autonomous virtual humans [1] is based on hierarchical classifier systems [2], which respond rapidly to environmental changes with their external rules and generate situated, coherent behavior plans with their internal rules. The virtual humans can therefore satisfy their motivations wherever they are. The model is also based on a free-flow hierarchy [3], which allows compromise and opportunistic behaviors and so increases the flexibility of hierarchical systems. Activity is propagated through the hierarchy and no choice is made before the lowest level, the action level. At each point in time, the model chooses the appropriate behavior according to the motivations and environmental information, as sketched below.
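The following minimal sketch illustrates the free-flow principle only: activity spreads down the hierarchy without any intermediate decision, and the single choice is made among the leaf actions, so a leaf fed by several motivations naturally produces a compromise behavior. The node names, activity values, and propagation rule are illustrative assumptions, not the implementation of [1].

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """A node of the free-flow hierarchy: motivations at the top,
    intermediate behaviors in the middle, concrete actions at the leaves.
    (Illustrative structure, not the hierarchy of [1].)"""
    name: str
    children: List["Node"] = field(default_factory=list)
    activity: float = 0.0


def propagate(node: Node, incoming: float) -> None:
    """Add the incoming activity to this node and pass the same
    contribution down to its children; no choice is made above
    the action level."""
    node.activity += incoming
    for child in node.children:
        propagate(child, incoming)


def select_action(actions: List[Node]) -> Node:
    """The only decision is taken at the lowest (action) level:
    pick the most activated leaf action."""
    return max(actions, key=lambda n: n.activity)


# Two motivations feed the same intermediate behavior, so the leaf action
# accumulates activity from both: a compromise behavior emerges.
walk_to_kitchen = Node("walk to kitchen")
go_to_kitchen = Node("go to kitchen", children=[walk_to_kitchen])
hunger = Node("hunger", children=[go_to_kitchen], activity=0.6)
thirst = Node("thirst", children=[go_to_kitchen], activity=0.5)

for motivation in (hunger, thirst):
    for child in motivation.children:
        propagate(child, motivation.activity)

print(select_action([walk_to_kitchen]).name)  # "walk to kitchen" (activity 1.1)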

2 Behavioral Planner

To reach the specific locations where the virtual human can satisfy its motivations, behavioral sequences of locomotion actions are generated according to environmental information and the internal context of the hierarchical classifier system [1]. These sequences can nevertheless be interrupted if another motivation becomes more urgent to satisfy, or opportunistically according to perceptions. Despite the presence of high-priority motivations, low-priority ones are still satisfied, depending on the parameters of the model. In the test shown in Figure 1, the "cleaning" motivation parameter was the highest and the "watering" one the lowest.
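The sketch below illustrates, in a simplified form, how such a sequence can be interrupted either by a more urgent motivation or opportunistically through perception. The class, the 1.5 urgency threshold, and the replanning stub are assumptions made for illustration; they do not reproduce the hierarchical classifier system of [1].

from typing import Dict, List, Optional, Set


class InterruptiblePlanner:
    """Toy sketch of an interruptible behavioral sequence (assumed design)."""

    def __init__(self) -> None:
        self.current_goal: Optional[str] = None
        self.plan: List[str] = []

    def replan(self, goal: str) -> List[str]:
        # Placeholder: a real planner would query the environment for a
        # path of locomotion actions leading to the goal location.
        return [f"walk towards {goal} location", f"satisfy {goal}"]

    def step(self, motivations: Dict[str, float], nearby: Set[str]) -> str:
        # Interrupt the current sequence if another motivation has become
        # markedly more urgent than the one being pursued ...
        most_urgent = max(motivations, key=motivations.get)
        if self.current_goal is None or (
            most_urgent != self.current_goal
            and motivations[most_urgent] > 1.5 * motivations.get(self.current_goal, 0.0)
        ):
            self.current_goal, self.plan = most_urgent, self.replan(most_urgent)
        # ... or opportunistically, when a resource for another (even
        # low-priority) motivation is perceived close by.
        for goal in nearby:
            if goal != self.current_goal and motivations.get(goal, 0.0) > 0.0:
                self.current_goal, self.plan = goal, self.replan(goal)
                break
        return self.plan.pop(0) if self.plan else "idle"


planner = InterruptiblePlanner()
print(planner.step({"cleaning": 0.9, "watering": 0.2}, nearby=set()))
print(planner.step({"cleaning": 0.9, "watering": 0.2}, nearby={"watering"}))
# The second call switches opportunistically to the low-priority "watering" goal.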


Fig. 1. Time-sharing of the twelve motivations (hunger, thirst, resting, toilet, sleeping, washing, cooking, cleaning, reading, communicating, exercising, watering) over 65,000 iterations

3 Conclusion

The results show that our behavioral planner is robust and flexible: it chooses appropriate sequences of actions to reach the locations where the virtual human can satisfy its motivations, taking into account distance as well as opportunistic and compromise behaviors [1]. None of the motivations is neglected, depending on the parameters. We plan to reuse the model for Embodied Conversational Agents (ECAs) [4].

Acknowledgments

This research was supported partly by the Network of Excellence HUMAINE IST-507422 and partly by the STREP SEMAINE IST-211486. The author would like to thank the designers for their involvement in the design of the simulation and Catherine Pelachaud for her helpful advice.

References

1. de Sevin, E.: An Action Selection Architecture for Autonomous Virtual Humans in Persistent Worlds. PhD thesis, VRLab, EPFL (2006)
2. Donnart, J.Y., Meyer, J.A.: A Hierarchical Classifier System Implementing a Motivationally Autonomous Animat. In: The 3rd Int. Conf. on Simulation of Adaptive Behavior. The MIT Press/Bradford Books (1994)
3. Tyrrell, T.: Computational Mechanisms for Action Selection. PhD thesis, Centre for Cognitive Science, University of Edinburgh (1993)
4. André, E., Pelachaud, C.: Interacting with Embodied Conversational Agents. In: Chen, F., Jokinen, K. (eds.) New Trends in Speech Based Interactive Systems. Springer, Heidelberg (2008)