
Generating and executing scenarios for a mobile robot Philippe Morignot, Mariette Soury, Patrick Hède, and Christophe Leroux

Abstract— This paper describes a way to generate scenarios with an A.I. task planner, to represent these scenarios in a high level language and to execute them on a mobile robot. This language, called ISEN, is an event-based finite state automata engine, which links high level states to executable functions controlling a robot. An A.I. task planner, CPT [21], is used to generate scenarios through a mapping between the A.I. planner’s instantiated operators and ISEN’s states. Callback functions, actually performing the prescribed actions on the robot, are described as handlers attached to states. Errors are handled through a specific, always active, state which branches to recovery states, not generated by the A.I. task planner. The implementation controls a mobile robot with a 6 DOF arm and a gripper, in scenarios for assisting handicapped/ageing persons in their apartment.

I. INTRODUCTION

Imagine a mobile robot dedicated to servicing a person in his apartment. Perhaps the person will ask for his pills, which requires the robot to go to the bathroom, deploy its arm, grasp the pill box, retract the arm, come back to the person’s location and hand the pill box over to the person. Perhaps the person will ask for a fruit juice, which requires the robot to go to the kitchen, deploy its arm, remove one by one the unwanted soda cans from a shelf until a can of orange juice is found, grasp it, retract its arm, come back to the person’s location and hand the can over to him. In both examples, several actions of different natures have to be taken by the robot, in different orders: these examples are two different scenarios, and the same service mobile robot should be able to carry out at least both of them (and all the variations on them), as requested by the person.

Many robots can carry out a single scenario (e.g., ROLLIN JUSTIN [5], NAO [8], TWENDY ONE [12], HRP-4 [1]). In these systems, a scenario is represented as a piece of source code which calls functions of a programming language for the robot to successively carry out all the prescribed actions. But to change a scenario (e.g., for a new demo), this source code must be changed. Other robots use a high level language for representing

Manuscript received March 28, 2011. This work was supported by the DGE of the French Ministry of Economy, Finance and Industry through contract ITEA 2 MIDAS, as part of a EUREKA European project. P. Morignot ([email protected], phone: +33 (0)1 46 54 97 27, fax: +33 (0)1 46 54 89 90), M. Soury ([email protected]) and C. Leroux ([email protected]) are with CEA, LIST, Interactive Robotics Laboratory, 18, route du panorama, B.P. 6, 92265 Fontenay-aux-Roses, France. P. Hède ([email protected]) is with CEA, LIST, Vision and Content Engineering Laboratory, 18, route du panorama, B.P. 6, 92265 Fontenay-aux-Roses, France.

scenarios (e.g., CARE-O-BOT [9], among others). In these systems, high level actions can be synthetically encoded in a language, so reprogramming the robot for a new scenario is not necessary: only the high level actions in the scenario have to be changed.

Still other robots’ designers acknowledge that, in real applications (not only demos), there are many scenarios which would have to be written by hand, and that it is impractical, if not impossible, to write all of them in advance (for that, even a high level language is no longer sufficient). So these systems generate their own scenarios (e.g., SHAKEY [4], PR2 [23], DALA & LAMA [10]) using an A.I. task planner: given goals specified by a user, task planning generates a plan of actions (a sequence of instantiated action descriptions); this plan is considered as a scenario (a high level description of the successive actions to take); and this scenario is then executed by the robot (each action description in the scenario is linked to an executable function in the underlying programming language, and these functions are executed sequentially). But either the A.I. planner’s performance is too low [4], or A.I. planning is not domain-independent [23], or interleaving A.I. planning, re-planning and execution leads to complex software architectures [10].

In this paper, we propose a high level language for representing scenarios and a way for an A.I. task planner to generate scenarios in this language. This language, called ISEN (for Interactive Scenarization Engine), represents event-based finite state automata and executes them, possibly in parallel and with priorities. A domain-independent A.I. task planner, CPT (for Constraint Programming Temporal planner [21]), is used for generating scenarios. The A.I. planner, the scenario language and its engine are used for controlling SAM (Smart Autonomous Majordomo), a versatile and adaptable mobile robot with a 6 DOF arm and a gripper, which assists handicapped/ageing persons in their apartment [16].

The paper is organized as follows: section II presents the scenario language and the way scenarios can be generated by a task planner. Then, section III details an example from end to end, from goal specification to execution on the real robot. Section IV discusses related approaches, and the last section sums up our contribution.

II. MODEL

In this section, we describe the high level language ISEN, capable of representing and executing scenarios, and a way to generate these scenarios with an A.I. task planner (e.g., CPT).

A. ISEN: a high level language and an engine

The ISEN engine is a virtual machine which reacts to events sent by the application in which it is integrated, and triggers actions that can act on this application, conforming to scenarios which specify a behavior model. At initialization time, the application must provide ISEN with a library of elementary actions which can be taken by the engine, and a behavior model, or scenario (provided as an XML file). At execution time, the application sends to the engine the events generated either by the application or by the external environment. The engine calls functions from the previous library of actions, as specified by scenarios, to act on the application and on the external environment.

A scenario specifies any number of state machines (or agents¹), which autonomously react to events by changing states and/or triggering actions. Every event sent by the application is transmitted to all state machines included in the scenario. Only the state machines tailored for reacting to this event actually do so --- the other state machines simply ignore it. Programming a set of ISEN state machines hence amounts to: (1) specifying the list of states the state machine can fall into, (2) for each such state, specifying the next state to branch to (i.e., a transition), and (3) specifying the actions to take for each state transition or event reception. ISEN also represents resources: they can be consumed/released by an automaton. A resource can be preemptive, or allocated according to the priority of the requiring automaton among all requiring automata.

A scenario is composed of a set of automata (or sequences), a set of agents and a list of global constants (visible by all states of all automata of all agents). An automaton is composed of a set of states and a set of transitions, or state changes, which can be activated in the considered state. The target state of a transition can be another state of the current sequence, or the initial state of another sequence. Each agent specifies its initialization sequence and its local constants.

Three kinds of actions can be attached to a state: ON_ENTRY actions (noted f1,1 to f1,n1 for state n in Fig. 1), which are activated in sequence just before the current state is activated; ON_DO actions (noted f2,1 to f2,n2 for the same state in the same figure), which are activated in sequence during the current state’s activation; and ON_EXIT actions (noted f3,1 to f3,n3 for the same state in the same figure), which are activated in sequence right after the current state is activated. As a consequence, from the time when a state is

¹ This term should not be confused with the term “agent” in the A.I. multi-agent systems community, for example.

about to become active (state n, in green in Fig. 1), the actions are called in the following order: f1,1, …, f1,n1, f2,1, …, f2,n2, f3,1, …, f3,n3.
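As a minimal illustration of this ordering, the sketch below emulates an ISEN-like state with its three action lists and one event-triggered transition. This is a toy in Python with invented names, not ISEN's actual API; it only reproduces the activation order described above.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Action = Callable[[], None]

@dataclass
class State:
    name: str
    on_entry: List[Action] = field(default_factory=list)  # f1,1 .. f1,n1
    on_do: List[Action] = field(default_factory=list)     # f2,1 .. f2,n2
    on_exit: List[Action] = field(default_factory=list)   # f3,1 .. f3,n3
    # event name -> (event-related actions fE,1..fE,ne, target state name)
    on_event: Dict[str, Tuple[List[Action], str]] = field(default_factory=dict)

class Sequence:
    """Toy automaton reproducing the action ordering of Fig. 1."""
    def __init__(self, states: Dict[str, State], initial: str):
        self.states = states
        self.current = None
        self._activate(initial)

    def _activate(self, name: str) -> None:
        self.current = s = self.states[name]
        for a in s.on_entry + s.on_do + s.on_exit:  # f1,*, then f2,*, then f3,*
            a()

    def send(self, event: str) -> None:
        hooked = self.current.on_event.get(event)
        if hooked is None:
            return  # a state not tailored for this event simply ignores it
        actions, target = hooked
        for a in actions:  # fE,1 .. fE,ne, in this order
            a()
        self._activate(target)  # the same mechanism applies to the new state

For instance, a two-state sequence where state "n" branches to "n+1" on event "E" would be built as Sequence({...}, initial="n") and driven by send("E").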

Fig. 1: Actions and transitions in an ISEN sequence.

Actions (noted fE,1 to fE,ne in Fig. 1) related to an event (noted E in the same figure) can also be attached to a state. When event E is received by the active state, the actions fE,1, …, fE,ne are activated, in this order. Once these event-related actions have all been activated, the control flow passes from the current state (n) to the state specified by the transition (i.e., state n+1 in Fig. 1) --- the same mechanism applies to the actions of this new state. Any number of events (and transitions) can be attached to a state of an automaton; therefore an automaton can have any graph structure.

Since ISEN interprets XML source code, i.e., text, the user must specify in the source code of the application the link between the text of an action (e.g., “JointMove(90, 60, 50, 80, 90, 60)”) and the callback function in the underlying programming language (e.g., the actual function call “JointMove(ArticularPosition)”). ISEN’s automata can easily be designed using the IsenEdit graphical editor [17]. Their execution can be stepped/traced using a remote console debugger.

B. Scenario generation with CPT

A.I. task planners use operators describing the actions which can be taken, an initial state and goals, to build a sequence of instantiated operators (a plan). A plan moves a world state, step by step, from the initial state to a final state that contains the goals [7]. All entities are described in the Planning Domain Definition Language (PDDL) syntax, e.g. version 2.1 [18] in our case. The smallest syntactic element is a fluent, i.e., a term which can be positive or negative depending on the time of observation and on the operators’ postconditions (potentially changing the truth value of this fluent) before this observation time. A fluent contains a functor followed by variables or constants, e.g., “(position ?arm ?location)”, “(at SAM kitchen)”. An operator is composed of preconditions (i.e., fluents which must hold in the incoming state for the operator to apply) and postconditions (i.e., fluents whose truth value changes when compared to the incoming state).
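To make this vocabulary concrete, here is a toy rendition in Python of operator application, together with a hypothetical operator written in the spirit of the SAM domain (the authors' actual PDDL files are at the URL given in section III; the predicate and object names below are invented for illustration):

# A hypothetical SAM-like operator, as PDDL text (illustrative only).
OPERATOR = """
(:action close-gripper-on-object
  :parameters (?obj ?room ?pt ?rot)
  :precondition (and (arm-at ?pt ?rot) (graspable ?obj) (in ?obj ?room))
  :effect (and (holding ?obj) (not (graspable ?obj))))
"""

# Fluents as tuples; a world state is the set of fluents currently true.
state = {("arm-at", "pt-coca-barrier", "rot-coca-barrier"),
         ("graspable", "coca-can"),
         ("in", "coca-can", "kitchen")}

def apply_op(state, pre, add, delete):
    """Classical semantics: the operator applies only if its preconditions
    hold in the incoming state; its postconditions then update the state."""
    assert pre <= state, "preconditions must hold in the incoming state"
    return (state - delete) | add

successor = apply_op(
    state,
    pre={("arm-at", "pt-coca-barrier", "rot-coca-barrier"),
         ("graspable", "coca-can"), ("in", "coca-can", "kitchen")},
    add={("holding", "coca-can")},
    delete={("graspable", "coca-can")})
assert ("holding", "coca-can") in successor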

CPT (Constraint Programming Temporal planner [21]) is a fast classical task planner, which turns a planning problem (i.e., the list of operators, the initial state, the goals) into a constraint programming problem: variables represent the start instants of instantiated operators; domains represent the instants, from 0 (first instantiated operator in the plan) to the plan length (last instantiated operator in the plan); and constraints among variables represent time bounds on the operators’ execution, causal links and mutual exclusion. Heuristics are used to estimate the future plan length. Finding a plan which solves a planning problem then amounts to finding one value in each variable’s domain such that, together, they satisfy all the constraints. CPT’s algorithm searches for a plan with a length given by heuristics; if no solution is found within a given plan length, the plan length is incremented and the search restarts (therefore CPT’s algorithm is complete).

C. Turning plans into scenarios

A plan, generated by the task planner, is first parsed, and each instantiated operator of this plan is mapped to a state in a sequence. Encoding instantiated operators as event-triggered transitions (this way, ISEN states would be close to planning states) would prevent the robot from receiving action termination events during the execution of an action; so this option, although theoretically appealing, is not appropriate in practice. Low level ISEN functions are used to turn this internal representation into ISEN’s own. For each instantiated operator, the callback functions (the actions of section A) and the triggering event (to jump out of this ISEN state) are read from a handler file, by matching a handler name against the name of an operator. For example, the handler named “position-arm-for-grasping” is associated to the instantiated operator named “position-arm-for-grasping pt-ref rot-ref kitchen pt-kitchen rot-kitchen” --- CPT is canonical (an operator appears only once in a plan) [21], which prevents matching the same handler to several instantiated operators with this name. Finally, this internal representation is saved into a file, in XML format, describing all the states, transitions and sequences available: this is the generated scenario (a temporary file), which is re-loaded by ISEN for execution on the robot SAM.

Errors during the execution of an action are handled by a specific, always active, state, called “all_states”, which is the default event catcher. When this state is reached, it can branch either to CPT-generated states or to additional states (not generated by CPT) dedicated to error recovery. These additional states are the handlers in the handler file which do not match the name of any action in the generated plan: e.g., a handler named “FailureState” typically is not part of a plan, but still maps to an ISEN state to which planned states can branch for error recovery.
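The sketch below illustrates this plan-to-scenario conversion on the plan syntax of Table 1 (section III): each plan step is parsed, matched by operator name against a handler, and chained to its successor through its triggering event. The data structures are assumptions for illustration, not ISEN's internal representation.

import re

# One line of a CPT plan looks like (cf. Table 1 in section III):
#   "7: (position-arm-for-grasping pt-ref rot-ref kitchen pt-kitchen rot-kitchen) [1]"
STEP = re.compile(r"\d+:\s*\(([\w-]+)([^)]*)\)\s*\[\d+\]")

def plan_to_scenario(plan_text, handlers):
    """Map each instantiated operator to an ISEN-like state via its handler.
    `handlers`: operator name -> (callback actions, triggering event).
    Handler names matching no plan step (e.g. "FailureState") are left
    aside as recovery states."""
    states = []
    for name, args in STEP.findall(plan_text):
        # One handler per name: CPT plans are canonical.
        callbacks, event = handlers[name]
        states.append({"name": name, "args": args.split(),
                       "on_entry": callbacks, "event": event})
    for cur, nxt in zip(states, states[1:]):
        cur["next"] = nxt["name"]  # chain states through their triggering events
    return states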

III. EXPERIMENTAL RESULTS

In this section, we first describe scenario generation by detailing an example. Then we describe scenario execution on a mobile platform.

A. Scenario generation

The actions which can be taken by the robot are gathered in the SAM domain: this is a symbolic representation in PDDL of these actions in terms of preconditions and postconditions (see [19] for a first version of this PDDL domain). The initial state of a PDDL problem describes the robot and its environment. In our experiments, a single goal is used, e.g., (at coca-can pt-deliver), meaning that the robot aims at putting a coca can at the location named pt-deliver. Table 1 lists the 20-action plan generated by CPT for this problem with this domain². This plan is generated in 0.08 s on a computer with four CPUs at 2.66 GHz and 3.37 GB of RAM.

0: (initialize-arm) [1]
1: (move-arm-to-reference-position pt-random rot-random pt-ref rot-ref) [1]
2: (close-gripper pt-ref rot-ref) [1]
3: (initialize-sam pt-ref rot-ref) [1]
4: (load-maps) [1]
5: (initialize-object-selection) [1]
6: (open-gripper) [1]
7: (position-arm-for-grasping pt-ref rot-ref kitchen pt-kitchen rot-kitchen) [1]
8: (wait-for-object-designation coca-can kitchen pt-kitchen rot-kitchen) [1]
9: (initialize-grasping coca-can) [1]
10: (move-towards-object pt-kitchen rot-kitchen coca-can pt-coca-approach rot-coca-approach) [1]
11: (avoid-obstacle-before-object1 pt-coca-approach rot-coca-approach coca-can kitchen pt-coca-ontology1 rot-coca-ontology1) [1]
12: (avoid-obstacle-before-object2 pt-coca-ontology1 rot-coca-ontology1 coca-can pt-coca-ontology2 rot-coca-ontology2) [1]
13: (blind-grasp pt-coca-ontology2 rot-coca-ontology2 coca-can pt-coca-barrier rot-coca-barrier) [1]
14: (close-gripper-on-object coca-can kitchen pt-coca-barrier rot-coca-barrier) [1]
15: (move-arm-to-safe-position coca-can pt-coca-barrier rot-coca-barrier pt-safe rot-safe) [1]
16: (move-arm-to-transport-position pt-safe rot-safe pt-transport rot-transport) [1]
17: (initialize-dropping coca-can pt-transport rot-transport) [1]
18: (move-arm-to-deliver-position pt-transport rot-transport coca-can pt-deliver rot-deliver) [1]
19: (open-gripper-with-object coca-can pt-deliver rot-deliver) [1]

Table 1: An action plan for a pick and place problem.

² Due to paper space limitations, the PDDL domain, problem and handlers for this example can be found at URL http://philippe.morignot.free.fr/SAM .

For example, the instantiated operator “position-arm-for-grasping” (7th in Table 1) maps to the state of Table 2. It is composed of 6 ON_ENTRY actions and 1 transition (noted ON_EVENT). The last action is the function call for actually moving the arm, i.e., JointMove(ArticularPosition). The transition can be read as: when the event “Manus_ArmCommandFullfilled” is received, the control flow passes on to the state named “wait-for-object-designation”, which in turn maps to the 8th instantiated operator of Table 1. The handler associated to this state contains the same information as Table 2, except its full name and transition.

DisplayAgentStatus(Brief=true)
Speak(Text="We begin the intelligent grasping scenario.")
displayMessage(Message="Positionning of the arm for grasping", Type=$HIGH)
Index = FindVectorElement(Element=$CurrentPos, Vector=$StationTable)
ArmPos = GetVectorElement(Index=$Index, Vector=$GraspTable)
ConsoleOutput(P1="position to reach = ", P2=$ArmPos)
JointMove(ArticularPosition=$ArmPos)

Table 2: ISEN state for the "position-arm-for-grasping" instantiated operator in XML format.
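The sketch below illustrates how such a textual action can be linked to a callback function of the underlying programming language, as required by ISEN (cf. section II.A). The registry and the one-argument parser are assumptions for illustration, not ISEN's actual mechanism.

# Linking the text of an action read from the XML scenario to an
# executable callback (names and parsing are illustrative assumptions).
REGISTRY = {}

def callback(fn):
    """Register fn under its name, so that the text
    'JointMove(ArticularPosition=$ArmPos)' resolves to JointMove()."""
    REGISTRY[fn.__name__] = fn
    return fn

@callback
def JointMove(ArticularPosition):
    print("commanding the arm towards", ArticularPosition)

def run(text_action, env):
    """Execute one textual action (single keyword argument, for brevity);
    '$name' refers to a scenario variable held in env."""
    name, _, argtext = text_action.partition("(")
    key, _, val = argtext.rstrip(")").partition("=")
    value = env[val[1:]] if val.startswith("$") else val
    REGISTRY[name](**{key: value})

run("JointMove(ArticularPosition=$ArmPos)",
    {"ArmPos": [90, 60, 50, 80, 90, 60]})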

B. Scenario execution

1) Hardware: The robot SAM (Smart Autonomous Majordomo [20]) is a non-holonomic mobile base ROBULAB 10 with a 6-DOF MANUS arm ending with a gripper (see Fig. 2). Its sensors are forward- and backward-oriented sonars located on the base (for obstacle avoidance), a panoramic camera located on top of the base (for scene detection), 2 webcams located on the arm (for object recognition, stereo distance measurement and visual servoing [14]) and an optical barrier located in the gripper (for deciding on gripper closure/opening).

2) Software: All software components are integrated as web services in a Devices Profile for Web Services architecture [13]. They include (i) moving the mobile base to a 2D location in the apartment, (ii) moving the mobile base closer to an object to grasp, in case it lies out of range of the arm [2], (iii) visually recognizing a given object in an image given a list of known object images [14] and (iv) determining the successive poses of the arm depending on the shape and orientation of the object to grasp [22].

3) Execution: The implementation leads to the execution of scenarios (e.g., the one derived from Table 1 into ISEN states, excerpted in Table 2) on the mobile platform SAM³. Since some actions can fail on the real robot (e.g., vision-related actions such as “wait-for-object-designation” in Table 1), a scenario is decomposed into smaller parallel sequences (e.g., the ones starting with the name “init-” in Table 1). This way, an error state (e.g., “FailureState”) can branch to the first state of the small sequence which failed, instead of having to execute the whole sequence again.
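A minimal sketch of this recovery policy, with invented sequence names (the paper's actual decomposition is in the handler files at the authors' URL):

# On failure, branch back to the first state of the small sequence that
# failed, rather than replaying the whole scenario.
SEQUENCES = {
    "init-grasping": ["initialize-grasping", "move-towards-object"],
    "grasp": ["blind-grasp", "close-gripper-on-object"],
}

def recovery_target(failed_state):
    for states in SEQUENCES.values():
        if failed_state in states:
            return states[0]  # re-run only the failed subsequence
    return "FailureState"     # unknown failure: stay in the error state

assert recovery_target("close-gripper-on-object") == "blind-grasp"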

Fig. 2: The robot SAM includes a 6 DOF arm and a gripper.

IV. RELATED WORK

Among the large variety of robotic systems, only a few involve a high level language to represent scenarios and an A.I. task planner to generate them.

First of all, the robot SHAKEY pioneered the field of domain-independent A.I. task planning, with the STRIPS task planner and the PLANEX execution mechanism [4]. But today’s A.I. planners, e.g. CPT, are orders of magnitude faster.

The robot CARE-O-BOT uses a high level language, i.e., a module in Python, to encode scenarios using activities linked by discrete, cyclic or wait-for relations [9]. A.I. planning is performed by a request to a database, which provides possible actions that fulfill a given task. Compared to our approach, CARE-O-BOT’s high level language is subsumed by ISEN’s: the former’s high level primitives can be encoded in ISEN, since ISEN imposes no constraint on the graph structure of states. Most importantly, CARE-O-BOT does not include an A.I. task planner per se, so all the possible actions for all the tasks must be encoded manually.

The robot PR2 uses no specific high level language to represent scenarios, but uses a modified Hierarchical Task Network (HTN) planner (see [7] for an introduction) to generate them [23]. HTN planners represent additional

³ A video of the execution of the scenario of Table 1 on the robot is available at the following URL: http://philippe.morignot.free.fr/SAM .

knowledge about tasks (i.e., a high level task decomposes into a sequence of lower level subtasks) to reduce search complexity. In contrast, the CPT planner is domain-independent (hence needs no extra knowledge on task decomposition) and still performs fast --- it won the Distinguished Performance award at IPC’06 [11].

The robots DALA & LAMA [10] use a task planner, IxTeT [15], which is a domain-independent task planner based on constraint programming (CPT is based on the same underlying principle, but with a different model). But IxTeT was designed before the first version of PDDL was proposed (1998 [18]). As such, it does not support the PDDL syntax for operators and problem descriptions, which all planners have supported since that time --- IxTeT was not involved in any International Planning Competition [11].

V. CONCLUSION

This paper describes a high level language for representing and executing scenarios, and a way to generate these scenarios with an A.I. task planner. This language, ISEN, is an event-based finite state automata engine, which links high level actions to executable functions controlling the mobile base, the 6 DOF arm and the gripper of a robot called SAM. An A.I. task planner, CPT, is used to generate scenarios through a mapping between CPT’s instantiated operators and ISEN’s states. Callback functions, actually performing the prescribed actions, are described as handlers attached to states. Errors are handled through a specific, always active, state which branches to recovery states, not generated by the A.I. task planner. The approach is used to control a mobile robot with an arm and a gripper, in scenarios for assisting handicapped/ageing persons in their apartment.

This work leads us to the following observations:
(1) A.I. task planning requires a symbolic description of the environment and of the robot, i.e., an initial state of a PDDL problem. This is a strong requirement, since for example the whole scientific field of vision may be needed to establish a PDDL precondition as simple as (relative-position gripper coca-can 10 20 30) --- meaning that the coca can is located at point (10, 20, 30) in the referential of the gripper. The difficulty of perceiving the environment has even led some researchers to restrict themselves to virtual worlds, e.g. video games [2], in which the position (for example) of objects is obtained by simply accessing a data structure in the computer’s memory, not by analyzing the sensors’ output of a robot.
(2) We do not advocate for the CPT task planner itself. Any other task planner (e.g., FF, SATPLAN) would do as well, provided that a plan is generated quickly enough in practical cases.
(3) Nor do we make any claim about architectural issues, e.g., 3-layered [6] or based on functional/decisional levels [10], although the present work can surely be analyzed in these terms.

Finally, the next step in our work is to perform on-line re-planning (e.g., when the robot detects that a door is unexpectedly closed, re-plan to look for the key first), by embedding an A.I. task planner as one ON_DO action of an ISEN state dedicated to failure (and therefore to re-planning). We also envision a software architecture, in the sense of the last observation above, composed of an ontology for structuring perception into a symbolic representation and for building the initial state of a PDDL problem.

ACKNOWLEDGMENT

The authors thank Vincent Vidal (ONERA, Toulouse) and Christophe Leroy (CEA LIST LSI, Fontenay-aux-Roses).

REFERENCES

[1] K. Akachi, K. Kaneko, N. Kanehira, S. Ota, G. Miyamori, M. Hirata, S. Kajita, F. Kanehiro, “Development of humanoid robot HRP-3P”. 5th IEEE-RAS International Conference on Humanoid Robots, pages 50-55, 2005.
[2] O. Bartheye, E. Jacopin, “A real-time PDDL-based planning component for video games”. In Proceedings of the 5th Artificial Intelligence for Interactive Digital Entertainment Conference, Stanford, California, 2009, pages 130-135.
[3] J. Dumora. “Design of behaviors for a robot assisting handicapped persons” (“Conception de comportements pour un robot d’assistance aux personnes handicapées”). Technical Report, CEA, LIST, DTSI/SRI/08-XXX, August 2008, unpublished.
[4] R. Fikes, N. Nilsson, “STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving”, Artificial Intelligence, Vol. 2 (1971), pp. 189-208.
[5] M. Fuchs, Ch. Borst, P. Robuffo Giordano, A. Baumann, E. Kraemer, J. Langwald, R. Gruber, N. Seitz, G. Plank, K. Kunze, R. Burger, F. Schmidt, T. Wimboeck, G. Hirzinger. “Rollin’ Justin -- Design considerations and realization for a humanoid upper body”, in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 2009, pages 4131-4137.
[6] E. Gat, “On three-layer architectures”, D. Kortenkamp et al. eds., A.I. and Mobile Robots, AAAI Press, 1998.
[7] M. Ghallab, D. Nau, P. Traverso. “Automated planning: theory and practice”. Morgan Kaufmann, Elsevier, San Francisco, 2004, 635 pages.
[8] D. Gouaillier, V. Hugel, P. Blazevic, C. Kilner, J. Monceaux, P. Lafourcade, B. Marnier, J. Serre, B. Maisonnier. “Mechatronic design of NAO humanoid”. IEEE International Conference on Robotics and Automation, Kobe, Japan, 2009, pages 769-774.
[9] B. Graf, M. Hans, R. Schraft, “Care-O-bot II -- development of a next generation robotic home assistant”. Autonomous Robots, 16(2), pp. 193-205, 2004.
[10] F. Ingrand, S. Lacroix, S. Lemai, F. Py, “Decisional autonomy of planetary rovers”. Journal of Field Robotics, vol. 24, no. 7, pages 559-580, 2007.
[11] International Planning Competition, http://ipc.icaps-conference.org/
[12] H. Iwata, S. Sugano, “Design of Human Symbiotic Robot TWENDY-ONE”. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 580-586, 2009.
[13] F. Jammes, A. Mensch, H. Smit. “Service-oriented device communications using the devices profile for web services”. In AINA Workshops, pages 947-955, Washington, USA, May 2007.
[14] M. Joint, P.-A. Moëllic, P. Hède, P. Adam, “HEKA: A general tool for multimedia indexing and research by content”. 16th Annual Symposium Electronic Imaging (SPIE), San Jose, 2004, Image Processing, Algorithms and Systems III.
[15] P. Laborie, M. Ghallab. “Planning with sharable resource constraints”. International Joint Conference on Artificial Intelligence, Montreal, Canada, 1995, pages 1643-1651.
[16] C. Leroux, I. Laffont, B. Biard, S. Schmutz, J.-F. Désert, G. Chalubert. “Robot grasping of unknown objects, description and validation of the function with quadriplegic people”. In Proceedings of the 2007 IEEE 10th International Conference on Rehabilitation Robotics, Noordwijk, The Netherlands, 2007.
[17] C. Leroy. ISENEdit: User Manual. CEA, LIST, LRI, Technical Report, 2005, unpublished.
[18] D. McDermott, M. Ghallab, A. Howe, C. Knoblock, A. Ram, M. Veloso, D. Weld, D. Wilkins. “PDDL -- The Planning Domain Definition Language”, http://cs-www.cs.yale.edu/homes/dvm/
[19] P. Morignot, M. Soury, C. Leroux, H. Vorobieva, P. Hède. “Generating Scenarios for a Mobile Robot with an Arm. Case study: Assistance for Handicapped Persons”. Eleventh International Conference on Control, Automation, Robotics and Vision (ICARCV’10), Singapore, December 2010, P1054.
[20] A. Remazeilles, C. Leroux, G. Chalubert, “SAM: a robotic butler for handicapped people”. 17th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2008), 01/08/2008-03/08/2008, Munich, Germany.
[21] V. Vidal, H. Geffner, “Branching and Pruning: An Optimal Temporal POCL Planner based on Constraint Programming”, Artificial Intelligence 170 (3), pp. 298-335, 2006.
[22] H. Vorobieva, C. Leroux, P. Hède, M. Soury, P. Morignot, “Object Recognition and Ontology for Manipulation with an Assistant Robot”. In Proceedings of the Eighth International Conference on Smart Homes and Health Telematics (ICOST’10), L.N.C.S. vol. 6159, Aging Friendly Technology for Health and Independence, Springer, Berlin Heidelberg, Germany, 2010, pages 178-185.
[23] J. Wolfe, B. Marthi, S. Russell, “Combined Task and Motion Planning for Mobile Manipulation”. International Conference on Automated Planning and Scheduling, Toronto, Canada, 2010.