Marines Formation Simulation - Nicolas Monneret

Nicolas Monneret Alexandre Haffner Olivier Kleparski

Marines Formation Simulation VI51: Virtual life simulation

Tutors: Fabrice Lauri Stéphane Galland

Contents

REQUIREMENT ANALYSIS
    INTRODUCTION
    PROJECT GENERAL DESCRIPTION
    GROUP CREATION
    KIND OF FORMATIONS
    TEAM MOVES
    PATH FINDING
    SUMMARY
DESIGN
    SOFTWARE ARCHITECTURE DESCRIPTION
    DESIGN CHOICES
        Simulation platform architecture
        General software architecture
        Formations architecture
        Behaviors architecture
        AStar algorithm architecture
    GUI CHOICES
        Graphical user interface overview
        User interactions management
        The group leader question
    USED TECHNIQUES
        Steering algorithms
            Arrive behavior
            Align behavior
            Other behaviors
        Repulsion behaviors
        Collision management
            Collision between two agents
            Collision between the agent and a static object
            Traversable objects
    ENCOUNTERED PROBLEMS
        Implementing steering repulsion algorithms
        Finding an accurate way to mix behaviors
    BENCHMARK
        Performance overview
        Real time limit
        Further optimization
USER GUIDE
    INSTALLING THE SPECIAL DELTA FORCE DEFENSE SIMULATION
    HOW TO CREATE AN ENVIRONMENT?
        Adding static environment objects
        Adding agents to the simulation
        Removing objects from the simulation
    HOW TO SIMULATE A FORMATION MOVEMENT?
CONCLUSION
    TACKLED ELEMENTS
    PERFORMANCES
    CRITICAL ANALYSIS


Requirement analysis

Introduction

This document presents all the features that will be developed for the VI51 (Virtual Life Simulation) project by our group composed of three members: Haffner Alexandre, Monneret Nicolas and Kleparski Olivier. We will develop this application in Java.

Project general description

This project will present the coordinated movement of a team of marines. The user will be able to choose the type of formation he wants (V, line, cross, circle). Each group will be composed of a leader and an undefined number of followers. The leader is marked '1' on the pictures below.

V formation

Line formation

Cross formation

Group creation

Teams of marines will be created dynamically depending on the user's choices:

- Choose the members of the new team (the number of members is unlimited)
- Choose the leader of the team
- Choose the kind of formation (V, cross, line, circle ...)


Kind of formations

We want to give the user the ability to choose a team shape among many kinds:

- V
- X
- Line
- Circle
- Multi-lines

The formation is responsible for calculating the position of every follower, depending on some parameters the user can specify. The leader gets the first position and the other members get a position relative to the leader. We thus reduce a group action to the action of its leader.

Team moves

When the user selects a team and a target, only the leader computes the way to reach the target point. The leader will try to reach each key point of its computed path, and the other members of the team will try to reach their position relative to the leader. In order to avoid collisions with the environment (agents, walls, enemies ...), each character will be repulsed by all perceived objects and will also be attracted by its target (key point or ideal position relative to the leader).


Path finding

We want the agents to evolve in a virtual world containing obstacles. Thus we have to compute a path for an agent between its current position and its target. We will use the A* algorithm. This algorithm needs the start position, the end position, and finally the whole graph of the environment. The start position is given by the position of the team leader; the target will be given by the user's mouse click.

Summary

This project will show different kinds of algorithms to simulate:

- Movement behaviors (arrive, seek)
- Team coordinated movements (repulse + arrive / seek / path finding + formation)
- Formation structure (V, cross, line)
- Path finding (A*)
- Graphical interface to create teams, apply formations and so on
- Graphical display

Design

Software architecture description

We decided to follow as much as possible the architecture we were given during the courses because it is logical, flexible, and easily extensible. Instead of developing our piece of software in one block, we preferred to first design a generic simulation platform that would support any kind of simple simulation. It is made of three main parts: the simulator itself, that is to say the management of the simulation and its main loop; the environment, which describes what the world is made of and the agent bodies evolving inside it; and the agent part, regrouping the intelligence and the movement behaviors of all agents in the simulation. The basic principle is the following:

- The simulator runs all agents' live() methods.
- The agent asks its body to perceive the environment.
- The agent thinks and decides on a behavior to apply.
- The movement intention is sent to the environment via the agent body.
- The environment resolves all intentions (also called influences) and decides where the agents' bodies will be at the next simulation step.


Here is what the basic architecture looks like:

By overriding and implementing the virtual methods of our generic simulation platform we built our formation simulation architecture upon it. The graphical user interface is strictly separated from the simulator, which actually runs in a separate thread. The GUI extracts information from the simulator at regular intervals and smooths the movements of the agent bodies.


Design choices

Simulation platform architecture

In this part we will see the design we chose for our base simulation platform. It was meant to be as generic as possible in order to be extended to other kinds of simulation pretty easily. It is mainly composed of a set of abstract classes that will be extended to get a specific architecture. The main simulator loop is implemented here; it manages the cycles for every agent and asks the environment to resolve the influences.


The class diagram above is a simplified version of what we actually implemented. Let's sum up what it is made of. The central component of this architecture is the simulator. The simulator has a bunch of properties and runs its main simulation loop inside a thread. It calls functions of the event listener at startup and when a simulation loop, or cycle, is finished. The simulator is made of a set of agents and a unique environment in which the agents evolve. Every agent builds its body, called the agent body above.

At every simulation loop the simulator calls the abstract function live of the agent. This function is where we will later put the customized behavior of our agents. In this function the agent can ask its body to perceive the surrounding objects in the environment with the perceive method; it will then produce some intentions depending on the perception and post them in the environment via its body and its influence method.

At the end of the simulation loop, the environment contains a set of influences. The simulator calls the abstract method applyInfluences, where we customize the way the influences are applied to the environment objects. By applying the influences at the end of the simulation loop, that is to say when all influences have been posted, we prevent an agent from instantly modifying the environment and thus disabling some possible actions for the other agents. By applying them at the end of the loop we may decide how to solve conflicts. Here is a simple sequence diagram describing one simulator loop:
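In addition to the sequence diagram, one simulator loop can be sketched in Java as follows. The interfaces below are simplified stand-ins for the project's abstract classes; only the live, perceive, influence and applyInfluences names come from this report, the rest is illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the project's abstract Agent and Environment classes.
interface Agent {
    void live(); // perceive -> decide -> post influences via the agent body
}

interface Environment {
    void applyInfluences(); // resolve all posted influences at once
}

public class SimulatorLoopSketch implements Runnable {
    private final List<Agent> agents = new ArrayList<>();
    private final Environment environment;

    public SimulatorLoopSketch(Environment environment) {
        this.environment = environment;
    }

    public void addAgent(Agent agent) {
        agents.add(agent);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // Every agent perceives, thinks and posts its movement intentions.
            for (Agent agent : agents) {
                agent.live();
            }
            // Influences are only resolved once all of them have been posted,
            // so no agent can modify the environment instantly.
            environment.applyInfluences();
        }
    }
}
```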


General software architecture

As said before, we based our marine formation simulation upon the preceding architecture. The main goal was to benefit from a simple and organized structure to build our application without making any critical error from a life simulation point of view, for example by modifying the environment from the agent class... Here is what our marine simulation looks like:


In this diagram we can see how we extended the classes of the generic architecture. What we basically did is situate the agent by creating the Character2dBody class, which contains the characteristics we can apply to an entity evolving in a 2D environment. The marine body class is almost empty, but we decided to keep this level in the hierarchy in order to be able to add other situated entities later on, such as enemies or animals. The same kind of design applies to the static environment objects, which are objects that cannot move. We also implemented all the functions needed by the simulator to run properly, that is to say the live method, which is the thinking of the marine, the applyInfluences method solving all intentions, and the customized perceive method, filtered by the marine body characteristics such as the perception distance. As you can see, we also added some functions we will explain later, related to the path calculation for the leader of a marine formation using the A* algorithm.


Formations architecture

In this part we will see how we implemented the formation system used by the agents to coordinate their movements. Once again we tried to design something extensible; by extensible we mean that we wanted to be able to add a new type of formation very easily, by just implementing a few methods.

As you can see, we built this formation management on top of our preceding simulation architecture. As any marine can belong to a formation, this class implements the FormationMember interface, thus giving some basic information about its status. The other two important classes are FormationSpot and FormationBody. The first one corresponds to the place a formation member should try to reach in order to be at the right place in the formation. The second one is the formation itself, that is to say all the data describing the formation, like the spacing between members, the maximum number of members, the angles... The two fundamental methods to implement are computeLocation and computeOrientation. Given local data, or data from the formation body, these functions give the position and orientation to reach in order to be at the right place. This position will most of the time depend on the position of the leader of the formation. In the behaviors we will see later on, we decided that a member should only try to reach the formation orientation when it is near its final spot. This means that it will first try to rush to its spot, looking where it is going, and then take its place in the formation and turn to its final orientation.
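As an illustration, here is a minimal sketch of what computeLocation and computeOrientation could look like for a V-shaped formation. The two method names come from this report; the class name, the fields and the exact geometry are assumptions made for the example.

```java
// Sketch of a formation pattern: given the leader's position and orientation,
// compute the world position and orientation of the member at slot 'index'.
public class VFormationSketch {
    private final double spacing;   // distance between two consecutive members
    private final double halfAngle; // half opening angle of the V, in radians

    public VFormationSketch(double spacing, double halfAngle) {
        this.spacing = spacing;
        this.halfAngle = halfAngle;
    }

    /** World position {x, y} of member 'index' (0 = leader). */
    public double[] computeLocation(int index, double leaderX, double leaderY,
                                    double leaderOrientation) {
        if (index == 0) {
            return new double[] { leaderX, leaderY };
        }
        // Members alternate between the two branches of the V, behind the leader.
        int rank = (index + 1) / 2;
        double side = (index % 2 == 0) ? 1.0 : -1.0;
        double localX = -rank * spacing * Math.cos(halfAngle);
        double localY = side * rank * spacing * Math.sin(halfAngle);
        // Rotate the local offset by the leader's orientation.
        double cos = Math.cos(leaderOrientation);
        double sin = Math.sin(leaderOrientation);
        return new double[] {
            leaderX + localX * cos - localY * sin,
            leaderY + localX * sin + localY * cos
        };
    }

    /** In the V formation every member faces the same way as the leader. */
    public double computeOrientation(int index, double leaderOrientation) {
        return leaderOrientation;
    }
}
```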

We defined a bunch of different formations to let the user select the most appropriate one given a certain environment. We actually didn't focus too much on the agents' AI as it was not our main goal, so we let the human select the formation, as he would do in a game. As you can see in the previous class diagram, we prepared 5 different kinds of formations. None of these formations has a limited number of members, which means that they are infinitely extensible in size. We will of course be limited by the number of entities our machine can move in the environment; this subject will be discussed later on. Here are the 5 formations available:

- O formation

In this formation, all members are moving in a circle shape. The leader is always facing the movement direction and all other members adopt a defensive position looking all around the circle.


- V formation

With this formation the members adopt a V-shaped formation. This is more of a progression formation than a defensive one, as they are all looking in the same direction as their leader. The leader is the first member of the formation.

- X formation

This time the formation is a composition of two V formations to form an X shape. All members are still looking in the same direction as their leader when they are near their spot. The leader is placed at the center of the formation.


- Horizontal line formation

In this formation the members move in a horizontal line. This means that they are all on a line which is perpendicular to the movement direction. The leader is in the middle of the formation.

- Vertical line formation

This formation is still a straight line, but this time all members move along the movement direction. The leader is the first member of the line.


As we will see later on, all movements are implemented using steering algorithms, which gives some curvature to the formation trajectory when turning or moving in a crowded environment.


Behaviors architecture

Beyond choosing the right place where a formation member should be, we have to give it the ability to move to this place. This is done by implementing several movement behaviors. In this part we will see the structure we adopted to develop these behaviors; we will see later on, in the steering algorithms part, how they work. Here is the class diagram:

As you can see on the diagram above, the agents running in the simulation create and run behaviors depending on what they think is best to do. In our particular case the leader runs a special behavior we called the path follow behavior, and the formation members run a formation member behavior. These behaviors enable an agent to evolve in the environment. Behavior outputs are transferred to the environment via the agent body in the form of influences.


One of the main tasks of our project was to build proper behaviors that would let the agents move in a coordinated and coherent way inside the environment. As you can see, each behavior class contains a run method which corresponds to the calculation of a linear acceleration and an angular acceleration, given parameters that vary depending on the type of behavior. This is precisely why you will not find any generic behavior class in our project, as we were not able to factorize this function. Instead you will find around 10 behaviors, all having their own run function that will be called inside the live method of the agent. We will see later what they do, but let us enumerate them briefly now:

- arrive behavior
- align behavior

These two are the basic ones; all the following are mostly based on these two behaviors.

- collision avoidance behavior
- face behavior
- formation member behavior
- obstacle avoidance behavior
- path follow behavior
- seek behavior
- wandering behavior

All of these behaviors are steering behaviors. If an agent uses multiple behaviors, it can mix their outputs using the BehaviorOutputMixer, giving them different priorities.
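Below is a minimal sketch of a behavior output and of a priority-based mixer. The BehaviorOutput and BehaviorOutputMixer names come from this report; the fields and the weighted-sum mixing rule are assumptions.

```java
// Sketch of a steering behavior output and a priority-based mixer.
class BehaviorOutput {
    double linearAccX, linearAccY; // desired linear acceleration
    double angularAcc;             // desired angular acceleration

    BehaviorOutput(double ax, double ay, double angular) {
        this.linearAccX = ax;
        this.linearAccY = ay;
        this.angularAcc = angular;
    }
}

class BehaviorOutputMixer {
    // One possible mixing rule: a weighted sum, where the weight plays the
    // role of the behavior's priority.
    private double sumAx, sumAy, sumAngular, sumWeights;

    void add(BehaviorOutput output, double priority) {
        sumAx += priority * output.linearAccX;
        sumAy += priority * output.linearAccY;
        sumAngular += priority * output.angularAcc;
        sumWeights += priority;
    }

    BehaviorOutput mix() {
        if (sumWeights == 0) {
            return new BehaviorOutput(0, 0, 0);
        }
        return new BehaviorOutput(sumAx / sumWeights, sumAy / sumWeights,
                sumAngular / sumWeights);
    }
}
```

Choosing good priority values is exactly the difficulty discussed in the "Finding an accurate way to mix behaviors" section below.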


AStar algorithm architecture

To allow the leader of the formation to find its path, we implemented an A* algorithm. The A* algorithm calculates the path between two points in a graph using a heuristic which estimates the best way to reach the target. An AStar package has been created with two interfaces:

- The "AStarCell" interface, which describes the graph the algorithm will be applied on.
- The "Heuristic" interface, which defines the heuristic used to estimate the best way to reach the target.

In order to run the A* algorithm, we had to build a graph representing the environment. The solution we found was to discretize it into a grid of cells. For the project, we implemented two heuristics (see the sketch below):

- A Manhattan distance heuristic, which gives the distance between two points measured along the axes.
- A Euclidean distance heuristic, which gives the straight-line distance between two points.
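Here is a sketch of the two heuristics. The "Heuristic" interface name comes from this report; its exact signature (here, an estimate between two grid cells) is an assumption.

```java
// Sketch of the heuristic interface and its two implementations.
interface Heuristic {
    double estimate(int fromX, int fromY, int toX, int toY);
}

// Manhattan distance: sum of the distances along each axis.
class ManhattanHeuristic implements Heuristic {
    public double estimate(int fromX, int fromY, int toX, int toY) {
        return Math.abs(toX - fromX) + Math.abs(toY - fromY);
    }
}

// Euclidean distance: straight-line distance between the two cells.
class EuclideanHeuristic implements Heuristic {
    public double estimate(int fromX, int fromY, int toX, int toY) {
        double dx = toX - fromX;
        double dy = toY - fromY;
        return Math.sqrt(dx * dx + dy * dy);
    }
}
```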


GUI choices

Graphical user interface overview

Even if the graphical user interface was an optional component of the project, we found it indispensable to give the user a way to visualize clearly what is going on; formations are definitely something visual. What we basically did is run the simulation in a separate thread and let the graphical interface pick up the environment state at a given interval of time in order to render it. We managed to apply a simple smoothing algorithm using interpolations to obtain continuous movements of the entities running in the simulation. The powerful thing here is that we may run the simulator loop 10 times per second and still obtain a frame rate of 50 fps, for example. We are of course limited in some ways by the machine. All the implementation of the graphical interface lies in the GUI class of the project. You will find a visual description of it in the user guide at the end of this report.

The graphical interface has two main jobs. The first one is to render the environment, and this is done by temporarily cloning the environment inside the GUI thread. We kept the graphics very simple: boxes for the static objects, like cliffs, swamps..., and circles for the marines. Apart from avoiding any concurrency issue, this is definitely the easy part. The second job is handling user events: we have to give the user some kind of immediate visual feedback, and an easy way to manage the formations.

User interactions management

We will go a bit deeper into the user interactions as they are not that obvious. We of course did it the Java way, which means using event listeners and so on. Handling user events is pretty straightforward, but the problems come when we want to act on the simulator from the GUI thread, which is of course the one handling user actions. Let's imagine that we select an area on the map: what we want to do is select all the marines inside this area. When the mouse click is released, the handler inside the GUI thread is called. At this point we need to look for the agents that lie under this area and change their state. As we cannot do this directly, we set up a system to post messages to the simulator using the function postAction. As this function is synchronized, it can be called from anywhere to post a runnable containing what you want to do inside the simulator thread. All the actions are stored and run at the end of the current simulation loop. Using this technique we are able to select agents, change their direction, their path, their formation... and actually everything inside the environment, very easily.
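Here is a minimal sketch of this message-posting mechanism. Only the postAction name comes from this report; the queue, the other method names and the way pending actions are drained at the end of the loop are assumptions.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the postAction mechanism: the GUI thread posts a Runnable that
// the simulator executes at the end of its current loop.
public class ActionQueueSketch {
    private final Queue<Runnable> pendingActions = new ArrayDeque<>();

    // Called from the GUI thread (e.g. inside a mouse listener).
    public synchronized void postAction(Runnable action) {
        pendingActions.add(action);
    }

    // Called by the simulator thread at the end of each simulation loop.
    public synchronized void runPendingActions() {
        Runnable action;
        while ((action = pendingActions.poll()) != null) {
            action.run();
        }
    }
}
```

From the GUI thread one would then write something like simulator.postAction(() -> selectAgentsIn(area)), where selectAgentsIn is a hypothetical helper that runs inside the simulator thread.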


From a user point of view, we tried to keep the interface as simple as possible, for two reasons: first we didn't want to spend too much time on it, and second we wanted it to be intuitive.

The group leader question

Managing the formation was a big question we had. We had two options. The first one was to let the user create an empty formation, then add members and a leader chosen one by one. The other option was to get as close to an RTS game as possible, thus giving fewer possibilities to the user but handling the entire formation creation job for him. We chose the second option, simply because we are Starcraft lovers. By handling the group creation task we hugely simplified the user's life. What the software does is the following:

- When a group of marines is selected via an extended selection, a leader is randomly chosen.
- The group directly adopts the current formation selected in a drop-down list.

These are the easy parts. But we also handle the handy stuff:

- If part of the formation is split off by making another selection, both formations adapt. This means that there will be no gaps in the formations; they are basically reorganized and resized.
- If a formation loses its leader because of splitting, then a new leader is chosen for the rest of the formation and is given the path of the previous leader.

We found this interesting enough to implement it and let the user experiment with this system.


Used techniques

Steering algorithms

The simulation manages steering; thus an agent, and particularly its body, must have a position and a speed. In practice, bodies have:

- A position
- An orientation
- A linear speed
- An angular speed

To move the bodies, the environment applies the linear speed vector to the position and the angular speed to the orientation.

The arrow represents the linear speed vector. It moves the body from one position to another.

The arrow represents the angular speed value. Added to the orientation, it rotates the body.

In order to simulate steering, we simplified the physical acceleration model. We chose to represent the linear acceleration by a vector. This vector is added to the linear speed vector to increase or decrease it.


Similarly, we add an angular acceleration value to the body's angular speed to change the orientation in a steering way.

The linear acceleration vector is added to the linear speed vector to increase it.

The angular acceleration is added to the angular speed to increase it.

So, a steering behavior output is simply defined by:

- A linear acceleration
- An angular acceleration

The environment gets the agents' desired accelerations and simply adds them to the bodies' speeds. Thus, the speed will increase or decrease according to the value of the acceleration. Of course, bodies must have maximum linear and angular speed values to avoid an infinite increase of speed, and for the same reason the accelerations also have maximum values. When a speed is set on a body, the body checks whether this speed is greater than its maximum speed; if it is, the speed is scaled down to the maximum value, so the speed never exceeds the maximum allowed for a body. In order to move the bodies, we implemented several agent behaviors which calculate the linear and angular accelerations.
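A minimal sketch of this clamping rule for the linear speed is given below (the angular speed is clamped the same way). The method and parameter names are illustrative.

```java
// Sketch of the speed update: the new speed is the old speed plus the desired
// acceleration, scaled back if it exceeds the body's maximum speed.
public final class SpeedClampSketch {
    private SpeedClampSketch() {}

    /** Returns the clamped linear speed vector {vx, vy}. */
    public static double[] applyLinearAcceleration(double vx, double vy,
                                                   double ax, double ay,
                                                   double maxSpeed) {
        double newVx = vx + ax;
        double newVy = vy + ay;
        double norm = Math.hypot(newVx, newVy);
        if (norm > maxSpeed && norm > 0) {
            // Scale the vector back so its length equals the maximum speed.
            double scale = maxSpeed / norm;
            newVx *= scale;
            newVy *= scale;
        }
        return new double[] { newVx, newVy };
    }
}
```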


The two main behaviors are:

- The Arrive behavior, which permits the agent to move its body to a target point in a steering way.
- The Align behavior, which permits the agent to orient its body in a steering way.

Arrive behavior

In this algorithm, we defined two circles around the target point: a stop circle and a slow circle.

If the body enters the stop circle, it is near the target, so we stop it. If it enters the slow perimeter, we decrease the linear speed vector by giving the body an acceleration vector in the opposite direction. We calculate the speed we want to reach: if the body is inside the slow perimeter, the wanted speed is the maximum speed decreased by a value found by interpolation from the perimeter to the centre of the circle. This method decreases the linear speed of the body linearly from the slow perimeter to the target.
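Here is a sketch of the Arrive behavior as described above: inside the slow circle the wanted speed is obtained by linear interpolation, and inside the stop circle the behavior asks to cancel the current speed. The class name, fields and the exact acceleration formula are assumptions.

```java
// Sketch of the Arrive steering behavior.
public class ArriveSketch {
    private final double stopRadius;
    private final double slowRadius;
    private final double maxSpeed;
    private final double maxAcceleration;

    public ArriveSketch(double stopRadius, double slowRadius,
                        double maxSpeed, double maxAcceleration) {
        this.stopRadius = stopRadius;
        this.slowRadius = slowRadius;
        this.maxSpeed = maxSpeed;
        this.maxAcceleration = maxAcceleration;
    }

    /** Returns the desired linear acceleration {ax, ay}. */
    public double[] run(double posX, double posY, double velX, double velY,
                        double targetX, double targetY) {
        double dx = targetX - posX;
        double dy = targetY - posY;
        double distance = Math.hypot(dx, dy);

        if (distance < stopRadius) {
            // Near enough: ask for an acceleration that cancels the current speed.
            return new double[] { -velX, -velY };
        }

        // Interpolate the wanted speed between the slow circle and the target.
        double targetSpeed = (distance < slowRadius)
                ? maxSpeed * distance / slowRadius
                : maxSpeed;

        double targetVelX = dx / distance * targetSpeed;
        double targetVelY = dy / distance * targetSpeed;

        // The acceleration tries to reach the wanted speed, capped at maxAcceleration.
        double ax = targetVelX - velX;
        double ay = targetVelY - velY;
        double norm = Math.hypot(ax, ay);
        if (norm > maxAcceleration && norm > 0) {
            ax *= maxAcceleration / norm;
            ay *= maxAcceleration / norm;
        }
        return new double[] { ax, ay };
    }
}
```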

Align behavior

The Align behavior decreases the body's angular speed as it reaches a target orientation. We set a stop angle and a slow angle.


If the orientation of the body enters the stop angle, we stop it. If the orientation enters the slow angle, we decrease the angular speed value. To do that, we add a negative acceleration value to the speed. This value is computed in the same way as in the Arrive behavior, by interpolation from the slow angle to the target orientation. So, the body's angular speed decreases until the orientation reaches the stop angle.
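A sketch of the Align behavior follows, with the usual wrapping of the rotation into (-pi, pi]. The names and the exact interpolation are assumptions.

```java
// Sketch of the Align steering behavior.
public final class AlignSketch {
    private AlignSketch() {}

    /** Wraps an angle into (-pi, pi]. */
    static double wrap(double angle) {
        while (angle > Math.PI) angle -= 2 * Math.PI;
        while (angle <= -Math.PI) angle += 2 * Math.PI;
        return angle;
    }

    /** Returns the desired angular acceleration. */
    public static double run(double orientation, double angularSpeed,
                             double targetOrientation,
                             double stopAngle, double slowAngle,
                             double maxAngularSpeed) {
        double rotation = wrap(targetOrientation - orientation);
        double size = Math.abs(rotation);

        if (size < stopAngle) {
            // Inside the stop angle: ask for an acceleration that cancels the rotation.
            return -angularSpeed;
        }
        // Inside the slow angle the wanted angular speed is interpolated down.
        double targetAngularSpeed = (size < slowAngle)
                ? maxAngularSpeed * size / slowAngle
                : maxAngularSpeed;
        targetAngularSpeed *= Math.signum(rotation);

        return targetAngularSpeed - angularSpeed;
    }
}
```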

Other behaviors

Other behaviors which extend the two previous ones are:

- Path follow behavior allows the leader to follow a calculated path. In this algorithm, we discretize the path and use the Arrive behavior on each point of it. When the agent reaches a target, the following point is set as the parameter of the Arrive behavior.

- Face behavior orients a body in the direction of a target point. Here we use the Align behavior with a target orientation which is the direction of the point the agent wants to reach.

- Formation member behavior permits the agents to move to their formation location. We use the Arrive behavior on a target which is the place of the body in the formation.

Repulsion behaviors

We wanted to prevent the agents from colliding. This is done in two distinct steps. The first one takes place in the mind of the marine: it corresponds to the prediction of the collisions it could cause. The second one is managed by the environment, which disallows impossible moves in case an agent was not able to avoid an object, in a crowded environment for example. We implemented the first step with two repulsion behaviors, named collision avoidance behavior and obstacle avoidance behavior. The first one avoids the other agents, and the second one avoids the environment objects. They are split because we used two distinct techniques. To avoid another agent we estimate when the collision will probably occur by predicting its move, and we can then act to avoid it: we solved a linear equation giving the amount of time before the collision if every agent continues at the same speed in the same direction. For the environment objects, we use the fact that they are axis-aligned bounding boxes to "reflect" our linear speed direction on them and stick to the box, given a certain minimum shift.
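The agent-agent prediction can be sketched as follows. This report mentions solving an equation for the time before the collision; the sketch below uses the standard closest-approach formulation under the same constant-speed assumption, with illustrative names.

```java
// Sketch of the collision prediction between two agents, assuming both keep
// their current linear speed.
public final class CollisionPredictionSketch {
    private CollisionPredictionSketch() {}

    /**
     * @return the time of closest approach between the two bodies, or -1 if
     *         their relative speed is zero or they are already moving apart.
     */
    public static double timeOfClosestApproach(double px1, double py1, double vx1, double vy1,
                                               double px2, double py2, double vx2, double vy2) {
        double dpx = px2 - px1, dpy = py2 - py1; // relative position
        double dvx = vx2 - vx1, dvy = vy2 - vy1; // relative speed
        double dv2 = dvx * dvx + dvy * dvy;
        if (dv2 == 0) {
            return -1; // identical speed vectors: the distance never changes
        }
        double t = -(dpx * dvx + dpy * dvy) / dv2;
        return t > 0 ? t : -1;
    }

    /** True when the two circles actually overlap at time t. */
    public static boolean collidesAt(double t, double radiusSum,
                                     double px1, double py1, double vx1, double vy1,
                                     double px2, double py2, double vx2, double vy2) {
        double dx = (px2 + vx2 * t) - (px1 + vx1 * t);
        double dy = (py2 + vy2 * t) - (py1 + vy1 * t);
        return dx * dx + dy * dy < radiusSum * radiusSum;
    }
}
```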


Collision management

The collision management is done by the environment model. In the "applyInfluences" method, the environment calculates the next position of each agent and moves it to this position. When the environment applies the influences provided by the agents, it must stop them when they collide with a wall or another agent. We separate the process into two parts: collisions with static objects like walls, and collisions with dynamic objects (other agents).

Collision between two agents

The environment calculates the next supposed position of the agent body. For each other agent in the world model, we check whether the two are in collision. Agents' bodies are represented by circles in the simulation, so the collision test is a circle-circle intersection calculation. If two agents would be in collision at the next simulation step, we must update the speed vector in order to avoid the intersection between the bodies. First, we compute the vector between the two bodies. Then, we rotate this vector by 90 degrees towards the direction of the body's linear speed. This new vector gives us the direction of the future linear speed of the agent.

(In black the linear speed vectors)

If we scale this new vector to the value of the linear speed, we obtain the new linear speed vector.
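A sketch of this resolution step is given below: the body-to-body vector is rotated by 90 degrees towards the side of the current linear speed, then scaled to the current speed value. The names and the way the rotation side is chosen are assumptions.

```java
// Sketch of the agent-agent collision resolution.
public final class AgentCollisionSketch {
    private AgentCollisionSketch() {}

    /** Returns the new linear speed vector {vx, vy} of the first agent. */
    public static double[] slideAround(double myX, double myY, double otherX, double otherY,
                                       double vx, double vy) {
        double toOtherX = otherX - myX;
        double toOtherY = otherY - myY;

        // The two possible 90-degree rotations of the body-to-body vector.
        double leftX = -toOtherY, leftY = toOtherX;
        double rightX = toOtherY, rightY = -toOtherX;

        // Keep the rotation that points in the same general direction as the
        // current linear speed (larger dot product).
        double chosenX, chosenY;
        if (leftX * vx + leftY * vy >= rightX * vx + rightY * vy) {
            chosenX = leftX; chosenY = leftY;
        } else {
            chosenX = rightX; chosenY = rightY;
        }

        // Scale the chosen direction to the current speed value.
        double speed = Math.hypot(vx, vy);
        double norm = Math.hypot(chosenX, chosenY);
        if (norm == 0 || speed == 0) {
            return new double[] { 0, 0 };
        }
        return new double[] { chosenX / norm * speed, chosenY / norm * speed };
    }
}
```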

Collision between the agent and a static object

If the next calculated position of the agent intersects a static object, we must stop the agent. The agents' bodies are represented by circles and the static objects of the environment are simplified as their axis-aligned bounding boxes. To simplify the intersection test, we assume that the bodies are square objects.


Here, if an agent is in collision with an object (a wall for instance), it must slide along this object. First, we test the position of the body around the object box using the vector between the centre of the box and the centre of the body. We divided the space around the box into four regions: top left, top right, bottom left and bottom right. The box-to-body vector falls into one of these regions depending on the body's position. Finally, using the vector angle, we can find the box's border along which the body will slide. Knowing this border, we suppress the appropriate component of the linear speed vector.

If the body is at the left (-pi/4 < alpha < pi/4) or at the right (alpha < -3pi/4 or alpha > 3pi/4) of the box, the x-component of the linear speed vector is suppressed; otherwise the y-component is suppressed.
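Here is a sketch of this sliding rule, using the angle ranges given above. The class and method names are illustrative.

```java
// Sketch of the sliding rule along an axis-aligned box: depending on the angle
// of the box-to-body vector, either the x or the y component of the linear
// speed is suppressed.
public final class WallSlideSketch {
    private WallSlideSketch() {}

    /** Returns the corrected linear speed vector {vx, vy}. */
    public static double[] slideAlongBox(double boxCenterX, double boxCenterY,
                                         double bodyX, double bodyY,
                                         double vx, double vy) {
        // Angle of the vector going from the box centre to the body centre.
        double alpha = Math.atan2(bodyY - boxCenterY, bodyX - boxCenterX);

        boolean nearVerticalBorder =
                (alpha > -Math.PI / 4 && alpha < Math.PI / 4)
                || alpha < -3 * Math.PI / 4 || alpha > 3 * Math.PI / 4;

        if (nearVerticalBorder) {
            // Body is on the left or right side: suppress the x component.
            return new double[] { 0, vy };
        }
        // Body is above or below: suppress the y component.
        return new double[] { vx, 0 };
    }
}
```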

Traversable objects

Some static objects of the environment are traversable. These objects can be forests, swamps, etc. If agents' bodies collide with this kind of object, the environment does not stop them; it decreases their maximum linear speed value according to the object's slowing rate in order to slow the bodies down.


Encountered problems

Implementing steering repulsion algorithms

Implementing a steering repulsion algorithm is not an easy task at all. The major problem is that it is very hard to stop an object from moving, because it depends on its speed, and possibly its weight... When using steering algorithms, the only behavior output values we are able to use are acceleration values. However, this has a huge benefit: the moves become a lot more realistic! Nobody can stop instantly, and that is exactly what steering algorithms reproduce. In our precise case, the marine formation, the big issue is that the agents often evolve in a crowded environment. They are often right next to each other, so we had a lot of trouble finding an accurate balance between evading the others and staying in formation...

Finding an accurate way to mix behaviors

This is related to the preceding problem. Behavior outputs are complicated to mix properly because it is all a matter of coefficients. We were not able to find a satisfying mixture for our repulsion algorithms. That is why the collision management done by the environment is indispensable.

Benchmark

Performance overview

The simulation runs in real time when we use fewer than twenty marines. With more than twenty marines the simulation can still run in real time if we only move groups containing fewer than 10 marines. There is a critical action that directly affects the performance when there are many marines: the selection. When a lot of marines are moving, selecting other marines makes the frame rate decrease quickly because our selection algorithm has a complexity of O(n).

Real time limit

Without optimization structures, real-time execution is really limited and can only be used for small simulations.


Further optimization

In order to limit slowdowns and keep a good frame rate, we could add optimization structures to our application to limit the calculations.

- We could create a Binary Space Partition tree which would contain our static objects (cliffs, woods and swamps). It would limit the collision avoidance computation time done by the environment, the computation time of the agent perception also done by the environment, and the picking. Most of the O(n²) complexity could be reduced to O(n) or O(log n).

- We could also create a quad tree with the Icosep heuristic which would contain all the marines' positions; it would improve the computation time of the marines' perception and the selection of groups.


User guide

Installing the Special Delta Force Defense Simulation

To install the software on your computer you must have the Java virtual machine installed; otherwise download Java (http://www.java.com/en/download/manual.jsp) and install it. If you are using a Windows operating system, you just have to double-click on the "SpecialDeltaForceDefense.jar" jar file. If you are using a Linux operating system, you must open a command line window and type the command: java -jar "SpecialDeltaForceDefense.jar". Make sure that the vecmath library jar is in the same folder as the application one.

The application is composed of two parts:

- The main part is the simulation environment, which contains agents and static objects.
- The menu part is situated beneath the simulation environment part.

There are two modes available:

- The edition mode, which permits removing all elements or adding agents and static objects to the simulation environment.
- The selection mode, which permits simulating a formation movement of the selected agents.

Each static environment object is represented by a rectangle filled with a specific color and has specific effects on agents:


How to create an environment?

Adding static environment objects

In this part we explain how to add static environment objects to the simulation. When you are in the edition mode you should see the menu displayed below.

You can select the type of static object to add to the environment through the list box. Then you just need to click and drag in the environment; when the rectangle is correctly placed, release the mouse button and the object is added to the environment.

Remark: if you try to add a cliff on an agent, the cliff will not be added to the environment.

Adding agents to the simulation

You can add new agents to the simulation: you just have to click on the environment.

Remark: you can't add an agent over a cliff.

Removing objects from the simulation

You can erase all elements by clicking on the reset button of the edition menu.

How to simulate a formation movement?

In order to simulate a formation movement you need to quit the edition mode by clicking on the "Go to selection mode" button. You should see the menu shown below.

You can choose a formation type through the list box. You need to select agents in the same way as in RTS games, similar to how static objects are added: click on the environment, keep the button pressed, and then drag the mouse; a rectangle will appear. When the rectangle covers the agents you want to select, release the mouse button. The agents are now selected and are displayed in bold.

When agents are selected, you just have to click somewhere (not inside a cliff) to move them. They will move to the clicked target in the selected formation.


Conclusion

Tackled elements

In conclusion, this project allowed us to discover how to coordinate the movements of agents in a simulation environment and to implement it from scratch. We learned a lot about the structure of a well-designed simulation platform. We also studied agent behaviors, the computation of an approximate best path from one point to another with the A* algorithm and its associated heuristics such as the Manhattan and Euclidean distances, and especially formation structures and the associated behaviors to coordinate movements.

Performances

With this project we now understand the importance of optimization structures, because we could see that even a small platform which simulates the coordination of agents' movements is really limited without them.

Critical analysis

Our Marines formation simulator follows the principles of virtual life simulations and illustrates the content of the VI51 course pretty well. As mentioned in the subject, it contains an environment model which applies several agent behaviors in order to move the Marines in formation. We implemented five types of formations which can be chosen by the user. The user can define environment objects such as cliffs or forests and send the formation to a target point. All these facts allow us to say that the major part of the subject is accomplished. Yet, when we run the application, we can see that only the leader of the formation is really "clever"; the other Marines only follow him and often get stuck behind a wall. On this point, our application does not really simulate real human behavior. It could be improved by giving the followers a path finding behavior, for example, but it would also cost performance, so we had to make compromises. Thanks for reading.
