Goal chaining, and optimizing

This week I’ve implemented functions for pre-generating a flow field. It is very slow, but it works. I also added some extra features to the map loading, so that you can mark out specific chokepoints the agents should know about. That eases the pressure on the A* calculations.
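The post doesn’t show how the flow field is built, but a common way to pre-generate one is a breadth-first flood fill from the goal, after which every walkable cell points at its neighbour closest to the goal. A minimal sketch under that assumption (the grid layout and function name are illustrative, not the project’s actual code):

```python
from collections import deque

def generate_flow_field(grid, goal):
    """Pre-generate a flow field: BFS flood fill from the goal,
    then each walkable cell points toward its lowest-distance neighbour.
    grid: 2D list, 0 = walkable, 1 = wall.  goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    while queue:
        r, c = queue.popleft()
        for dr, dc in neighbours:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] == INF):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    # Each cell's flow vector is the step toward its best neighbour.
    field = [[(0, 0)] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if dist[r][c] in (INF, 0):
                continue
            best = min(
                ((dr, dc) for dr, dc in neighbours
                 if 0 <= r + dr < rows and 0 <= c + dc < cols),
                key=lambda d: dist[r + d[0]][c + d[1]],
            )
            field[r][c] = best
    return field
```

Agents then just look up their cell’s vector each frame instead of running a full path search, which is exactly why the (slow) generation can happen up front.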

Simon and I also managed to remove unnecessary data transfers between GRAM and RAM by having OpenGL and OpenCL share the memory where the agent orientations are updated and stored. Gustav helped us set up the kernel so that the memory could be shared. So now we no longer need to update the orientations with OpenCL, write the data to RAM, save it to a texture, and then write that back to GRAM again. That will save us some valuable time. However, a bug appeared along the way: agents now face the wrong direction in most cases.
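The post doesn’t say what causes the facing bug, but when orientations come out wrong after changing how the data flows, a classic culprit is the `atan2` argument order or a mismatch in axis conventions. A small illustrative sketch (not the project’s actual code) of deriving a heading from a velocity vector:

```python
import math

def orientation_from_velocity(vx, vy):
    """Heading in radians, counter-clockwise from the +x axis.
    Note the argument order: atan2 takes (y, x).  Swapping the
    arguments, or mixing up y-up vs. y-down world coordinates,
    makes every agent face the wrong way."""
    return math.atan2(vy, vx)
```

If the shared buffer itself is intact, checking a few agents’ headings against this kind of reference computation is a quick way to tell a data-transfer bug apart from a plain math bug.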

You can also assign several goals to an agent, which it will pursue in the given order. Goal chaining cannot yet be done through real-time interaction, but adding that should not be much of a hassle since the structure and logic are already finished; all that is missing is a user interface.
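The simplest structure for goal chaining is a queue that advances when the current goal is reached. A minimal sketch of the idea (class and field names are illustrative, not the project’s actual ones):

```python
from collections import deque

class Agent:
    """Goal chaining sketch: goals are consumed in the given order,
    and the next one becomes active once the current one is reached."""

    def __init__(self, position, goals):
        self.position = position
        self.goals = deque(goals)

    @property
    def current_goal(self):
        return self.goals[0] if self.goals else None

    def update(self, reach_radius=0.5):
        goal = self.current_goal
        if goal is None:
            return
        dx = goal[0] - self.position[0]
        dy = goal[1] - self.position[1]
        if dx * dx + dy * dy <= reach_radius * reach_radius:
            self.goals.popleft()  # goal reached: advance to the next one
```

With this layout, a real-time UI only needs to append to (or reorder) the queue; the follow-up logic stays untouched, which matches why the remaining work is just the interface.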

A heartbeat functionality has been written which can be used to update certain states at set time intervals instead of every frame – an optimization.
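A typical way to implement such a heartbeat is to accumulate frame delta-times and fire whenever the interval is passed. A minimal sketch, assuming that approach (names are illustrative):

```python
class Heartbeat:
    """Fires a callback at a fixed interval instead of every frame.
    Delta-times are accumulated, so a slow frame can trigger
    multiple beats and no simulation time is lost."""

    def __init__(self, interval, callback):
        self.interval = interval
        self.callback = callback
        self._accum = 0.0

    def tick(self, dt):
        self._accum += dt
        while self._accum >= self.interval:
            self._accum -= self.interval
            self.callback()
```

Expensive per-agent state (like re-evaluating goals or group membership) can then run at, say, 2 Hz while rendering and movement stay per-frame.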

Also, when a group needs to calculate a path, only the agent with the most leadership within the group will do it and share the result with the rest of the group members.
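The leader-computes-and-shares idea can be sketched in a few lines; here `find_path` stands in for the expensive A*/flow-field query, and the agent/group representation is purely illustrative:

```python
def group_path(group, find_path, goal):
    """Only the agent with the highest leadership computes the path;
    every other group member reuses the same result."""
    leader = max(group, key=lambda a: a["leadership"])
    path = find_path(leader["position"], goal)
    for agent in group:
        agent["path"] = path  # shared reference, not recomputed per agent
    return leader, path
```

For a group of N agents this turns N path searches into one, at the cost of followers steering along a path computed from the leader’s position rather than their own.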

We’re closing in on the deadline and only a few more things remain. Hopefully those can be addressed and implemented or fixed tomorrow.

// Johan Forsell
