These past two days I’ve been hard at work on the logic for the agent.

As I mentioned earlier, there was a need to redo the pathing of the agent. This was the first thing I finished. After that I threw together a quick goal-driven logic system to implement the easiest action, namely hunting the player. It might sound like a big thing, but when you think about it, it’s all about following the target for as long as the agent can see it. If it can’t, some other behavior can take over.

It was quite easy to get functional, and by implementing this basic system I got a better idea of how to design the whole creature system. It consists of three major parts: sensory systems, a working memory, and logic. The sensory systems feed the working memory with MemoryFacts about the world. For this project only the vision system is implemented, so raw data about what has been seen is used to create a VisualMemoryFact, which saves away the entity seen, the position it was observed at and the velocity it was traveling at. It’s easy to extend with new types of memory facts for, say, sounds or smells. When a memory fact is added to the working memory, it is compared to the facts already there to detect whether the new fact is just an updated version of an existing memory. I’ve also added what I call StaticFacts; they’re used for things the agent should always have knowledge about, such as the POM and the entity it’s hunting.
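
In rough C++ terms, the working memory part could look something like the sketch below. The names and fields are only illustrative, not the actual classes.

#include <vector>

struct float3 { float x, y, z; };

// Base type for everything the sensory systems can put into the working memory.
struct MemoryFact
{
    unsigned int entityId   = 0;     // which entity this fact concerns
    double       timestamp  = 0.0;   // when the fact was created or last updated
    float        confidence = 1.0f;  // how much the agent trusts the fact
};

// Produced by the vision system: what was seen, where, and how it moved.
struct VisualMemoryFact : MemoryFact
{
    float3 observedPosition {};
    float3 observedVelocity {};
};

class WorkingMemory
{
public:
    // Adds a fact, or updates an existing fact about the same entity so the
    // memory never stores two copies of the same observation.
    void AddVisualFact(const VisualMemoryFact& fact)
    {
        for (auto& existing : this->visualFacts)
        {
            if (existing.entityId == fact.entityId)
            {
                existing = fact;   // the newer observation replaces the old one
                return;
            }
        }
        this->visualFacts.push_back(fact);
    }

private:
    std::vector<VisualMemoryFact> visualFacts;
};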

The logic system then acts upon these memory facts. In a perfect world, no queries for new data, say the entity’s current position, would be made from the logic system. In reality, however, I have to do it on some occasions, as the visual data is gathered on the render thread and there’s a bit of delay between memory updates.

To get it all into Nebula I use a property to act as the “brain” and glue it all together. It contains the working memory for the agent and takes care of translating raw sensor data into specific memory facts.

Logic and refactoring

This week the goal is to finish the logic system for the agent. I had done some basic groundwork where the agent would use an FSM made up of Lua scripts. However, due to some limitations in how Nebula handles scripts (for example, there’s no return value from a script function), I had to redo it in a different way.

As everything has to be done in code now, I also decided to change the whole pattern used, since the choice of an FSM was due to using scripts and the whole system would have been easier to design that way. So now I’m using goal-driven behavior, or Goal Oriented Action Planning (something I researched during the initial phase of the project). The gist of it is to use abstract goals. These goals can consist of smaller subgoals, which in turn can have subgoals of their own. This chain keeps going until a goal small enough to be performed as a concrete action is reached, like GoToNextPathPoint or PlayAnimation. I got the basics up today, but with it I realized I now have to redo my path planning, because at the moment it’s pretty much impossible for the logic to tell the agent to follow a new path, since the path planning is so well contained inside its own entity property.
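
As a minimal sketch (illustrative names, not the actual implementation), the goal/subgoal structure could be expressed like this:

#include <deque>
#include <memory>

enum class GoalStatus { Active, Completed, Failed };

struct Goal
{
    virtual ~Goal() = default;
    virtual void Activate() = 0;          // set the goal up
    virtual GoalStatus Process() = 0;     // called every tick while the goal is active
};

// A goal made up of smaller subgoals, processed front to back. Leaf goals map
// directly to concrete actions such as GoToNextPathPoint or PlayAnimation.
struct CompositeGoal : Goal
{
    std::deque<std::unique_ptr<Goal>> subgoals;

    void Activate() override
    {
        if (!this->subgoals.empty()) this->subgoals.front()->Activate();
    }

    GoalStatus Process() override
    {
        while (!this->subgoals.empty())
        {
            GoalStatus status = this->subgoals.front()->Process();
            if (status != GoalStatus::Completed)
                return status;                       // still working, or failed

            this->subgoals.pop_front();              // subgoal done, move to the next
            if (!this->subgoals.empty()) this->subgoals.front()->Activate();
        }
        return GoalStatus::Completed;                // nothing left to do
    }
};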

3D Probability Map

These past days I’ve been hard at work on how the probabilistic occupancy map will be checked against the synthetic vision. As it turned out, my initial design was unusable.

My first thought was to render the map at different heights. This is so that the agent won’t try to look beneath a box, as the node at that level will be flagged as disabled since it is inside another object. The node on the next level up is then considered the lowest one and the one used for logic.

The important thing to note is the word “render”. This was cumbersome to get working, as I wanted only some objects, the nodes, to be rendered by a specific view. At first I tried to toggle the visibility of the objects so they would only be visible when the correct view was processed. This didn’t work, and I think I know why now: I was toggling the visibility of the objects but not the mesh nodes the render system used.

When this proved fruitless, my second thought was to put all these invisible entities in another stage, but the more I looked into it the more discouraged I got, as it would involve a third render pass.

Then it hit me: I just needed to make another shader material that only the vision shader could (and would) render. With this I finally got something on the screen, but to my dismay it was painfully slow even with instanced rendering, something I had hoped would make this a feasible solution.

So I looked back into the research papers and found the test-point method. Basically, it translates a world point into NDC space and checks the depth of the point against the depth buffer to determine if it is visible. This is fast enough to be used in real time, and it’s very easy to add my imaginary map planes so the agent can realize when the lowest node can’t be seen.
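
In essence the check is just a projection and a depth compare, roughly like below. The math types are placeholders and the depth-range convention depends on the graphics API, so treat it as a sketch.

struct float4   { float x, y, z, w; };
struct float4x4 { float m[4][4]; };   // row-major storage, row-vector convention assumed

float4 Transform(const float4& v, const float4x4& mat)
{
    return {
        v.x*mat.m[0][0] + v.y*mat.m[1][0] + v.z*mat.m[2][0] + v.w*mat.m[3][0],
        v.x*mat.m[0][1] + v.y*mat.m[1][1] + v.z*mat.m[2][1] + v.w*mat.m[3][1],
        v.x*mat.m[0][2] + v.y*mat.m[1][2] + v.z*mat.m[2][2] + v.w*mat.m[3][2],
        v.x*mat.m[0][3] + v.y*mat.m[1][3] + v.z*mat.m[2][3] + v.w*mat.m[3][3]
    };
}

// Returns true if the world-space point is inside the view and not hidden
// behind anything closer in the depth buffer.
bool IsPointVisible(const float4& worldPos,        // w must be 1
                    const float4x4& viewProj,      // agent's view * projection
                    const float* depthBuffer,      // width*height depth values in [0, 1]
                    int width, int height,
                    float depthBias = 0.001f)
{
    float4 clip = Transform(worldPos, viewProj);
    if (clip.w <= 0.0f) return false;              // behind the camera

    // Perspective divide -> normalized device coordinates.
    float ndcX = clip.x / clip.w;
    float ndcY = clip.y / clip.w;
    float ndcZ = clip.z / clip.w;                  // assumes a D3D-style [0, 1] depth range
    if (ndcX < -1.0f || ndcX > 1.0f || ndcY < -1.0f || ndcY > 1.0f)
        return false;                              // outside the view frustum

    // NDC -> pixel coordinates.
    int px = (int)((ndcX * 0.5f + 0.5f) * (width  - 1));
    int py = (int)((1.0f - (ndcY * 0.5f + 0.5f)) * (height - 1));

    // Visible if nothing closer already occupies that pixel.
    return ndcZ <= depthBuffer[py * width + px] + depthBias;
}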

So all in all, I spent two days confirming that the culling present in the rendering pipeline isn’t suitable for visibility checks against such a dense grid, and learned that I should read more carefully.

3D vision

Today I started implementing the actual vision system for the agent. As I mentioned earlier, I will render the scene from the agent’s point of view and scan over a downsampled buffer to detect what has been seen.

Nebula has an easily extendable render system. There are some parts needed to get a new add-on working, as the rendering occurs on another thread, but it’s quite straightforward and there are a lot of finished modules to look at and learn from. I was able to finish the skeleton for the module today, so there’s a lot of time left to actually implement its functionality. And one thing I discovered is that there’s a shader available for rendering objects color-coded, so I don’t need to write my own to do that!

I didn’t get going with the vision add-on until the afternoon, though, as I wanted more information about the render system and had to wait for answers from the people working on the engine. So during the morning I started implementing the logic system. The idea for now is to have a finite state machine where each state is a Lua script, so it’ll be easy to change and test during development, as scripts can be reloaded without recompiling the application. I didn’t fully finish the groundwork, but I got the most basic components set up, so it will be quick to finish when the time comes.

Milestone 2


I haven’t done too much today, as it was milestone two and the whole afternoon was spent listening to all the presentations as well as giving my own.

I did get some valuable feedback on my project, not so much on what I’ve done but on what I will do. The goal now is to create the real sight sensory system for the agent, one that functions in full 3D and not just on the XZ-plane as it currently does. The reason for fully 3D vision is so the agent can decide whether it sees the target or not and also look beneath objects.

The plan is to use synthetic vision, in which one renders a crude representation of the scene from the agent’s point of view with objects color-coded. The problem is that one then needs to scan over the rendered scene to be able to tell what has been seen. The tip I got was to downsample the rendered image without blending the colors. This results in a much smaller buffer to scan through, at the loss of precision in the objects’ spatial locations. But as I’m only interested in what I’ve seen and not where, and I can easily obtain that information in other ways, I can get away with this. Synthetic vision will also be a great way to show off the agent’s inner workings.
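
As a CPU-side sketch of the idea (in practice the downsampling would be done with point sampling on the GPU; the buffer layout and ID encoding here are assumptions), the downsample-and-scan could look like this:

#include <cstdint>
#include <set>
#include <vector>

// Take every Nth pixel instead of averaging, so the color-coded IDs stay exact
// and no blended "in between" colors appear.
std::vector<uint32_t> Downsample(const std::vector<uint32_t>& src,
                                 int srcWidth, int srcHeight, int factor)
{
    const int dstWidth  = srcWidth  / factor;
    const int dstHeight = srcHeight / factor;
    std::vector<uint32_t> dst(dstWidth * dstHeight);
    for (int y = 0; y < dstHeight; ++y)
        for (int x = 0; x < dstWidth; ++x)
            dst[y * dstWidth + x] = src[(y * factor) * srcWidth + (x * factor)];
    return dst;
}

// Scan the small buffer and collect every distinct object ID that was seen.
std::set<uint32_t> GatherSeenObjects(const std::vector<uint32_t>& buffer)
{
    std::set<uint32_t> seen;
    for (uint32_t id : buffer)
        if (id != 0)          // 0 = background, nothing rendered there
            seen.insert(id);
    return seen;
}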

To get the agent to look beneath objects I plan on rendering the POM at several heights, so it will become a 3D grid visually but stay 2D logically. Each node will contain flags for which levels it is visible at, so a node can be inside an object at one level but still remain visible and active at the other levels.
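
A single node could then look something like the sketch below (the number of levels and the exact flag layout are just assumptions for illustration):

#include <cstdint>

struct PomNode
{
    float   probability  = 0.0f;  // probability that the target is at this node
    uint8_t activeLevels = 0;     // bit i set -> the node is not inside an object at level i
    uint8_t seenLevels   = 0;     // bit i set -> level i has been seen this sweep

    bool IsActiveAt(int level) const { return ((this->activeLevels >> level) & 1) != 0; }
    void MarkSeen(int level)         { this->seenLevels |= uint8_t(1u << level); }

    // The node counts as fully observed once every active level has been seen.
    bool FullyObserved() const
    {
        return (this->seenLevels & this->activeLevels) == this->activeLevels;
    }
};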


I got Recast to behave as I wanted and can now generate a grid structure inside the contour of a nav-mesh. This gives a denser grid more suitable to be used as the occupancy grid while still maintaining the actual shape of the open areas.


The nav-mesh generated by Recast


The grid inside the nav-mesh.

It’s a bit simpler than I initially thought, mostly because the result I got yesterday was due to me not doing the right thing. With a clear head this morning I was able to get it functional. Recast generates a compact heightfield describing the open space, which can be seen as a 2.5D image. Regions can then be painted onto it, and in this case it’s only a matter of painting a chessboard. Due to how I paint the grid, I need to go over the mesh twice. The first time I give all walkable areas a region ID. The size of these regions can vary, but they are always contained within a grid cell, so a cell can contain one or several regions but a region can’t span two cells. The second time I merge these smaller regions so they form the whole cell.
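
Condensed into a single pass just to show the idea (the real version paints and then merges in two passes as described above), painting grid-aligned region IDs onto the compact heightfield looks roughly like this:

#include "Recast.h"

// Assigns every walkable span the region ID of the occupancy-grid cell that its
// (x, y) column falls into. cellSpan is the number of heightfield columns per
// grid cell. A real implementation would also update chf.maxRegions.
void PaintGridRegions(rcCompactHeightfield& chf, int cellSpan)
{
    const int gridCellsX = (chf.width + cellSpan - 1) / cellSpan;

    for (int y = 0; y < chf.height; ++y)
    {
        for (int x = 0; x < chf.width; ++x)
        {
            const rcCompactCell& c = chf.cells[x + y * chf.width];
            for (int i = (int)c.index, ni = (int)(c.index + c.count); i < ni; ++i)
            {
                if (chf.areas[i] == RC_NULL_AREA)
                    continue;                       // not walkable, leave unregioned

                // One region per grid cell; region 0 is reserved by Recast for "no region".
                chf.spans[i].reg =
                    (unsigned short)(1 + (x / cellSpan) + (y / cellSpan) * gridCellsX);
            }
        }
    }
}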

Occupancy Grid

Today I’ve mused over the probabilistic occupancy grid. While waiting to get access to Nebula I put some time into examining Detour, the path-finding library used in the engine. As the creator of Detour has also made the excellent nav-mesh generator Recast, it would be a shame not to use it. However, a nav-mesh has areas that are too large to really be usable for an occupancy grid, so I’ve looked into the possibility of generating a more grid-like structure.

This is indeed possible, but not that straightforward. So far I’ve managed to generate thin stripes over the different walkable areas. Tomorrow I’ll try to extend them to form an actual grid structure.

Agent design

This past week has been spent researching different alternatives for how to design the AI. I’ve divided the project into three major parts, namely: search behavior, senses, and logic.

For the search behavior I landed on using a Probabilistic Occupancy Map, or POM for short. After seeing the video available at AiGameDev, which pretty much showcases the exact behavior I want for my agent, the decision was quite easy. It doesn’t seem to have been used in a large game as of yet, at least I couldn’t find any verification of it. It has, however, been used in various research projects, and as the video shows it is feasible under the kind of conditions I have: a single agent searching for a single target.

For the senses, or sense, as vision is the only one planned for now, things haven’t been quite as straightforward. I’ve been torn between two different ways to do it. One I made up myself: use an octree to partition the open space and check it against the agent’s view cone/frustum to know which areas have been seen in 3D. The gain from doing this is that I can divide up the space according to the physics meshes, so a table will have open space beneath it. The other way is to use synthetic vision, where you render the world from the agent’s point of view with objects color-coded. It isn’t as straightforward to get information about which areas the agent has or hasn’t seen. For now I’ve decided to go with synthetic vision, as it doesn’t add another data structure such as the octree and I can easily get all seen objects. To get seen areas, a point can be translated from the view back to a 3D coordinate or a POM node. I’m still unsure how to get the agent to look beneath objects. The idea I have now is to have several points for each node at different heights, disable points that are inside objects, and have the agent try to see all the points for a node.
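
Translating a seen point into a POM node is then just a grid lookup, something like the sketch below (the origin and node-size fields are assumptions for illustration):

// Maps a world position onto a flat node index in the 2D (XZ) probability map.
struct PomGrid
{
    float originX = 0.0f, originZ = 0.0f;  // world position of node (0, 0)
    float nodeSize = 0.5f;                 // world units per node
    int   nodesX = 0, nodesZ = 0;

    // Returns the node index for a world position, or -1 if it is outside the map.
    int NodeAt(float worldX, float worldZ) const
    {
        const int nx = (int)((worldX - this->originX) / this->nodeSize);
        const int nz = (int)((worldZ - this->originZ) / this->nodeSize);
        if (nx < 0 || nz < 0 || nx >= this->nodesX || nz >= this->nodesZ)
            return -1;
        return nz * this->nodesX + nx;
    }
};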

Logic, which I first thought would need the most work, has turned out to be the simplest. As the agent only has one goal, to search for the player, no advanced goal planner is needed. And as searching for the player and hunting it are practically the same (go to the location with the highest probability), it simplifies things even further. To add a working memory to the agent, in its simplest form I only need to save away an object’s id, position and the current time.

So what I’ve discovered is that the hardest part is getting the senses to work with what the agent expects from the world (the probability map). Things are not as simple as I make them sound, but at least I have a plan.