I finally got my hands dirty with our own fork of Radon Labs' Nebula Device 3, called Nebula Trifid. For now I have a project skeleton set up, so the goal is to get the grid generation working in Nebula. Then it's off to create a graph for the occupancy map and start implementing the spreading.
I got Recast to behave as I wanted and can now generate a grid structure inside the contour of a nav-mesh. This gives a denser grid, more suitable for use as the occupancy grid, while still maintaining the actual shape of the open areas.
It’s a bit simpler than I initially thought, mostly because the result I got yesterday was due to me not doing the right thing. With a clear head this morning I was able to get it functional. Recast generates a compact heightfield describing the open space, which can be seen as a 2.5D image. Regions can then be painted onto it, and in this case it’s only a matter of painting a chessboard. Due to how I paint the grid, I need to go over the mesh twice. The first time, I give all walkable areas a region ID. The size of these regions can vary, but they are always contained within a grid cell, so a cell can contain one or several regions but a region can’t span two cells. The second time, I merge these smaller regions so they form the whole cell.
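The two-pass painting can be sketched in simplified 2D form. This is my own illustration, not Recast's actual API (which works on an `rcCompactHeightfield` with spans); here the heightfield is just a boolean walkability mask, and all names and the cell size are made up:

```python
WALKABLE = 1
CELL = 4  # side length (in heightfield columns) of one occupancy-grid cell

def paint_grid(walkable, width, height):
    """Return a region id per column: same id means same occupancy-grid cell."""
    # Pass 1: give every walkable column a provisional region id that is
    # guaranteed not to span two grid cells (here, trivially, one id per
    # column, which is always contained within a single cell).
    regions = [[0] * width for _ in range(height)]
    next_id = 1
    for y in range(height):
        for x in range(width):
            if walkable[y][x] == WALKABLE:
                regions[y][x] = next_id
                next_id += 1
    # Pass 2: merge all provisional regions that fall inside the same
    # grid cell into one region per cell.
    merged = {}
    for y in range(height):
        for x in range(width):
            if regions[y][x]:
                cell = (x // CELL, y // CELL)
                if cell not in merged:
                    merged[cell] = len(merged) + 1
                regions[y][x] = merged[cell]
    return regions
```

The chessboard pattern falls out of the cell lookup: every walkable column lands in exactly one `CELL`-sized square, and unwalkable columns keep region 0, so the grid follows the contour of the open space.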
Today I’ve mused over the probabilistic occupancy grid. While waiting to get access to Nebula, I put some time into examining Detour, the path-finding library used in the engine. As the creator of Detour has also made the excellent nav-mesh generator Recast, it would be a shame not to use it. However, the areas of a nav-mesh are too large to really be usable for an occupancy grid, so I’ve looked into the possibility of generating a more grid-like structure.
This is indeed possible, but not that straightforward. So far I’ve managed to generate thin stripes over the different walkable areas. Tomorrow I’ll try to extend them to form an actual grid structure.
This past week has been spent researching different alternatives for how to design the AI. I’ve divided the project into three major parts, namely: search behavior, senses, and logic.
For the search behavior I landed on using a Probabilistic Occupancy Map, or POM for short. After seeing the video available at AiGameDev, which pretty much showcases the exact behavior I want for my agent, the decision was quite easy. It doesn’t seem to have been used in a large game as of yet, at least I couldn’t find any confirmation of it. It has, however, been used in various research projects, and as the video shows it is feasible under the conditions I have: a single agent searching for a single target.
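The core POM idea can be sketched in a few lines. This is a minimal version of the standard technique, not taken from the video: probability mass for "where the target might be" leaks to neighbouring nodes over time, and observing a node without seeing the target clears it and renormalizes. The diffusion constant and all names are my own choices:

```python
DIFFUSE = 0.2  # fraction of a node's probability that leaks to neighbours per step

def spread(prob, neighbours):
    """One diffusion step over the occupancy graph (node -> probability)."""
    new = {n: p * (1.0 - DIFFUSE) for n, p in prob.items()}
    for n, p in prob.items():
        if not neighbours[n]:
            new[n] += p * DIFFUSE  # nowhere to leak: keep the mass
            continue
        share = p * DIFFUSE / len(neighbours[n])
        for m in neighbours[n]:
            new[m] += share
    return new

def observe_empty(prob, node):
    """The agent looked at `node` and did not see the target: clear it
    and renormalize so the map stays a probability distribution."""
    prob = dict(prob)
    prob[node] = 0.0
    total = sum(prob.values())
    return {n: p / total for n, p in prob.items()} if total > 0 else prob
```

Searching and hunting then reduce to the same rule: move toward the node with the highest probability.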
For the senses, or sense, as vision is the only one planned for now, things haven’t been quite as straightforward. I’ve been torn between two different approaches. One I made up myself: use an octree to partition the open space and check it against the agent’s view cone/frustum to know which areas have been seen in 3D. The gain is that I can divide up the space according to physics meshes, so a table will have open space beneath it. The other is synthetic vision, where you render the world from the agent’s point of view with objects color-coded. There it isn’t as straightforward to get information about which areas the agent has or hasn’t seen. For now I’ve decided to go with synthetic vision, as it doesn’t add another data structure such as the octree and I can easily get all seen objects. To get seen areas, a point can be translated from the view to a 3D coordinate or a POM node. I’m still unsure how to get the agent to look beneath objects. The idea I have now is to have several points for each node at different heights, disabling points that are inside objects, and have the agent try to see all points for a node.
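The several-points-per-node idea above can be sketched as follows. `point_inside_object` and `point_visible` are hypothetical stand-ins for whatever physics and synthetic-vision queries the engine ends up providing, and the sample heights are arbitrary:

```python
HEIGHTS = (0.2, 0.9, 1.6)  # sample heights above the node, chosen arbitrarily

class NodeVisibility:
    """Tracks whether a single POM node has been fully seen."""

    def __init__(self, base, point_inside_object):
        # Build sample points at several heights, disabling any that start
        # inside geometry (e.g. inside a table top) so they never need to
        # be seen.
        self.points = [(base[0], base[1] + h, base[2]) for h in HEIGHTS
                       if not point_inside_object((base[0], base[1] + h, base[2]))]
        self.seen = set()

    def update(self, point_visible):
        """Mark points currently visible to the agent; the node counts as
        searched once every enabled point has been seen at some time."""
        for p in self.points:
            if point_visible(p):
                self.seen.add(p)
        return len(self.seen) == len(self.points)
```

A node under a table would keep only its low sample points enabled, so the agent has to actually get a line of sight beneath the table before the node counts as searched.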
Logic, which I first thought would need the most work, has turned out to be the simplest. As the agent only has one goal, to search for the player, no advanced goal planner is needed. And as searching for the player and hunting it are practically the same thing (go to the location with the highest probability), that simplifies things even further. To give the agent a working memory, it is in its simplest form only necessary to store an object’s ID, its position, and the current time.
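That simplest-form working memory amounts to very little code. This is an illustrative sketch with made-up names, not anything from Nebula:

```python
import time

class WorkingMemory:
    """Per-object memory: last known position and when it was seen."""

    def __init__(self):
        self.facts = {}  # object id -> (position, timestamp)

    def remember(self, obj_id, position, now=None):
        # `now` can be injected for testing; defaults to the wall clock.
        self.facts[obj_id] = (position, time.time() if now is None else now)

    def last_seen(self, obj_id):
        """Return (position, timestamp), or None if the object is unknown."""
        return self.facts.get(obj_id)
```

With this, "hunt the player" just means heading for `last_seen("player")` while the observation is fresh, and falling back to the probability map when it goes stale.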
So what I’ve discovered is that the hardest part is getting the senses to work with what the agent expects from the world (the probability map). Things are not as simple as I make them sound, but at least I have a plan.
I see you found your way.
This blog is for my specialization project for my studies at Luleå University of Technology. The area of interest I’ve chosen is artificial intelligence, in particular aimed for stealth-based games.
As the title suggests, the main focus will be how to implement a realistic search behavior in 3D space for an AI. This way you can hide beneath tables and other objects and have the AI actually consider these locations instead of just looking behind them.
If you want to know more about me, go to the “About Me” page, or simply click here.
For more information about the project, the Project page should suffice.