Crashes and assertions

Quite an eventful day today, but not in a good way. I made some progress in the early morning: I finished the POM node culling and added confirmation of the target, so the agent now chases you around. There I found a problem: if the agent gets too close to you, the most probable node will be outside the view frustum due to it being so close to the ground, so the agent would get stuck in place. I have a solution for this: simply elevate all points to the height of the camera when I do the visibility culling, and then use the actual position when it's time to figure out whether a point is beneath something.
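
As a rough sketch of that fix (with simple value types, and with the engine's actual frustum test passed in as a callable since that code lives elsewhere; the names here are illustrative):

```cpp
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

// Cull POM nodes against the view frustum with each node lifted to
// the camera's height, so near-ground nodes aren't rejected just for
// sitting below the frustum. The real position is what gets kept for
// the later "is this point beneath something" check.
std::vector<Vec3> CullNodes(const std::vector<Vec3>& nodes,
                            float cameraHeight,
                            const std::function<bool(const Vec3&)>& inFrustum)
{
    std::vector<Vec3> visible;
    for (const Vec3& n : nodes)
    {
        Vec3 lifted = { n.x, cameraHeight, n.z };  // test at camera height
        if (inFrustum(lifted))
            visible.push_back(n);                  // keep the actual position
    }
    return visible;
}
```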

Before I could start with that, Nebula decided it was time to assert. It asserted in a really strange place during the render loop. The strange thing is that I didn't touch any code related to the rendering, as far as I know. While trying to fix it I re-exported the game data, and then, as I found out later, the skybox stopped working and started crashing the application. So most of the afternoon was spent getting things to work again. I found the cause of the crash, the skybox, and was able to fix it with some help. And the assertion disappeared as suddenly as it began, so everything works now, but I really don't like it at all. As I don't know what caused the assertion to fail or what made it succeed again, I'm a bit anxious that I'll do something that will cause it to fail again.

3D Probability Map

These past days I've been hard at work on how the probabilistic occupancy map will be checked against the synthetic vision. As it turned out, my initial design was unusable.

My first thought was to render the map at different heights. This is so that the agent won't try to look beneath a box, as the node at that level will be flagged as disabled since it is inside another object. The node on the next level up is then considered the lowest one and the one used for logic.

The important thing to note is the word “render”. This was cumbersome to get working, as I wanted only some objects, the nodes, to be rendered with a specific view. At first I tried to toggle the visibility of the objects so they would only be visible when the correct view was processed. This didn't work, and I think I know why now: I was toggling the visibility of the objects but not the mesh nodes the render system used.

When this proved fruitless, my second thought was to put all these invisible entities in another stage, but the more I looked into it the more discouraged I got, as it would involve a third render pass.

Then it hit me: I just needed to make another shader material that only the vision shader could/would render. With this I finally got something on the screen, but to my dismay it was painfully slow even with instanced rendering, something I had hoped would make this a feasible solution.

So I looked back into the research papers and found the test-point method. Basically, it transforms a world point into NDC space and checks the depth of the point against the depth buffer to determine whether it is visible. This is fast enough to be used in real time, and it's very easy to add my imaginary map planes to let the agent realize when the lowest node can't be seen.
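
For reference, a minimal sketch of the test-point method; the row-major matrix layout, the CPU-side depth copy, and the bias value are all assumptions for illustration rather than the engine's actual API:

```cpp
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// Row-major 4x4 matrix times a point (w = 1).
static Vec4 MulPoint(const float m[16], const Vec3& p)
{
    return { m[0]*p.x  + m[1]*p.y  + m[2]*p.z  + m[3],
             m[4]*p.x  + m[5]*p.y  + m[6]*p.z  + m[7],
             m[8]*p.x  + m[9]*p.y  + m[10]*p.z + m[11],
             m[12]*p.x + m[13]*p.y + m[14]*p.z + m[15] };
}

// Project a world point with the camera's view-projection matrix and
// compare its depth against the depth buffer. Note that depth range
// conventions differ between APIs (D3D uses [0,1], GL uses [-1,1]).
bool IsPointVisible(const float viewProj[16], const Vec3& worldPoint,
                    const float* depthBuffer, int width, int height,
                    float bias = 0.001f)
{
    Vec4 clip = MulPoint(viewProj, worldPoint);
    if (clip.w <= 0.0f) return false;              // behind the camera

    // Perspective divide into NDC space.
    float ndcX = clip.x / clip.w;
    float ndcY = clip.y / clip.w;
    float ndcZ = clip.z / clip.w;
    if (ndcX < -1.0f || ndcX > 1.0f ||
        ndcY < -1.0f || ndcY > 1.0f) return false; // outside the frustum

    // Map NDC to depth-buffer texel coordinates (y flipped).
    int px = (int)((ndcX * 0.5f + 0.5f) * (width  - 1));
    int py = (int)((0.5f - ndcY * 0.5f) * (height - 1));

    // Visible if the point lies at or in front of the stored depth.
    return ndcZ <= depthBuffer[py * width + px] + bias;
}
```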

So, all in all, I spent two days confirming that the culling present in the rendering pipeline wasn't suitable for visibility confirmation against such a dense grid, and learned that I should read more carefully.

I see you…

Happy times! I got the world to render from the agent's point of view.

playerpov

What the player sees; the yellow triangle is the old 2D view cone for the agent.

agentpov

What the agent sees in full 3D!

It was a bit of a headache to get working. I added a camera to the agent and created a new view that uses the picking shader to render to an off-screen buffer. Through the debugging web interface for Nebula applications I could see that the buffer was rendered to correctly, but the whole screen kept flickering to black and back. After some debugging with Gustav, one of the developers of Nebula Trifid, he finally discovered the cause: the back buffer was swapped after each view had been rendered. It's fixed now, so I can continue working without the danger of getting a seizure.

During the off-time when I couldn't work on the agent's vision, I finished the groundwork for the behavior state machine, so it can now load scripts and change between them.

3D vision

Today I started implementing the actual vision system for the agent. As I mentioned earlier, I will render the scene from the agent's point of view and scan over a downsampled buffer to detect what has been seen.

Nebula has an easily extendable render system. There are some parts needed to get a new add-on working, as the rendering occurs on another thread, but it's quite straightforward and there are a lot of finished modules to look at and learn from. I was able to finish the skeleton for the module today, so there's a lot of time left to actually implement its functionality. And one thing I discovered is that there's a shader available for rendering objects color-coded, so I don't need to write my own to do that!

I didn't get going with the vision add-on until the afternoon, though, as I wanted more information about the render system and had to wait for answers from the people working on the engine. So during the morning I started implementing the logic system. The idea for now is to have a finite state machine where each state is a Lua script, so it'll be easy to change and test during development, as scripts can be reloaded without recompiling the application. I didn't fully finish the groundwork, but I got the most basic components set up, so it will be quick to finish when the time comes.
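
A minimal sketch of that script-driven state machine, using the plain Lua C API; the enter/update/exit function names inside the scripts are my own convention, not something Nebula prescribes:

```cpp
#include <lua.hpp>
#include <string>

class ScriptStateMachine
{
public:
    ScriptStateMachine() : L(luaL_newstate()) { luaL_openlibs(L); }
    ~ScriptStateMachine() { lua_close(L); }

    // Run the script for the new state; because the file is re-read
    // every time, an edited script takes effect on the next state
    // change without recompiling the application.
    bool ChangeState(const std::string& scriptPath)
    {
        Call("exit");                       // let the old state clean up
        if (luaL_dofile(L, scriptPath.c_str()) != 0)
        {
            lua_pop(L, 1);                  // pop the error message
            return false;
        }
        this->currentScript = scriptPath;
        Call("enter");
        return true;
    }

    // Tick the current state's update(dt) function, if it exists.
    void Update(float dt)
    {
        lua_getglobal(L, "update");
        if (lua_isfunction(L, -1))
        {
            lua_pushnumber(L, dt);
            if (lua_pcall(L, 1, 0, 0) != 0) lua_pop(L, 1);
        }
        else lua_pop(L, 1);
    }

private:
    void Call(const char* fn)
    {
        lua_getglobal(L, fn);
        if (lua_isfunction(L, -1))
        {
            if (lua_pcall(L, 0, 0, 0) != 0) lua_pop(L, 1);
        }
        else lua_pop(L, 1);
    }

    lua_State* L;
    std::string currentScript;
};
```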

Milestone 2

So…

I haven't done too much today, as it was milestone two, and the whole afternoon was spent listening to all the presentations and giving my own.

I did get some valuable feedback on my project; not really on what I've done, but on what I will do. The goal now is to create the real sight sensory system for the agent, one that functions in full 3D and not just on the XZ-plane as it currently does. The reason for fully 3D vision is so the agent can decide whether it sees the target and also be able to look beneath objects.

The plan is to use synthetic vision, whereby one renders a crude representation of the scene from the agent's point of view with objects color-coded. The problem is that one needs to scan over the rendered scene to tell what has been seen. The tip I got was to downsample the rendered image without blending the colors. This results in a much smaller buffer to scan through, at the loss of precision in objects' spatial locations. But as I'm only interested in what I've seen and not where, and I can easily obtain that information in other ways, I can get away with this. Synthetic vision will also be a great way to show off the agent's inner workings.
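
A tiny sketch of that scan, assuming the color-coded buffer comes back as a flat array of 32-bit object IDs; the point is to pick one pixel per block instead of averaging, since blended IDs would be meaningless:

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Collect the set of object IDs visible in the ID buffer by sampling
// one pixel per factor-by-factor block. We must pick, not blend:
// averaging two IDs produces a third, bogus ID.
std::unordered_set<uint32_t>
ScanVisibleIds(const std::vector<uint32_t>& idBuffer,
               int width, int height, int factor)
{
    std::unordered_set<uint32_t> seen;
    for (int y = 0; y < height; y += factor)
        for (int x = 0; x < width; x += factor)
            seen.insert(idBuffer[y * width + x]);
    return seen;
}
```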

To get the agent to look beneath objects, I plan to render the POM at several heights, so it becomes a 3D grid visually but stays 2D logically. Each node will contain flags for the levels at which it is visible, so a node can be inside an object at one level but still remain visible and active at the other levels.
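
Per node, the level flags could be as small as a bitmask; a sketch assuming at most 32 height levels (the names are mine):

```cpp
#include <cstdint>

struct PomNode
{
    float probability = 0.0f;  // current target probability at this node
    uint32_t levelFree = 0;    // bit i set => node is free at height level i

    // The lowest level not inside geometry is the one used for logic.
    int LowestActiveLevel() const
    {
        for (int i = 0; i < 32; ++i)
            if (levelFree & (1u << i)) return i;
        return -1;             // node is occluded at every level
    }
};
```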

Getting there

I've finally gotten the occupancy map to a state where I could call it finished. This Friday I added a basic 2D view cone for the agent so I can simulate vision on the grid. This is needed because the POM works with expectation theory, and as such there are three possible outcomes when verifying an expectation: verifiably true, verifiably false, and unverifiable. What I've implemented is the impact of negative verification on the grid, i.e., we have verified where the target isn't located, so this now influences where we should look next. This will later be extended to support real 3D sight for the agent.
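
A minimal sketch of the negative-verification step, under the assumption that the grid stores one probability per node: nodes inside the view cone are verified empty and zeroed, and the remaining mass is renormalized so it shifts toward the unseen nodes:

```cpp
#include <cmath>
#include <vector>

struct Node { float x, z, probability; };

// Assumed 2D view-cone description; dirX/dirZ must be normalized.
struct ViewCone { float x, z, dirX, dirZ, halfAngleCos, range; };

static bool InViewCone(const ViewCone& c, float px, float pz)
{
    float dx = px - c.x, dz = pz - c.z;
    float dist = std::sqrt(dx * dx + dz * dz);
    if (dist > c.range || dist == 0.0f) return false;
    // Angle test via the dot product with the cone axis.
    return (dx * c.dirX + dz * c.dirZ) / dist >= c.halfAngleCos;
}

void ApplyNegativeVerification(std::vector<Node>& grid, const ViewCone& cone)
{
    float remaining = 0.0f;
    for (Node& n : grid)
    {
        if (InViewCone(cone, n.x, n.z)) n.probability = 0.0f; // verified empty
        remaining += n.probability;
    }
    if (remaining <= 0.0f) return;        // every node was verified empty
    for (Node& n : grid)
        n.probability /= remaining;       // renormalize to sum to 1
}
```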

iseeyou

Over the weekend I did a lot of refactoring, something I see as crucial for my own sanity, as I will be working with this code for quite some time. I also moved the path-finding from DetourCrowd into a custom game entity property in Nebula. I had some problems actually getting the agent to move: the path-planning worked, but the agent stood fixed in place. The problem turned out to be a variable that defaulted to 0, so the agent didn't gain any velocity.

A living world

Got some progress! By using DetourCrowd I can have several agents moving around in the world. Look at him go!

agent_navmesh

I've also finished the probability spreading, and that was the easy part; most of the time has been spent getting the UI working. For some reason I can't get the value of a control that easily, as the UI lives on another thread, and the application will sometimes crash when I try to send synced messages between them.
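
For the spreading itself, a simple diffusion step along these lines does the job; the leak rate and the flat row-major grid are illustrative assumptions:

```cpp
#include <vector>

// One diffusion tick: each node leaks a fraction of its probability
// to its in-bounds 4-neighbours. Mass is conserved, so the grid still
// sums to 1 afterwards.
void SpreadProbability(std::vector<float>& p, int w, int h, float rate)
{
    std::vector<float> next(p.size(), 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            int i = y * w + x;
            int nb[4]; int count = 0;
            if (x > 0)     nb[count++] = i - 1;
            if (x < w - 1) nb[count++] = i + 1;
            if (y > 0)     nb[count++] = i - w;
            if (y < h - 1) nb[count++] = i + w;

            float leak = p[i] * rate;
            next[i] += p[i] - leak;          // keep the rest in place
            for (int k = 0; k < count; ++k)
                next[nb[k]] += leak / count; // share with neighbours
        }
    p.swap(next);
}
```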

I also tried to generate my own paths on the navmesh, but it started to take too much time with not good enough results, so I scrapped it and left it all to Detour. The reason I want to do my own path-finding is that there might be problems overriding the agent “physics” in Detour with the actual physics simulation. I can always go back and try again when a deadline isn't right around the corner.

And just because I can, behold my Probabilistic Occupancy Map! (Red means higher probability.)

pom

Grid Generator

Tiled navmeshes can now be imported into Nebula. I took some time to move the test code I've been throwing together in the RecastDemo application into its own application. It's still pretty much the demo project, but with better naming and a bit cleaner code. The plan is to use this application as the navmesh and occupancy map generator for the remainder of the project. Besides the navmesh loading, I started working on the occupancy grid. So far one can export it and load it into Nebula. I started on debug rendering of the grid in Nebula but wasn't able to finish it today, so that's the first thing to get working tomorrow. After that, the plan is to implement the probability spreading across the grid.
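
For the export, a small header plus a flat cell array is about the simplest format that works; this layout is an illustration, not necessarily the exact one the generator uses:

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Hypothetical grid file header: magic/dimensions/cell size, followed
// by width*height cells in row-major order.
struct GridHeader { uint32_t magic, width, height; float cellSize; };

bool ExportGrid(const char* path, const GridHeader& hdr,
                const std::vector<uint8_t>& cells)
{
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    out.write(reinterpret_cast<const char*>(&hdr), sizeof(hdr));
    out.write(reinterpret_cast<const char*>(cells.data()),
              static_cast<std::streamsize>(cells.size()));
    return static_cast<bool>(out);
}
```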

Small steps

I'm getting back up to speed with Nebula, since it's been almost a year since I last worked in it. There's some old code for Detour and Recast support, but most of it is commented out and nonfunctional. Work is currently being put into improving the support for navmesh generation and bringing it up to date; sadly for me, it won't be finished anytime soon, so I can't depend on it and will have to roll my own solution. Luckily for me, the need for Recast and Detour isn't a critical part of my project, as it's more focused on the agent's logic, so I can get away with a quick and dirty solution… for now at least.

So far I can import big static navmeshes exported from the Recast demo project. I'll try to get tiled navmeshes to work so I can add better support for dynamic objects. If it takes too much time I'll have to put it aside so I can focus on the occupancy map, and come back to it later down the line.
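
For reference, loading a navmesh saved by RecastDemo's “save all” path looks roughly like this with Detour's API; the header structs and magic constants mirror the demo's NavMeshSetHeader/NavMeshTileHeader, so they should be double-checked against the demo version actually in use:

```cpp
#include <cstdio>
#include <DetourNavMesh.h>

// These mirror RecastDemo's save format; verify against your version.
static const int NAVMESHSET_MAGIC   = 'M' << 24 | 'S' << 16 | 'E' << 8 | 'T';
static const int NAVMESHSET_VERSION = 1;

struct NavMeshSetHeader  { int magic; int version; int numTiles; dtNavMeshParams params; };
struct NavMeshTileHeader { dtTileRef tileRef; int dataSize; };

dtNavMesh* LoadTiledNavMesh(const char* path)
{
    FILE* fp = fopen(path, "rb");
    if (!fp) return nullptr;

    NavMeshSetHeader header;
    if (fread(&header, sizeof(header), 1, fp) != 1 ||
        header.magic != NAVMESHSET_MAGIC || header.version != NAVMESHSET_VERSION)
    {
        fclose(fp);
        return nullptr;
    }

    dtNavMesh* mesh = dtAllocNavMesh();
    if (!mesh || dtStatusFailed(mesh->init(&header.params)))
    {
        dtFreeNavMesh(mesh);
        fclose(fp);
        return nullptr;
    }

    for (int i = 0; i < header.numTiles; ++i)
    {
        NavMeshTileHeader tileHeader;
        if (fread(&tileHeader, sizeof(tileHeader), 1, fp) != 1) break;
        if (!tileHeader.tileRef || !tileHeader.dataSize) break;

        // Detour takes ownership of the tile data via DT_TILE_FREE_DATA.
        unsigned char* data = (unsigned char*)dtAlloc(tileHeader.dataSize, DT_ALLOC_PERM);
        if (!data) break;
        if (fread(data, tileHeader.dataSize, 1, fp) != 1) { dtFree(data); break; }
        mesh->addTile(data, tileHeader.dataSize, DT_TILE_FREE_DATA, tileHeader.tileRef, 0);
    }

    fclose(fp);
    return mesh;
}
```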