Last day

Today was the last day I could work on the project. I’ve added a table to the test level. Initially I had problems with concave collision meshes, as they would fall through the floor. A fellow student then asked me why I didn’t just build a table out of boxes in the level. So I did, and now it looks a lot better, showing off the difference between solid boxes and a table. The only bad thing is that I can’t move the table around, as it will fall apart, but one can’t have everything.

Other than that, I’ve finished the report, the post-mortem and the presentation.

One day to go!

Tomorrow will effectively be the last day I can work on the project. Friday will be occupied by everyone’s presentations.

Not much actual progress was made. As I mentioned earlier, I’m trying to keep myself from changing too much so I won’t break something. What I did fix was that the agent would get stuck in the “bend down” animation and glide around the floor when it started hunting the player again.

Otherwise, most of the time has been spent on finishing the report, writing the post-mortem, and writing and setting up for the presentation on Friday.

Bugfixes and performance issues

This week is the last week I can work on the project; the following week is reserved for graduation show preparations. So the plan for this week is to only work on bugfixes and behavior tweaks, and to keep myself from adding cool new features or a whole new behavior goal.

Last Thursday I finished the peripheral and direct vision. So all data collected from those view cones is finally handled.

On Friday I started to do some more serious testing. I found a quite serious performance hit after I added several boxes to the level: when the agent or the player got near them, the FPS would drop to one! The problem was found in the navigation blocker property. When an actor got close, the boxes’ physics bodies would exit sleep mode. A box would then start to send messages with its new transform, and when the navigation blocker got such a message it would update the mesh, even if the box hadn’t moved at all! It was a bit cumbersome to figure out what went wrong, but luckily it was very easy to fix.
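
In code, the fix boils down to an early-out on unchanged transforms. A minimal sketch, assuming a plain matrix compare; the types and names here are stand-ins, not Nebula’s actual API:

```cpp
#include <cmath>

struct Transform { float m[16]; };

static bool NearlyEqual(const Transform& a, const Transform& b, float eps = 1e-5f)
{
    for (int i = 0; i < 16; ++i)
        if (std::fabs(a.m[i] - b.m[i]) > eps)
            return false;
    return true;
}

struct NavigationBlocker
{
    Transform lastTransform {};

    void OnTransformMessage(const Transform& t)
    {
        // The physics body woke up, but did the box actually move?
        if (NearlyEqual(t, lastTransform))
            return; // no: skip the expensive navmesh update entirely
        lastTransform = t;
        RebuildBlockedTiles();
    }

    void RebuildBlockedTiles() { /* update the navmesh obstacle, omitted */ }
};
```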

Today I think I fixed a bug where the agent would rotate back and forth when it discovered the player. I don’t really know the cause of the failure, but I saw that the rotation was updated very early in the code that handles the path-following of the agent. I changed it so that the rotation is only updated when the actual movement is. It seems to have fixed the problem so far.

Peripheral vision

I’ve checked the project plan to see what features I had left to implement and found out that I wrote the plan in a lot less detail than I remembered; the fact that you as a player should be able to hide beneath something is never mentioned. So according to the project plan, all required features of the project have been implemented!

So with time to spare, I’ve started to improve the agent even more. The most obvious flaw right now is that the agent has very narrow vision, so if the target is close to the agent and slightly to the side, it is never spotted. To combat this I plan to use the same system as in Thief: The Dark Project, where each agent has several view cones to emulate focus, peripheral vision and so on.

One quick idea I had was to move the near clip plane of the agent’s camera. By increasing its distance, the near clip plane grows in size, resulting in a wider area of vision close to the agent. Perfect!

A view frustum. Each side is a clipping plane.
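
For reference, the near plane’s half-extents grow linearly with its distance from the camera; this is standard perspective math, not code from the project:

```cpp
#include <cmath>

// Half-extents of the near clip plane for a perspective camera. Pushing
// nearDist outward widens the visible area close to the agent linearly.
void NearPlaneHalfExtents(float fovY, float aspect, float nearDist,
                          float& halfWidth, float& halfHeight)
{
    halfHeight = nearDist * std::tan(0.5f * fovY); // fovY in radians
    halfWidth  = halfHeight * aspect;
}
```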

I got it working, but I hadn’t quite thought it through. With a wider plane, the plane starts to clip through walls. This means the agent can see behind walls whenever it turns around and the near plane ends up spanning both sides of a wall.

So in the end I’ve done it exactly like Thief, with several view cones: one for peripheral vision and one for direct vision. Direct vision is used to cover the area close to the agent that falls outside of the view frustum. An entity is added to the highest-priority view cone it exists in, so if an entity is inside both cones it is only added to one. Entities inside the direct vision cone are added to the list of visible objects. What I’m going to do with entities inside the peripheral vision cone isn’t fully decided; for now the idea is to increase the probability at those locations, causing the agent to gain a higher interest in those areas.
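
A minimal sketch of the “highest-priority cone wins” rule; all types and names here are my own, not the project’s actual code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct ViewCone
{
    Vec3  apex;    // the agent's eye position
    Vec3  dir;     // normalized view direction
    float cosHalf; // cosine of the cone's half angle
    float range;   // maximum distance

    bool Contains(const Vec3& p) const
    {
        Vec3 to = { p.x - apex.x, p.y - apex.y, p.z - apex.z };
        float dist = std::sqrt(Dot(to, to));
        if (dist < 1e-6f || dist > range)
            return false;
        Vec3 n = { to.x / dist, to.y / dist, to.z / dist };
        return Dot(n, dir) >= cosHalf;
    }
};

// Cones are ordered by priority (direct vision first, then peripheral).
// An entity belongs to the first cone that contains it, so it is never
// handled by more than one cone.
int ClassifyEntity(const std::vector<ViewCone>& conesByPriority, const Vec3& p)
{
    for (std::size_t i = 0; i < conesByPriority.size(); ++i)
        if (conesByPriority[i].Contains(p))
            return static_cast<int>(i);
    return -1; // outside all cones
}
```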

Oh, and I finally added the ability to reset the level, so now I don’t need to restart the application every time I need to test something several times.

Slow progress

Last week was a slow one, but some progress has been made.

I fixed a bug in the POM generator export code. Sometimes a node would be written twice, which caused some quite strange behavior from the agent, as the grid was connected in a bizarre way.

I’ve added the ability for objects to block areas on the navigation mesh. This caused a rewrite of the navigation loading code and forced me to dive back into Detour, as this functionality depends on Detour’s tile cache. The tiles of the navigation mesh are kept compressed in memory, so when an obstacle is added it’s easy to fetch a single tile, decompress it and rebuild it with the obstacle included.
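
The tile cache part is real Detour API (dtTileCache), though the wrapper functions below are hypothetical sketches of how a blocker might use it:

```cpp
#include "DetourNavMesh.h"
#include "DetourTileCache.h"

// Register a cylinder-shaped obstacle at an object's position. Only the
// tiles it touches are marked for rebuilding.
dtObstacleRef AddBlocker(dtTileCache* cache, const float pos[3],
                         float radius, float height)
{
    dtObstacleRef ref = 0;
    cache->addObstacle(pos, radius, height, &ref);
    return ref;
}

// Called once per frame: processes pending obstacle requests, decompresses
// the affected tiles and rebuilds them against the navmesh.
void UpdateNavigation(dtTileCache* cache, dtNavMesh* navmesh, float dt)
{
    cache->update(dt, navmesh);
}
```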

The different planes of the POM are now functional, so objects that block the navigation mesh now also block nodes on the POM at the appropriate levels. This fixes the issue where the agent tries to look at a node that is inside another object.

Other than that I’ve been working on the behavior. The roaming behavior is coming along a lot better. I’ve restricted the search area to be close to the agent, which results in a more directed and thorough search. Before, the agent would run back and forth between distant locations.

Today I improved the search behavior in the same way. By using the last known velocity of the target, I can define an area the target could plausibly be in and keep the search inside it. This prevents the agent from walking off to some other place that has accumulated a large probability over time.
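
One plausible reading of that bound, purely my interpretation and not the project’s actual formula: a node is only worth searching if the target could have reached it since it was last seen.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// The target can't have moved farther from its last known position than
// its last known speed allows (plus some slack for uncertainty).
bool CouldTargetBeAt(const Vec3& node, const Vec3& lastSeenPos,
                     float lastSeenSpeed, float secondsSinceSeen,
                     float slack = 1.0f)
{
    const float dx = node.x - lastSeenPos.x;
    const float dy = node.y - lastSeenPos.y;
    const float dz = node.z - lastSeenPos.z;
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return dist <= lastSeenSpeed * secondsSinceSeen + slack;
}
```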

I’ve also added the ability for the player to move objects, and a simple reaction from the agent when an object is discovered in a new location. With this I also fixed some previously undiscovered problems with the navmesh blocking.

Looking beneath objects

The end of the week is here. The goal this week was to finish the logic system, and I would say that’s the case, as these past few days I’ve only been implementing the behavior of the agent. The system works and can easily be extended with more behaviors and goals for the agent at a later stage.

So far the agent can:
- Roam around to find the player; this uses the POM to steer where the agent should look.
- Follow the player; as long as the player is visible to the agent, it will follow.
- Search for the player; this uses the last memory of the player and the POM to steer its search.
- Look beneath objects; if the player goes somewhere the agent can’t follow, i.e. hides beneath something, the agent goes as close as possible and bends down to look at the player.

The last behavior isn’t fully functional. For now, the agent goes to the closest position and bends down. But what if the player hides in a tunnel, where only two sides are open and the closest position is next to a tunnel wall? The agent should realize this and try to look through the tunnel at one of its open ends. The plan for now is this:

- Straight from my position, is it possible to see the target if I bend down?
- Yes? Then go straight ahead and bend down.
- No? Where did the target enter? Go to that position and then bend down.

This will make it seem like the agent is aware of the fact that there are specific openings to look through.
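
Sketched as code, the plan might look something like this; the two queries are passed in as callbacks since the real checks (a crouch-height raycast, the remembered entry point) live elsewhere, and every name here is my own:

```cpp
#include <functional>

struct Vec3 { float x, y, z; };

// Decide where to bend down. The callbacks stand in for a crouch-height
// visibility raycast and the remembered entry point of the target.
Vec3 ChooseBendDownSpot(const Vec3& agentPos,
                        const Vec3& targetEntryPoint,
                        const std::function<bool(const Vec3&)>& canSeeTargetCrouchedFrom)
{
    // Straight from my position: is the target visible if I bend down?
    if (canSeeTargetCrouchedFrom(agentPos))
        return agentPos; // yes: go straight ahead and bend down here
    // No: go to where the target entered and bend down there instead.
    return targetEntryPoint;
}
```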

Behaviors

These past two days I’ve been hard at work on the logic for the agent.

As I mentioned earlier, there was a need to redo the pathing of the agent, and this was the first thing I finished. After that I threw together a quick goal-driven logic system to implement the easiest action, namely hunting the player. It might sound like a big thing, but when you think about it, it’s all about following a target while I can see it. If I can’t see it, some other behavior can take over.

It was quite easy to get functional, and by implementing this basic system I got a better idea of how to design the whole creature system. It consists of three major parts: sensory systems, a working memory and logic. The sensory systems feed the working memory with MemoryFacts about the world. For this project only the vision system is implemented, so raw data about what has been seen is used to create a VisualMemoryFact, which saves away the entity seen, the position it was observed at and the velocity it was traveling at. It’s easy to extend with new types of memory facts, for say sounds or smells. When a memory fact is added to the working memory, it is compared to the facts already in memory to detect whether the new fact is just an updated version of an existing memory. I’ve also added what I call StaticFacts; they’re used for things the agent should always have knowledge about, such as the POM and the entity it’s hunting.
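
A rough sketch of the shapes this describes; the names MemoryFact and VisualMemoryFact come from the post, but the layout and the update-detection logic below are my guesses:

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

struct MemoryFact
{
    int entityId = -1; // which entity this fact is about
    virtual ~MemoryFact() = default;
};

// Produced by the vision system from raw "seen" data.
struct VisualMemoryFact : MemoryFact
{
    Vec3 lastSeenPosition {};
    Vec3 lastSeenVelocity {};
};

class WorkingMemory
{
public:
    // If a visual fact about the same entity already exists, the new fact
    // is just an updated version of that memory, not a new one.
    void Add(std::unique_ptr<VisualMemoryFact> fact)
    {
        for (auto& existing : visualFacts)
        {
            if (existing->entityId == fact->entityId)
            {
                *existing = *fact;
                return;
            }
        }
        visualFacts.push_back(std::move(fact));
    }

private:
    std::vector<std::unique_ptr<VisualMemoryFact>> visualFacts;
};
```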

The logic system then acts upon these memory facts. In a perfect world, no queries for new data, say an entity’s current position, should be made from the logic system. In reality, however, I have to do so on some occasions, as the visual data is gathered on the render thread, so there’s a bit of delay between memory updates.

To get it all into Nebula I use a property that acts as the “brain” and glues it all together. It contains the working memory for the agent and takes care of translating raw sensor data into specific memory facts.

Logic and refactoring

This week the goal is to finish the logic system for the agent. I had done some basic groundwork where the agent would use an FSM made up of Lua scripts. However, due to some limitations in how Nebula handles scripts (such as there being no return value from a script function), I had to redo it in a different way.

As everything has to be done in code now, I also decided to change the whole pattern, as the choice of an FSM was due to using scripts, and the whole system was easier to design that way. So now I’m using goal-driven behavior, or Goal Oriented Action Planning (something I researched during the initial phase of the project). The gist of it is to use abstract goals. These goals can consist of smaller subgoals, which in turn can have subgoals of their own. This chain keeps going until a goal small enough to be performed as a concrete action is encountered, like GoToNextPathPoint or PlayAnimation. I got the basics up today, but with it I realized I now have to redo my path planning, because at the moment it’s pretty much impossible for the logic to tell the agent to follow a new path, since the path planning is so well contained inside its own entity property.
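
The pattern is a common one; here is a compact sketch of how such a goal/subgoal chain can be structured (the classes below are mine, not the project’s code):

```cpp
#include <memory>
#include <vector>

enum class GoalStatus { Active, Completed, Failed };

struct Goal
{
    virtual ~Goal() = default;
    virtual GoalStatus Process() = 0; // tick the goal once per update
};

// A composite goal runs its subgoals front to back. Since a subgoal may
// itself be a composite, this is what builds the chain from an abstract
// goal down to concrete actions like GoToNextPathPoint or PlayAnimation.
struct CompositeGoal : Goal
{
    std::vector<std::unique_ptr<Goal>> subgoals;

    GoalStatus Process() override
    {
        while (!subgoals.empty())
        {
            const GoalStatus s = subgoals.front()->Process();
            if (s == GoalStatus::Active) return GoalStatus::Active;
            if (s == GoalStatus::Failed) return GoalStatus::Failed;
            subgoals.erase(subgoals.begin()); // completed: move to the next
        }
        return GoalStatus::Completed;
    }
};
```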

Normalization issues

I fixed the problem with the probability growing beyond 1. As I sat down with a clear head this morning, I divided up all parts of the calculation so I could inspect each intermediate result. The error lay in how the probability was divided among each node’s neighbors.

In the original formula, the diffusion constant is divided by a constant number and then multiplied with the sum of the probability gained from a node’s neighbors. That constant number is the number of connections per node. The nodes in my map have a varying number of neighbors, so I simply replaced the constant with the number of connections the current node has, and that is where the problem occurs. The formula describes how to update the probability of a given node: the sum reflects the amount of probability gained from neighbors, while the constant describes how much probability each neighbor gives away per connection, so it should be the giving node’s connection count, not the receiving node’s. To fix it, instead of summing my neighbors’ probabilities and then multiplying, I let each neighbor do the whole calculation and hand me the exact amount that that specific neighbor gives to this node.
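
Here is a minimal sketch of the resulting probability-conserving update; PomNode and the constant k are my own names, not the project’s code:

```cpp
#include <cstddef>
#include <vector>

struct PomNode
{
    float probability = 0.0f;
    std::vector<std::size_t> neighbors; // indices into the node array
};

// One diffusion update. Each node keeps (1 - k) of its probability and
// gives away the remaining k, split evenly among its OWN neighbors; since
// every node gives away exactly what the others receive, the total stays
// constant even when nodes have different numbers of connections.
void Diffuse(std::vector<PomNode>& nodes, float k)
{
    std::vector<float> next(nodes.size(), 0.0f);
    for (std::size_t i = 0; i < nodes.size(); ++i)
    {
        const PomNode& n = nodes[i];
        if (n.neighbors.empty())
        {
            next[i] += n.probability; // nowhere to diffuse to
            continue;
        }
        next[i] += (1.0f - k) * n.probability;
        const float share = k * n.probability / n.neighbors.size();
        for (std::size_t j : n.neighbors)
            next[j] += share; // the giver's connection count decides the share
    }
    for (std::size_t i = 0; i < nodes.size(); ++i)
        nodes[i].probability = next[i];
}
```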

I then went on and tried to implement velocity-based spreading. It causes the probability to spread more in a given direction, such as the last direction the target was seen moving in. I got it working, but here too I had problems with the probability growing over time. In the end I just got tired of it, so now, after the new probability has been calculated, the map is renormalized before culling of the visible nodes is performed.
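
The pragmatic renormalization looks something like this, reusing the hypothetical PomNode type from the previous sketch:

```cpp
// Force the map back to summing to 1 after an update (sketch only).
void Renormalize(std::vector<PomNode>& nodes)
{
    float total = 0.0f;
    for (const PomNode& n : nodes)
        total += n.probability;
    if (total > 0.0f)
        for (PomNode& n : nodes)
            n.probability /= total;
}
```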

I don’t know if this will come back and haunt me later but things seem to be working as expected for now.

Back to the beginning

Today I fixed the thing I mentioned in the last post. I elevate the nodes to the agent’s height so I get all nodes inside the view frustum; it’s basically a cheap projection of the frustum onto the grid. The catch is that these elevated points can’t be used for visual confirmation, as they’ll check against the wrong parts of the depth buffer, or, in the case of the nodes close to the agent, fall outside the span of the buffer. So what I did in the end is gather all nodes inside the “projected” frustum, handle those that can be translated into the depth buffer, and for the others cast a ray from the agent’s eye to the specific node and check for intersections. This isn’t all that demanding, as only a handful of nodes need to be checked with rays.
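
A sketch of that mixed confirmation pass; the two tests are passed in as callbacks because the real depth-buffer lookup and ray cast live in the engine, and all names here are assumptions:

```cpp
#include <functional>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// For each node inside the "projected" frustum: try the depth buffer first;
// if the elevated point can't be mapped into the buffer's span, fall back
// to a ray cast from the agent's eye to the node.
std::vector<std::pair<int, bool>> ConfirmNodes(
    const std::vector<std::pair<int, Vec3>>& nodesInFrustum,
    const std::function<bool(const Vec3&, bool&)>& tryDepthBufferTest, // false if outside the buffer
    const std::function<bool(const Vec3&)>& rayFromEyeIsClear)
{
    std::vector<std::pair<int, bool>> result;
    result.reserve(nodesInFrustum.size());
    for (const auto& [id, pos] : nodesInFrustum)
    {
        bool visible = false;
        if (!tryDepthBufferTest(pos, visible))
            visible = rayFromEyeIsClear(pos); // only a handful of nodes end up here
        result.emplace_back(id, visible);
    }
    return result;
}
```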

I then discovered an interesting bug: the POM loses its accuracy quite fast. I noticed that something wasn’t right when I tried to implement weighted spreading in the last known velocity of the target. The total amount of probability should always be 1, but for some reason I start to gain probability after a few updates, and this error is quite significant, as a lot of calculations assume the probability stays between 0 and 1. I’ve done some debugging but haven’t been able to determine the problem. It might be floating point precision, but that feels a bit unlikely, as the error appears after the second or third update. The other possibility is that I’ve implemented the spreading wrong, but I can’t seem to figure out what the problem in that case might actually be.