
Goal chaining and optimizing

This week I’ve implemented functions for pre-generating a flow field. It is very slow, but it works. I also added some extra features to the map loading, so that you can mark out specific chokepoints the agents should know about. That eases the pressure on the A* calculations.

Simon and I also managed to remove unnecessary data transfers between GRAM and RAM by having OpenGL and OpenCL share the memory where the agent orientations are updated and stored. Gustav helped us set up the kernel so that the memory could be shared. Now we no longer need to update the orientations with OpenCL, write the data to RAM, and then save it to a texture that is written back to GRAM again. That will save us some valuable time; however, a bug appeared, and the agents face the wrong direction in most cases.

You can also assign several goals to agents, which they will follow in the given order. However, goal chaining cannot yet be done through real-time interaction. Adding that functionality should not be much of a hassle, since the structure and logic are already finished; all that is needed is a user interface.

A heartbeat function has been written that can be used to update certain states at set time intervals instead of every frame, as an optimization.

Also, when a group needs to calculate a path, only the agent with the most leadership within the group will do it and share the result with the rest of the group members.

We’re closing in on the deadline and only a few things remain. Hopefully those can be addressed and implemented or fixed tomorrow.

// Johan Forsell

Short update

This update will be a bit shorter than usual; a bigger one will be posted on Sunday.

The behaviors are starting to look pretty good. There are still some adjustments to be done, but they are mostly small numerical changes. During the week I have mostly been working on making the agents act less like mindless idiots and more like actual human beings.

More info to come!

//Johan Österberg

Behavior week

This week has been a bit of a hassle. At the beginning of the week I implemented the behavior structure I decided on last week, but when I was done it wasn’t as good as I thought it would be, so I changed the design. When the new design was done and implemented, it too wasn’t good enough. So I sat down again to really think it through and come up with a new design that would be what I wanted.

But that went very slowly, so I decided to try to fix a memory leak connected to the OpenCL code instead. With a little help from Johan Forsell, the leak was fixed yesterday around lunch. After that I started on the behavior design again, and with a slight morale boost from the leak fix, the design work went smoothly.

The implementation, on the other hand, turned out to be a bit harder than I had anticipated, but now it is in place and I feel really good about this design, so hopefully my colleagues will feel the same.

//Johan Österberg

Behavior RnD

This week I have mostly read about different modern techniques for implementing AI behaviors. For the moment I only have my behavior structure on paper; I like to have a rough design of my code on a piece of paper (in this case, many pieces of paper) before I start to implement it. This often leads to fewer troublesome surprises and strange bugs. The implementation of the code will start sometime in the middle of next week.

I have also pushed some more code to the GPU; this time I chose to move the position updating functions from the CPU to the GPU. For the moment I only get a slight performance boost out of this, but I believe it will be greater in the future, once the AI code is full of small quirks.

//Johan Österberg

They behave…

This week most of my efforts have gone into making the agents act less aggressively on their way to a set goal.
They won’t push each other around as much, and they will stop when close enough to the goal point.

In addition to that, I linked the agent speed directly to the crowd density: the more people in close proximity, the slower they move.
The actual speed scaling might still need some polishing, but that’s nothing more than tweaking a math formula for the most realistic feedback.

And lastly, agents can now be assigned goal areas in addition to goal points. For now it’s only basic shapes, but it shouldn’t take too long to make it work for any
concatenated area of shapes. The goal areas need some additional programming so that the agents change their behavior while inside their goal area.
That’s first up for next week’s work.

// Johan Forsell

A fun feature

I came up with a fun idea for something you can use your agents for.
Graphical artists will no longer be needed, since our AI can do their entire job in much less time.
Just tell the agents that you want a picture of something and, voilà! There it is…

Or maybe it just shows us how much you really need those CG guys after all…

// Johan Forsell

A week of OpenCL and problems

This week was supposed to be about implementing the intelligence, but that did not happen. Instead, the week became about implementing two more algorithms on the GPU. These two rather simple steering behaviors, Flee and Cohesion, which at first seemed like an afternoon’s work, turned out to keep me occupied the whole week. The implementation of the flee algorithm actually went pretty smoothly; I just had to add one more cl::Buffer. Cohesion, however, needed three more cl::Buffers.

That’s when all hell broke loose. After adding the three cl::Buffers and writing the OpenCL code, I started having crashes that seemed random at the time. After spending a whole day testing separate bits of the CL code I still couldn’t understand what was wrong, mostly because I didn’t get any debug information beyond “Your program has stopped working”. After another day of frustrating debugging, I found that one of the cl::Buffers was created with an array that was allocated but never initialized.

After this was fixed I had some problems with the AI not behaving as it should; this, on the other hand, was easy to fix, as the problem was a few minor mathematical errors.

So the week has been a real pain, but now all the steering behaviors we will need for this simulation are implemented and running correctly on the GPU. This means that next week will actually be about giving the AI intelligence, which is the fun part.

//Johan Österberg

AI Progress

Things are getting along pretty well. I’ve added the ability to load images as discomfort fields, allowing for quick map setups, with a threshold value for solid objects where agents will never be able to walk. Basic group behaviors are implemented; however, the system still lacks the high-level control we’d like to have. That will be the goal for next week.

Yesterday I managed to enable a forced non-overlapping feature. Earlier, a large group of agents pressing toward the same goal would after some time start to overlap, and the crowd compression rate would be too high. For this I’m using a simple constraint method: an agent that overlaps another agent is simply moved to the nearest position that does not result in an overlap. In theory agents can still stand on each other when part of a large group, and it happens all the time, but it is hardly noticeable, and in this project the visual aspect matters most. The method adds very little weight to the already implemented algorithm, since the distance to agents within the separation radius is already known; those who overlap are moved apart by -(agentRadius-agentDistance)*normalizedAgentDiffVector.

New York Streets

What’s next is removing the staggering effect when agents are in a dense crowd, where in reality most people would stand still when there’s no room for further advance. Right now the crowd acts more like an enraged mob where everyone lacks any feeling of empathy and thinks only about themselves trying to reach a set goal. Other than that, a smoother turning rate for our tetrahedron-man agents is also on the list for today’s implementations.

Not to forget, we witnessed a rather cool effect resembling a fluid simulation when the agents’ max speed was set to high values, resulting in vicious vortices. Our efforts not only seem to result in a greater knowledge base but also in abstract live art. Beautiful… :)

Fluid simulation?

// Johan Forsell

Pushing the AI to the GPU

The conversion from fixed-point to floating-point math is now done, and Johan Forsell has optimized the AI code.

After fixing some bugs we tried simulating 8,000 agents, now using floating point. The simulation ran at 60 fps like before. So we did a rather optimistic test with 50,000 agents, expecting zero to one fps, but to our surprise we got 30 fps. We continued amping up the number of agents for a while, and when we reached a million agents without the program crashing we were very satisfied. The simulation with one million agents only ran at 2 fps, but we were still a bit amazed that the memory didn’t run out.

Next week we are going to start adding the intelligence; I am in charge of single-agent behaviors and Forsell is in charge of group behaviors.

//Johan Österberg

The Navigation

For the whole of last week I worked on getting the navigation to work properly. The bugs the system hid were hard to find due to the data-oriented approach, but I think I managed to find them all and exterminate them. Today I’ve taken the navigation code that worked last Friday, stripped it down as much as I can, and assigned it to run on its own thread. There may still be a few thread-safety measures to consider, but I think I’ve got it all under control for now. As for performance, the simulation does run slower after adding the navigation. This was expected, but adding groups, counting on the fact that the chance of every agent wanting a path generated at the same time is very small, plus a few optimizations, might get the fps count back on track.

Tomorrow Johan Österberg will hopefully be done moving the low-level behaviors to the GPU for computation, and I can start adding support for group behaviors. Also, since we changed to working with floating-point arithmetic, I’ll have to adapt a few things to that as well.

// Johan Forsell