
A fun feature

I came up with a fun idea for something you can use your agents for.
Graphic artists will no longer be needed, since our AI can do their entire job in much less time.
Just tell the agents that you want a picture of something and, voilà! There it is…

Or maybe it just shows us how much you really need those CG guys after all…

// Johan Forsell

A week of OpenCL and problems

This week was supposed to be about implementing the intelligence, but that did not happen. Instead the week became about implementing two more algorithms on the GPU. These two rather simple steering behaviors, Flee and Cohesion, which at first seemed like an afternoon's work, turned out to keep me occupied the whole week. The implementation of the flee algorithm actually went pretty smoothly, since I just had to add one more cl::Buffer, but cohesion needed three more cl::Buffers.

That's when all hell broke loose: after adding the three cl::Buffers and writing the OpenCL code I started having "random" (or so it seemed at the time) crashes. After spending a whole day testing separate bits of the CL code I still couldn't figure out what was wrong, mostly because the only debug information I got was "Your program has stopped working". After another day of frustrating debugging I found that one of the cl::Buffers was being created from an array that had been allocated but never initialized.
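The fix boils down to making sure the host-side array is filled before it backs a buffer. A minimal sketch of the idea, with illustrative names (the project's actual buffers and data are not shown here): value-initializing a std::vector guarantees defined contents, unlike a raw `new float[n]`.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: the crash came from handing OpenCL an allocated but
// uninitialized host array. A value-initialized vector has every element
// defined, and we still fill it with real data before creating a cl::Buffer
// from it. `makeCohesionWeights` is an invented name for illustration.
std::vector<float> makeCohesionWeights(std::size_t n) {
    std::vector<float> weights(n, 0.0f);  // defined contents, unlike `new float[n]`
    for (std::size_t i = 0; i < n; ++i)
        weights[i] = 1.0f;  // fill with real data BEFORE the buffer copy
    return weights;
}
```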

After this was fixed I had some problems with the AI not behaving as it should; this, on the other hand, was easy to fix, as the cause was a few minor mathematical errors.

So the week has been a real pain, but now all the steering behaviors that we will need for this simulation are implemented and running correctly on the GPU. This means that next week will actually be about giving the AI intelligence, which is the fun part.

//Johan Österberg

AI Progress

Things are coming along pretty well. I've added the ability to load images as discomfort fields, allowing for quick map setups, with a threshold value for solid objects where agents will never be able to walk. Basic group behaviors are implemented; however, we still lack the high-level control we'd like to have. That will be the goal for next week.
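The image-to-discomfort-field idea can be sketched in a few lines. This is an assumed implementation, not the project's actual code: bright pixels cost more to walk through, and anything at or above the threshold is treated as a solid wall.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch (assumed): turn a grayscale image into a per-cell discomfort field.
// Cells whose pixel value reaches the threshold are solid and agents never
// enter them; everything else gets a walking cost scaled to 0..1.
struct DiscomfortField {
    std::vector<float> cost;   // per-cell discomfort, 0..1
    std::vector<bool>  solid;  // impassable cells
};

DiscomfortField fieldFromImage(const std::vector<std::uint8_t>& gray,
                               std::uint8_t solidThreshold) {
    DiscomfortField f;
    f.cost.reserve(gray.size());
    f.solid.reserve(gray.size());
    for (std::uint8_t p : gray) {
        f.cost.push_back(p / 255.0f);
        f.solid.push_back(p >= solidThreshold);
    }
    return f;
}
```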

Yesterday I managed to enable a forced non-overlapping feature. Earlier, a large group of agents pressing towards the same goal would after some time start to overlap, and the crowd compression rate would be too high. For this I'm using a simple constraint method: any agent that overlaps another agent is moved to the nearest position that does not result in an overlap. In theory they can still stand on each other when part of a large group, and it happens all the time, but it is hardly noticeable, and in this project it is the visual aspect that matters most. The method adds very little overhead to the already implemented algorithm, since the distance to agents within the separation radius is already known; overlapping agents are moved apart by -(agentRadius-agentDistance)*normalizedAgentDiffVector.
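For two agents in 2D, the constraint can be sketched like this. Note the assumptions: `agentRadius` is taken here as the distance at which two agents touch (the project may use the sum of both radii), and the overlapping agent is pushed along the normalized difference vector until the overlap is exactly resolved.

```cpp
#include <cassert>
#include <cmath>

// Minimal sketch of the overlap constraint described above.
struct Vec2 { float x, y; };

Vec2 resolveOverlap(Vec2 self, Vec2 other, float agentRadius) {
    float dx = self.x - other.x, dy = self.y - other.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist >= agentRadius || dist == 0.0f)
        return self;  // no overlap, or exactly stacked (needs a tie-break rule)
    // (agentRadius - dist) is the penetration depth; dividing by dist
    // normalizes the difference vector in the same step.
    float push = (agentRadius - dist) / dist;
    self.x += dx * push;
    self.y += dy * push;
    return self;
}
```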

New York Streets

What's next is having the agents stop the staggering effect when in a dense crowd, where in reality most people would stand still when there's no room for further advance. Right now the crowd acts more like an enraged mob where everyone lacks any empathy and thinks only about themselves while trying to reach a set goal. Other than that, a smoother turning rate for our tetrahedron-man agents is also on the list for today's implementations.

Not to forget, we witnessed a rather cool effect resembling a fluid simulation when the agents' max speed was set to high values, resulting in vicious vortices. Our efforts not only seem to result in a greater knowledge base but also in abstract live art. Beautiful… :)

Fluid simulation?

// Johan Forsell

Animation – skinning

I'm very happy to say that I've finally got the skinning working! This results in an animated Seymour (as previously used), where the entire skinning process takes place on the GPU. Despite it working, though, there seems to be a problem with performance. Since I wanted to get it working first, I haven't really focused on making it fast, so there's a lot of work to be done there. One example: the shader currently makes 37 texture fetches per VERTEX, which is a lot! Many of these values are constants, so I might as well send them as attributes instead. Also, the matrices are stored as pixels, with each matrix taking up 3 pixels. This results in 3 * 8 (maximum of 4 influences per vertex) texture fetches for each vertex. I got a tip that one could use quaternions instead, resulting in fewer texture fetches, which is awesome! I also have to work on a distribution system so I don't need a separately animated skeleton per agent. I bet you want some kind of proof of what I've accomplished; well, you're in luck! There's a video!
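The quaternion tip can be quantified with a quick back-of-the-envelope sketch. The assumption here is that a 3×4 matrix occupies 3 RGBA pixels while a dual quaternion (rotation plus translation, 8 floats) fits in 2, so the per-influence fetch count drops by a third; the exact influence count per vertex varies.

```cpp
#include <cassert>

// Fetches needed to read one joint transform per influence, assuming
// 4 floats per RGBA texture fetch. Illustrative arithmetic only.
constexpr int matrixFetches(int influences)        { return 3 * influences; }  // 3x4 matrix = 3 pixels
constexpr int dualQuaternionFetches(int influences){ return 2 * influences; }  // 8 floats = 2 pixels
```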


Bullet, more entity handling and UI struggles

This last week I have successfully integrated the Bullet physics simulation system with our application. The entity manager can create a rigid body and insert it into the dynamicsWorld, and the simulation takes over from there. The bodies aren't rendered yet, but that is coming along as well. The entity manager has also been extended to allow for the selection of multiple agents at the same time. This allows attribute changes to be applied to several agents at once, which is useful for testing and observation.

I've been scouring the web for a way to make the window decorations show on my embedded widgets, but alas, nothing helps. It seems like it might be a bug in Windows 7, but there aren't enough people with the same problem for me to corroborate that. Also, all the examples I've tried have shown the decorations, which only heightens my confusion. I will continue trying to pass window flags with appropriate names, but I don't think that's the problem.

Anyway, the project’s making headway and we’re all having fun. It’s impressive how much juice you can squeeze out of the circuits if you just try. :)

// Joel

Pushing the AI to the GPU

The conversion from fixed-point to floating-point math is now done, and Johan Forsell has optimized the AI code.

After fixing some bugs we tried simulating 8000 agents, now using floating-point. The simulation ran at 60 fps like before. So we did a rather optimistic test with 50,000 agents, expecting zero to one fps, but to our surprise we got 30 fps. We continued for a while, just ramping up the number of agents, and when we reached a million agents without the program crashing we were very satisfied. The simulation with one million agents ran at only 2 fps, but we were still a bit amazed that we didn't run out of memory.

Next week we are going to start adding the intelligence, I am in charge of single agent behaviors and Forsell is in charge of group behaviors.

//Johan Österberg

Animations – stress test and skinning

So I've added support for having more than one skeleton animated at the same time, and then stress tested it. Right now I can have 500 individually animated skeletons on the CPU without a major performance drop. I'll have to investigate whether I should move the calculations to the GPU using CL, or stick with having the work done on the CPU. The pro is that I might be able to simulate way more skeletons, giving us much more variation. The con is that I might hog the performance needed for the AI. If I stick with animating on the CPU, I will need to construct some sort of distribution system for the skeletons. By this I mean that if we have 50 000 agents but only 500 skeletons, I'll have to make several agents share the same skeleton. This introduces a new problem: what if one agent changes animation? This is something I'll have to figure out once the skinning is done.
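The simplest form of such a distribution system is a fixed mapping from agent to skeleton. This is a hypothetical sketch, not the project's design: agents sharing a skeleton necessarily play the same pose, which is exactly why one agent changing animation is the open problem mentioned above.

```cpp
#include <cassert>

// Hypothetical sketch: with far more agents than individually animated
// skeletons, each agent reuses skeleton (agentId % skeletonCount).
int skeletonForAgent(int agentId, int skeletonCount) {
    return agentId % skeletonCount;
}
```

A real system would likely map agents by (clip, time offset) instead of raw id, so that an agent switching clips can be reassigned to a skeleton already playing that clip.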

However, I've started on the skinning, which means I'm a little behind on my time plan. I've started out by sending the skin data to the GPU using a texture. I will also need to send the updated joints to the GPU, but I intend to do that later. Anyhow, after fiddling around a bit, I managed to create and render a GL texture containing the skeleton joint rotations and positions, all the skin weights, all the joint influences, and all the vertex influence counts (how many joints each vertex is affected by). Turns out texture coordinates and texture filters help 😀

If you've ever wondered what matrices look like in color, this image will show you how much more fun they are in color than as, well, matrices. Each value is stored in RGBA format, which means a single pixel contains either an entire matrix row plus one position parameter, four skin joint indices, four skin weights, or four joint influences. I'm going to send the matrices as 3×4 matrices, assuming there is no skewing going on. Also, the image is filtered using GL_NEAREST; we wouldn't want GL_LINEAR ruining our data.
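The pixel layout described above can be sketched as a simple packing routine (names are illustrative, not the project's): a row-major 3×4 joint matrix is stored row by row, each row of three rotation values plus one position parameter filling one RGBA pixel.

```cpp
#include <array>
#include <cassert>

// Sketch: pack a 3x4 matrix into 3 RGBA pixels, one row per pixel.
using Pixel  = std::array<float, 4>;   // R, G, B, A
using Mat3x4 = std::array<float, 12>;  // row-major 3x4 matrix

std::array<Pixel, 3> packMatrix(const Mat3x4& m) {
    std::array<Pixel, 3> pixels{};
    for (int row = 0; row < 3; ++row)
        for (int c = 0; c < 4; ++c)
            pixels[row][c] = m[row * 4 + c];  // row's 4th value = position parameter
    return pixels;
}
```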

//Gustav Sterbrant

The Navigation

For the whole of last week I worked on getting the navigation to work properly. The bugs the system hid were hard to find due to the data-oriented approach, but I think I managed to find and exterminate them all. Today I've taken the navigation code that worked last Friday, stripped it down as much as I could and assigned it to run on its own thread. There may still be a few thread-safety measures to consider, but I think I've got it all under control for now. As for performance, the simulation does run slower after adding the navigation. This was expected, but adding groups, together with the fact that the chance of every agent wanting a path generated at the same time is very small, plus a few optimizations, should get the fps count back on track.
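Running navigation on its own thread usually comes down to protecting the shared work queue. A minimal sketch under that assumption (this is not the project's actual code): the simulation pushes path requests under a mutex, and the navigation thread drains the queue in batches.

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <vector>

// Sketch of one thread-safety measure: a single lock guards the queue of
// agents waiting for a path, shared between simulation and navigation threads.
struct NavQueue {
    std::mutex m;
    std::queue<int> requests;  // agent ids wanting a path

    void push(int agentId) {
        std::lock_guard<std::mutex> lock(m);
        requests.push(agentId);
    }
    std::vector<int> drain() {  // navigation thread takes the whole batch
        std::lock_guard<std::mutex> lock(m);
        std::vector<int> out;
        while (!requests.empty()) {
            out.push_back(requests.front());
            requests.pop();
        }
        return out;
    }
};
```

Draining in one batch keeps the lock held briefly, which matters when most agents rarely request a path at the same time, as noted above.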

Tomorrow Johan Österberg will hopefully be done moving the low-level behaviors to the GPU for computation, and I can start adding support for group behaviors. Also, since we switched to floating-point arithmetic, I'll have to adapt a few things to that as well.

// Johan Forsell

AI with OpenCL

Last week I started looking into OpenCL so that I would be able to increase the number of active agents.

The first three days I did nothing but read specifications and tutorials, all found on the Khronos Group website. I then started adapting some of the tutorials to work with our project, which was much more complicated than I had first thought. When it finally started to work I got no performance boost at all; at that point we were using fixed-point math for the AI. Because there was no performance boost I decided, together with Johan Forsell, to switch to floating-point math instead. The conversion from fixed-point to floating-point took the rest of the week and also today (Monday 7/2).
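For readers unfamiliar with the fixed-point representation being converted away from, here is a quick sketch. The project's actual fixed-point format is not stated; a 16.16 layout is assumed here: an integer scaled by 2^16, so conversion to and from float is a single multiply or divide.

```cpp
#include <cassert>

// Assumed 16.16 fixed-point format: high 16 bits integer part,
// low 16 bits fractional part.
constexpr int   kFracBits = 16;
constexpr float kScale    = 1 << kFracBits;  // 65536.0f

float fixedToFloat(int fx)  { return fx / kScale; }
int   floatToFixed(float f) { return static_cast<int>(f * kScale); }
```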

Tomorrow I will begin reworking the OpenCL code to use floating-point, which will hopefully result in a performance boost.

//Johan Österberg

Animation progress

So I've finally managed to get the animations working! The result is an animated skeleton of Seymour playing all of its 13 saved animation clips. This might seem trivial, and it should, but using COLLADA isn't really that simple. When starting out, I found a thread online describing that all the rotation tags in the COLLADA file represent rotations that should be applied to each joint. I applied these, constructed the skeleton using the rotations, and it worked fine! So I thought I had been doing it right all along. Then, when I tried to animate it, nothing worked like it should. This is because Maya saves the joints as two sets of data: rotation and jointOrientation. This confused me, because I thought a <rotation> tag in the COLLADA file described the operation that should be applied to the joint in question. However, one needs to store the jointOrientations separately, because the axis around which an animation should be performed is given not by the complete rotation of the joint specified in the file, but by the jointOrientation matrix. So I had to save that matrix away and then multiply it with the rotation axis to get the correct rotation axis. Phew, it took some analysis to draw that conclusion. The result is the following:

Using our system: Seymour in Chaos

From Maya: Seymour in Maya
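The jointOrientation fix above boils down to transforming the animation's rotation axis by the orientation matrix before building the joint rotation. A minimal sketch with plain row-major 3×3 math (names assumed, not the project's):

```cpp
#include <array>
#include <cassert>

// Sketch: the animation channel's rotation axis is given in the joint's
// local frame, so it must be multiplied by the jointOrientation matrix
// to get the axis to actually rotate around.
using Vec3 = std::array<float, 3>;
using Mat3 = std::array<float, 9>;  // row-major

Vec3 transformAxis(const Mat3& jointOrientation, const Vec3& axis) {
    Vec3 out{};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            out[r] += jointOrientation[r * 3 + c] * axis[c];
    return out;
}
```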

Also, I've managed to store the data in a fairly clever way. Instead of having each animation clip contain its own frames, I sort them by time instead. So for every AI I've queued a couple of animations, and they are in turn sorted by which time step you are in. I use a two-dimensional array where the first dimension corresponds to the time intervals, that is, for Bézier curves, an increment of one between each frame. So interval zero will contain the first segment of all the animation frames of all the animations. This continues until there are no more animation frames to be played, and then it repeats. I have yet to add a dequeue so one can disable certain animations, but that will be added! Also, I need to get it skinned, which shouldn't be too far away. And oh, right, I also need to put everything I can on the GPU if I want thousands of these guys animated in real time.
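The time-major storage described above can be sketched as a tiny lookup structure (a stand-in, not the project's classes): frames are grouped per time interval rather than per clip, so `frames[t]` holds segment t of every queued animation, and playback walks the outer index and wraps around.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of time-major animation storage.
struct FrameStore {
    // frames[timeInterval][clipIndex] -> frame id (stand-in for frame data)
    std::vector<std::vector<int>> frames;

    int frameFor(int time, int clip) const {
        int t = time % static_cast<int>(frames.size());  // repeat when exhausted
        return frames[t][clip];
    }
};
```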

//Gustav Sterbrant