
Posts tagged ‘Particle System’

[Week 7] Ray-Triangle intersection, friction, and I'm getting close to done.

This seems to be it. Even if most of the mechanics are currently locked down in a conveniently named test kernel, they are there and they seem to work. Now I can start making the code more readable, isolate parts into their own kernels, and of course optimize the code afterwards. Then I can set up example scenes for the upcoming milestone. For demonstrating them at the grad show, I may or may not add functionality to alter the system on the fly: several force fields and several particle systems at a time, each with kernels that can be activated or deactivated.

So, today I want to go through the particle-triangle collisions. For the collision detection, I'm using Möller & Trumbore's fast ray-triangle intersection test, modified to return (float)INFINITY when there is no intersection (or when the triangle is behind the ray). To detect a collision, I build a ray with its origin at the particle and its velocity as its reach; if I get a proper hit, the distance to the collision point is returned. When a collision happens, I split the current velocity vector into two components: one aligned with the surface normal and one aligned with the surface tangent. This lets me apply particle restitution and plane friction using:

V = (1 - F) * vT - R * vN
where V = resulting velocity,
F = friction coefficient,
vT = tangential velocity component,
R = restitution coefficient,
vN = normal velocity component
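A rough OpenCL C sketch of how these two pieces fit together. The function names and structure here are just for illustration, not taken from the actual kernel:

    // Möller & Trumbore style ray/triangle test. Returns the distance along
    // the ray to the hit point, or INFINITY if there is no hit (or the
    // triangle is behind the ray origin).
    float intersect_triangle(float3 orig, float3 dir,
                             float3 v0, float3 v1, float3 v2)
    {
        const float EPSILON = 1e-6f;
        float3 e1 = v1 - v0;
        float3 e2 = v2 - v0;

        float3 pvec = cross(dir, e2);
        float det = dot(e1, pvec);
        if (fabs(det) < EPSILON) return INFINITY;   // ray parallel to triangle

        float inv_det = 1.0f / det;
        float3 tvec = orig - v0;
        float u = dot(tvec, pvec) * inv_det;
        if (u < 0.0f || u > 1.0f) return INFINITY;  // outside the triangle

        float3 qvec = cross(tvec, e1);
        float v = dot(dir, qvec) * inv_det;
        if (v < 0.0f || u + v > 1.0f) return INFINITY;

        float t = dot(e2, qvec) * inv_det;
        return (t > EPSILON) ? t : INFINITY;        // behind the origin counts as no hit
    }

    // Collision response: split the velocity into normal and tangential parts,
    // then apply restitution R and friction F as in the formula above.
    float3 collision_response(float3 vel, float3 normal, float R, float F)
    {
        float3 vN = dot(vel, normal) * normal;  // component along the surface normal
        float3 vT = vel - vN;                   // component along the surface tangent
        return (1.0f - F) * vT - R * vN;
    }

Since the particle's velocity is used as the ray's reach, a returned distance shorter than the velocity's length means the particle would cross the triangle this step.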

[Week 6] Well… that was embarrassing

So... I spent most of the week trying to implement a sorting algorithm. I was never that good at this to begin with, nothing seemed to work quite as I imagined, and I trawled through several papers trying to write my own sorting algorithm.

None of this had any success, so before the weekend I decided to try some existing sorting projects instead. None of them seemed to work properly either.

 

In short: NOTHING. WHATSOEVER. Worked. Apart from my particle kernel, but that one was really simple; not much to screw up there. But... in my desperation, today I decided to install new drivers. And... it suddenly works. Thanks a bunch, Nvidia. I had broken drivers all along.

 

So now I have working depth sorting. I'll post a simple run-through of the "algorithm", and a video, at a later time. Even if the particle kernel and test emitter still look the same, this means I can start doing other things. This evening, however, I will be cleaning up my code.

[Week 4] Behind on schedule, but puffing on.

Since I spent most of the previous week bedridden, I have fallen a week behind and will have to rethink my planning. Yesterday went pretty decently however, and now my billboards are in place. Cleaning up the code has been pushed back; the focus will instead be on functionality. I'll make it less messy when I feel I have the time.

This is the shader program for my first billboarded particles:

  1. [VS] Pass unprojected points directly down to the Geometry Shader
  2. [GS] GL_POINTS as input, output GL_TRIANGLE_STRIP
  3. [GS] Get the non-projected point. (modelView, no projection)
  4. [GS] Add/Subtract extents from the points for each corner of the billboard, then multiply with the projection matrix.
    projection * vec4(pos.x - extent.x, pos.y - extent.y, pos.z, pos.w), // Lower Left
    projection * vec4(pos.x + extent.x, pos.y - extent.y, pos.z, pos.w), // Lower Right
    (etc.)
  5. [GS] Emit the vertices, accompanied by an out parameter for the UV coords,
  6. [FS] Texture the billboard from a sampled texture!

This is mostly done; I just need to put a texture on the billboard now. One interesting detail for someone new to geometry shaders (like me!): remember to generate the corners from a point that has only been transformed by the modelview matrix, otherwise the billboard will be skewed depending on the screen width and height!
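For anyone curious, here is a minimal GLSL sketch of what such a geometry shader can look like. The uniform names (projection, extent) and the UV handling are placeholders of mine, not necessarily what the real shader uses:

    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform mat4 projection;   // projection only; the input point is already in eye space
    uniform vec2 extent;       // half-size of the billboard

    out vec2 uv;

    void main()
    {
        // Modelview-transformed point from the vertex shader, not yet projected.
        vec4 pos = gl_in[0].gl_Position;

        gl_Position = projection * vec4(pos.x - extent.x, pos.y - extent.y, pos.z, pos.w);
        uv = vec2(0.0, 0.0);   // lower left
        EmitVertex();

        gl_Position = projection * vec4(pos.x + extent.x, pos.y - extent.y, pos.z, pos.w);
        uv = vec2(1.0, 0.0);   // lower right
        EmitVertex();

        gl_Position = projection * vec4(pos.x - extent.x, pos.y + extent.y, pos.z, pos.w);
        uv = vec2(0.0, 1.0);   // upper left
        EmitVertex();

        gl_Position = projection * vec4(pos.x + extent.x, pos.y + extent.y, pos.z, pos.w);
        uv = vec2(1.0, 1.0);   // upper right
        EmitVertex();

        EndPrimitive();
    }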

Hopefully things will work out alright and I won't have to overthink it. I am slightly curious about the difference in cost between these geometry shader billboards and normal billboarded particles. Maybe I'll measure it when I have time.

[Week 3] Billboards? Point Clouds?

Now that I am finally not sick, it's time to try and get things done! The first thing I'm tackling this week is a simple representation of the particles: I want to draw scaled textures at the points. I have two choices for this. I could make a shader that binds and draws simple point sprites; these would be very cheap, but as far as I understand they have two issues.

  1. When a GL_POINT is culled off the screen, the texture will never be drawn (even if it would cover more screen space). This could lead to some strange artifacts. I have yet to test the severity of this, so I can't speak for how it would look. Chances are it could look alright, but I can find no direct example of it online; I'll just trust what I read on this.
  2. GL_POINTS are not scaled with depth, so if I've understood this right, these point sprites will all be of a fixed size regardless of how far away the point in the point cloud is. Again, this is something I haven't tried, but if it is like this, it would be an effect I wouldn't want. I could do my own distance attenuation in the shader (see the sketch after this list), but there might be a simpler solution altogether.
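For reference, manual distance attenuation for point sprites could look roughly like the vertex shader sketch below (with GL_PROGRAM_POINT_SIZE enabled). The uniform names are made up and I haven't actually tried this:

    #version 330 core
    layout(location = 0) in vec4 position;

    uniform mat4 modelView;
    uniform mat4 projection;
    uniform float basePointSize;   // made-up tuning value

    void main()
    {
        vec4 eyePos = modelView * position;              // position in eye space
        gl_Position = projection * eyePos;
        // Shrink the sprite with distance from the camera.
        gl_PointSize = basePointSize / max(-eyePos.z, 0.001);
    }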

I first thought that point sprites would be a nice little solution, and maybe they are. But the points above made me consider just writing a simple geometry shader (at first) which takes a point and makes a camera-facing quad out of it. That way I wouldn't have to worry about either culling or distance attenuation, as both would be handled by OpenGL itself. This shader could later be changed to create actual, rotated particle geometry, something that might be interesting to look into later, though nothing I will focus on during the run of this project.

[Week 2] Slightly ahead, slightly not! Preparing milestone!

http://youtu.be/QONNn1f8TxU

Here's a video of my work so far. I've made a very simple particle engine, capable of taking a "gravity" force, an initial impulse, and the position of a particle in time. Everything so far is calculated on the GPU, apart from the original values. I intend to add functionality over the coming weeks, as well as start working on some more proper visualisations. There is only one thing I'm behind on: spawning the particles at different times. My solution will be to spawn the unused particles under the world until they have passed their first "particle death", at which point they'll be moved back into the particle simulation. This is all that is stopping me from having a particle visualisation that isn't... very... pattern-like.
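For the curious, a simplified sketch of what a stateless kernel like this can look like. The buffer names and the lifetime handling are made up for the example; this is not the actual kernel:

    __kernel void integrate_particles(__global float4* positions,       // shared with GL as a VBO
                                      __global const float4* startPos,  // spawn positions
                                      __global const float4* startVel,  // initial impulses
                                      float4 gravity,
                                      float time,
                                      float lifetime)
    {
        size_t i = get_global_id(0);

        // Wrap time so the particle restarts after each 'particle death'.
        float t = fmod(time, lifetime);

        // Closed-form state: p(t) = p0 + v0*t + 0.5*g*t^2, nothing is read back to the CPU.
        positions[i].xyz = startPos[i].xyz
                         + startVel[i].xyz * t
                         + 0.5f * gravity.xyz * t * t;
    }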

It's running 11.2 million particles on the GPU at once, in real time. The initial transfer requires three float4 arrays on the GPU (as VBOs). How much is that to transfer? 16 * 11,200,000 * 3 / (1024^2) ≈ 512 MB of data. At around 170 MB, the position array alone would be nasty to send back and forth between the CPU and the GPU all the time, especially considering we want to update the particle field in real time. Thirty or more updates per second is desirable, and at that rate, shuffling the data between the CPU and the GPU would just be a major bottleneck. Which is why altering the VBO directly on the GPU is so handy.

 

I never thought I would say it, but it seems I'm one step ahead of my planning. I was very lenient with my time scheduling, and I had been rather paranoid about the troubles of drawing on the GPU, but drawing on the GPU turned out to be surprisingly simple. I lost a lot of time being sick and bothered, so I got less done than I had imagined, but actually handing a VBO to OpenCL was really easy, especially with the Khronos C++ OpenCL bindings. That was work meant for next week.
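Roughly, the GL interop part boils down to something like this with the C++ bindings (a sketch with made-up names, no error handling, and assuming the OpenCL context was created with GL sharing enabled):

    #include <CL/cl.hpp>
    #include <vector>

    // Run 'kernel' over a GL vertex buffer object without copying it to the host.
    void run_kernel_on_vbo(cl::Context& context, cl::CommandQueue& queue,
                           cl::Kernel& kernel, cl_GLuint vbo, size_t particleCount)
    {
        // Wrap the existing GL buffer as an OpenCL memory object.
        cl::BufferGL positions(context, CL_MEM_READ_WRITE, vbo);

        std::vector<cl::Memory> glObjects(1, positions);

        // OpenGL must be done with the buffer before OpenCL touches it
        // (glFinish or a sync object on the GL side).
        queue.enqueueAcquireGLObjects(&glObjects);

        kernel.setArg(0, positions);
        queue.enqueueNDRangeKernel(kernel, cl::NullRange,
                                   cl::NDRange(particleCount), cl::NullRange);

        queue.enqueueReleaseGLObjects(&glObjects);
        queue.finish();
    }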

[MS1 Concluded] Project Goals

But enough about private issues.

GPU Accelerated Particle System (from here on referred to as GAPS) is a 2013 specialization project at Luleå University of Technology, carried out by me, Klas Linde. My goal with GAPS is mainly to learn GPGPU programming practices and to get used to a largely parallel environment; in this case OpenCL, maybe CUDA.

Particle systems are ideal for such optimizations, and have seen massive performance increases from parallelization in similar projects, so I will be making a parallel particle system. My goal is to offload all particle logic, including depth sorting and drawing, to the graphics processing unit, which should, with some work, allow me to integrate a very large number of particles at once. If time allows, I will make the particle system state-preserving, which would allow me to apply forces even after system initiation. This will provide plenty of challenges as it is.

I expect to run into at least a few issues with parallel computing along the way, as parallel programming is quite different from regular programming for the CPU. Work will have to be divided into small chunks with as little per-item workload as possible, since GPUs are well suited for many small workloads rather than one large problem.