
Final week

So the time has come to wrap up this project before the presentations. Last week was the last week of dedicated time for the project. Since the last blog post, progress has been good. Loading the world is now stable: the rays follow the edges of the world to load in the surface around the hit area. This improvement made the loading acceptable, even though you might still see holes here and there for a short moment.

A lot has been done on the landscape generation. The system was rewritten several times due to problems that arose along the way. With the addition of biomes, the generation became awfully slow on both the CPU and the GPU. Profiling made it clear that the problem was cache thrashing: Cachegrind showed a cache miss rate of 20%, compared to 0.4% before. In other words, a more data-oriented approach was needed, and this was one of the reasons for rewriting the generation algorithms to make better use of the data. The problem is now solved on the GPU side, but it persists for now on the CPU side.
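To illustrate the kind of data-oriented change this is about, here is a minimal sketch of switching from an array-of-structs to a struct-of-arrays layout, so that a generation pass only touches the data it actually needs. The struct layout and field names are made up for the example, not taken from our code:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Array-of-structs: a pass that only needs the density still drags every
// other field through the cache, which is what causes the thrashing.
struct VoxelAoS {
    float   density;
    uint8_t biome;
    uint8_t material;
    uint8_t light;
};

// Struct-of-arrays: each pass walks one tightly packed array, so consecutive
// iterations touch consecutive cache lines.
struct ChunkSoA {
    std::vector<float>   density;
    std::vector<uint8_t> biome;
    std::vector<uint8_t> material;
};

void generateDensity(ChunkSoA& chunk, float (*noise)(int, int, int), int n) {
    chunk.density.resize(static_cast<std::size_t>(n) * n * n);
    std::size_t i = 0;
    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x)
                chunk.density[i++] = noise(x, y, z);
}
```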

We also came up with an experimental idea on how to solve lighting. After a quick and dirty implementation, the result was surprisingly good and some tweaking made it even better.

Major rewrites and cleanups have also been done to clear out bugs and tidy up the code. Among others, the bug that made textures flicker far from the origin is now fixed. Along with that, the whole project was updated to modern OpenGL practices and the shaders were updated to GLSL version 4.0.

Not much time is left until the presentation and the grad show, so no major features will be implemented. The remaining time will be spent preparing the presentations, tweaking things and, if time allows, implementing some minor features.

The lighting hack:

A small change in the goals of the project

Following the reprioritisation discussed in the previous blog post, we've realised that we also want to shift the focus of our goals. Since we want to concentrate on making the project look good and run stably, we decided to scrap the feature of a modifiable landscape. This feature is not really related to the other things we've been working on anyway, and would only take time without adding much to the end result. It would still be an interesting feature to add if a project like this were to actually be used for something.

Debugging and Refining

With only a couple of weeks left of the project, we have shifted our priorities a bit. Instead of focusing on ideas for further extending the view distance, we are putting all our effort into making what we have now as solid and stable as possible. This mostly means getting the chunk loading stable and fast, instead of having distant chunks pop up slowly over time. The issue is especially visible when only small fractions of non-loaded chunks are visible, so that the rays have an extra hard time hitting them.

The work this week has been about polishing the ray casting and freeing it from bugs that were introduced along with negative chunk coordinates, which altered its behaviour. Much time was also put into finishing the threading: the application now handles everything related to loading/unloading chunk data (including shooting rays) in a separate thread. This removed the lag spikes when geometry is loaded, albeit making chunks load slower overall. The frame rate now easily stays at 60 FPS unless really massive amounts of voxels are visible at the same time.
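As a rough sketch of the setup (the class and function names are assumptions for illustration, not our actual code), the loader boils down to one worker thread consuming a queue of chunk requests fed from the render thread:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

struct ChunkCoord { int x, y, z; };

class ChunkLoader {
public:
    ChunkLoader() : running(true), worker(&ChunkLoader::run, this) {}
    ~ChunkLoader() {
        { std::lock_guard<std::mutex> lock(mtx); running = false; }
        cv.notify_one();
        worker.join();
    }

    // Called from the render thread, e.g. when a ray hits an unloaded chunk.
    void request(ChunkCoord c) {
        { std::lock_guard<std::mutex> lock(mtx); pending.push(c); }
        cv.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lock(mtx);
        while (running) {
            cv.wait(lock, [this] { return !pending.empty() || !running; });
            while (!pending.empty()) {
                ChunkCoord c = pending.front();
                pending.pop();
                lock.unlock();
                loadChunk(c);      // noise generation, surface extraction, etc.
                lock.lock();
            }
        }
    }

    void loadChunk(ChunkCoord) { /* generate data, then hand it to the GL thread */ }

    std::queue<ChunkCoord> pending;
    std::mutex mtx;
    std::condition_variable cv;
    bool running;
    std::thread worker;
};
```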

The noise generation was improved as well by porting the noise algorithm to assembler. This was done as a side experiment by a friend to examine the difference in speed, and the gain turned out to be huge: from 18 ms to generate one chunk down to 3 ms. We have decided to keep that as the CPU-side noise generation.

Some time was also spent thinking about how to make the loading more consistent. Unfortunately, the only idea that seemed promising (loading everything at once while keeping LOD representations of the more distant chunks) quickly proved impossible once we calculated how much generation would have to happen in real time. We have a couple of other ideas on how to improve the smoothness of the data loading, which we will try next week.

Ray debugging screenshot:

World generation improvements

So far, which parts of the world to generate has been decided by randomly shooting rays in front of the camera to see which voxels are visible. The method works, but not well: it is too slow, and the random component makes the loading look strange. We have tried to find a better solution this week but have yet to come up with one. The main problem is that rays shot on the CPU are too slow to find the surface of the visible world fast enough.
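For reference, the ray probing boils down to something like the simplified sketch below: fixed-size steps through chunk space until an unloaded chunk is found. The chunk size, step length and callback signatures are assumptions for the example rather than the exact implementation:

```cpp
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// Steps a ray from the camera through chunk space and requests the first
// chunk that is not loaded yet. Returns true if such a chunk was found.
bool probeRay(Vec3 origin, Vec3 dir,
              const std::function<bool(int, int, int)>& isChunkLoaded,
              const std::function<void(int, int, int)>& requestLoad,
              int chunkSize = 32, float maxDistance = 2000.f) {
    const float step = chunkSize * 0.5f;   // coarse fixed step, not an exact DDA
    for (float t = 0.f; t < maxDistance; t += step) {
        int cx = (int)std::floor((origin.x + dir.x * t) / chunkSize);
        int cy = (int)std::floor((origin.y + dir.y * t) / chunkSize);
        int cz = (int)std::floor((origin.z + dir.z * t) / chunkSize);
        if (!isChunkLoaded(cx, cy, cz)) {
            requestLoad(cx, cy, cz);       // hand the coordinate to the loader
            return true;
        }
    }
    return false;
}
```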

We have also started implementing threading for the world generation, to reduce the lag spikes in frame rate we get when many chunks need to be loaded at the same time. The plan was to use the new std::thread, but getting C++11 to work on Windows was a bigger time sink than planned. It is now working, however, and the coding can resume.

Next week will probably be focused on the same thing as we still have a lot to do in this part of the project.

Milestone 2

The second milestone has been reached. This means that you can find a small demo of our work so far by following the link “MS2” in the menu to the right :)

GPU power

Not much of this week's work has affected the graphical part of the project, and therefore we don't really have any new screenshots or videos to show. However, that does not mean that nothing has been done!

Much time has been spent on trying to optimize the noise. We want to use 3D noise to get as interesting landscapes as possible, but generating lots of voxels in real time on the CPU this way was not efficient enough, as we had guessed. So the plan to move the generation to the GPU was set in motion, and we now have a simplex noise implementation in OpenCL. The gain in performance was huge: the CPU noise we had before, optimized with SSE, took roughly 15 milliseconds for 32768 voxels, while the same generation on the GPU took only 0.2 milliseconds.
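To put the numbers in perspective, 32768 voxels is one 32×32×32 chunk. A minimal CPU-side version of the sampling loop looks roughly like the sketch below (the noise function here is only a placeholder); on the GPU the same per-voxel evaluation is done by one OpenCL work-item per voxel instead of a triple loop, which is where the speedup comes from:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Placeholder standing in for the real simplex noise, just to keep the
// sketch self-contained.
static float noise3D(float x, float y, float z) {
    return std::sin(x * 1.7f) * std::cos(y * 2.3f) * std::sin(z * 1.1f);
}

// Fills the density values for one 32*32*32 = 32768 voxel chunk on the CPU.
std::vector<float> generateChunk(int cx, int cy, int cz,
                                 int n = 32, float freq = 0.05f) {
    std::vector<float> density(static_cast<std::size_t>(n) * n * n);
    std::size_t i = 0;
    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x)
                density[i++] = noise3D((cx * n + x) * freq,
                                       (cy * n + y) * freq,
                                       (cz * n + z) * freq);
    return density;
}
```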

Textures and lighting

Work has kept going at a steady rate this week. The start of the week was focused on getting the SSE-optimised implementation of simplex noise to work on both Windows and Linux. This was a bit troublesome, since it was not written with Windows in mind, and after porting it we ran into problems related to memory alignment and the like.
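The alignment problems are of the kind sketched below: SSE loads such as _mm_load_ps require 16-byte-aligned pointers, which a plain allocation does not guarantee. This is a generic illustration of the issue, not the exact fix we ended up with:

```cpp
#include <xmmintrin.h>   // SSE intrinsics: __m128, _mm_load_ps, _mm_malloc

int main() {
    // _mm_load_ps needs a 16-byte-aligned pointer; plain new/malloc does not
    // guarantee that, which is a typical source of crashes when porting SSE
    // code between compilers and platforms.
    float* values = static_cast<float*>(_mm_malloc(4 * sizeof(float), 16));
    values[0] = 1.f; values[1] = 2.f; values[2] = 3.f; values[3] = 4.f;

    __m128 v = _mm_load_ps(values);      // safe: pointer is 16-byte aligned
    __m128 doubled = _mm_add_ps(v, v);

    alignas(16) float out[4];            // C++11 way to align stack storage
    _mm_store_ps(out, doubled);

    _mm_free(values);
    return 0;
}
```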

Texturing was straightforward to implement and didn't cause any problems; it was finished relatively early in the week. The rest of the week was spent on mainly two things. The first was modifying the implementation of the chunk grid that keeps all loaded chunks: it had to be able to follow the camera, aligning itself to it along the grid, to make it possible to have an infinite world where new parts are loaded into the scene from the sides as the grid moves. The second feature was coloured local lighting. A block may be set to emit light with various levels of red, green and blue, which are combined to create most colours. The light does not yet spread into neighbouring chunks.
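As a rough sketch of the coloured lighting idea (the representation below is simplified and made up for the example, not our exact data layout), each block stores one light level per channel, and the levels spread and mix independently:

```cpp
#include <algorithm>
#include <cstdint>

// One light level per colour channel (0..15). A block that "emits light"
// simply has non-zero levels assigned to it.
struct BlockLight {
    std::uint8_t r, g, b;
};

// One propagation step: a block receives a neighbour's level minus one unit
// of falloff, independently per channel, so red, green and blue spread and
// mix into combined colours.
inline BlockLight propagate(const BlockLight& self, const BlockLight& neighbour) {
    auto spread = [](std::uint8_t own, std::uint8_t next) {
        return static_cast<std::uint8_t>(std::max<int>(own, next > 0 ? next - 1 : 0));
    };
    return { spread(self.r, neighbour.r),
             spread(self.g, neighbour.g),
             spread(self.b, neighbour.b) };
}

// At shading time the three levels become an RGB tint for the block.
inline void lightToColour(const BlockLight& l, float out[3]) {
    out[0] = l.r / 15.0f;
    out[1] = l.g / 15.0f;
    out[2] = l.b / 15.0f;
}
```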

 

Progress pictures and videos:

Texturing:

Lighting:

And a bug with the transparency that looked a bit cool:

The landscape is taking shape

A lot of progress has been made this week. An almost finished data structure is in place, and data can be generated with simplex noise to get interesting landscape shapes. The rendering has come a long way too: a lot of cubes can be rendered with the help of instanced arrays in OpenGL, and functions for extracting only the surface of the noise as well as view frustum culling are implemented.
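The surface extraction is essentially a matter of keeping only the solid voxels that have at least one empty neighbour and feeding those to the per-instance buffer. A simplified sketch, with details assumed for the example:

```cpp
#include <cstddef>
#include <vector>

// Instance data for one visible cube; in practice this is what gets uploaded
// to the instanced-array buffer.
struct InstanceData { int x, y, z; };

std::vector<InstanceData> extractSurface(const std::vector<bool>& solid, int n) {
    auto at = [&](int x, int y, int z) -> bool {
        if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n)
            return false;                       // outside the chunk counts as empty
        return solid[(static_cast<std::size_t>(z) * n + y) * n + x];
    };

    std::vector<InstanceData> surface;
    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x) {
                if (!at(x, y, z)) continue;     // empty voxel, nothing to draw
                bool exposed = !at(x - 1, y, z) || !at(x + 1, y, z) ||
                               !at(x, y - 1, z) || !at(x, y + 1, z) ||
                               !at(x, y, z - 1) || !at(x, y, z + 1);
                if (exposed) surface.push_back({x, y, z});
            }
    return surface;
}
```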

This is what it looks like so far with some different noise settings and also some pictures from the early stages of the surface extraction:

There is still a lot of work left, but it feels like we now have a good base to build on for the rest of the project. Much of the work in the coming weeks will go into optimizing, especially finding a way to solve the problem of having a long view distance.

 

Planning and project setup

These last few days have been used to lay down a firm plan for the project. The program layout and the data structure for the first tests have been decided, tasks have been created in Redmine, and for now we will focus on implementing it and seeing how it works.

We also created a test project that can invoke our landscape generator and now have the first graphical results!

Next week we will be focusing on the code and hopefully complete a big part of the base functionality.

Initial thoughts and research

The goals we have set for this project are not trivial to reach. Many attempts at voxel-based landscape rendering have been made before, and most of them fall short of the goals we aim for. The reason is the high memory requirement of longer view distances, combined with the fact that straightforward rendering methods quickly become insufficient as the view radius grows, since the number of voxels increases cubically. For instance, the most obvious approach of simply keeping every voxel in memory would, for a view volume spanning only 1250 voxels along each axis, end up at a memory usage of 1250³ = 1 953 125 000 bytes (roughly 1.8 GB) with one byte per voxel.

Because of these issues, a good data structure is needed. We have been looking around and reading about other similar projects, using their ideas and solutions as inspiration and as a source of information when designing our own system. Although we didn't find any project with an idea that would solve our problems and make our vision possible, we did learn about various standard approaches and what results they yield, which was very useful for our design decisions.

There are two main methods people tend to use for storing data in projects like this: plain arrays or octrees. With plain arrays, the voxel data is represented using one or several arrays that spatially describe the voxels. With octrees, space is subdivided into a tree structure where only the leaf nodes hold voxel data. Using plain arrays as-is is easy and works quite well, but it doesn't allow long view distances, since the amount of data needed scales drastically. With an octree approach you can easily implement an LOD-based method, but it also adds overhead when working with the data, since traversing an octree is more involved than indexing simple arrays. Besides, it is difficult to find an LOD solution that looks satisfactory with voxels.
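In code, the difference between the two approaches looks roughly like this (a simplified sketch, not taken from any particular project):

```cpp
#include <array>
#include <cstdint>
#include <memory>
#include <vector>

// Plain array: one flat vector per chunk, indexed as x + y*N + z*N*N.
// Simple and cache friendly, but memory grows cubically with the view volume.
struct FlatChunk {
    static const int N = 32;
    std::vector<std::uint8_t> voxels;
    FlatChunk() : voxels(N * N * N, 0) {}
    std::uint8_t& at(int x, int y, int z) { return voxels[(z * N + y) * N + x]; }
};

// Octree: space is subdivided recursively and only the leaves carry voxel
// data, which makes LOD natural but traversal and updates more involved.
struct OctreeNode {
    bool leaf;
    std::uint8_t value;                                   // used when leaf
    std::array<std::unique_ptr<OctreeNode>, 8> children;  // used when not leaf
    OctreeNode() : leaf(true), value(0) {}
};
```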

To avoid these problems, we decided that we need something more sophisticated to manage the data better. Our idea is based on the fact that you only really need the data corresponding to voxels that are visible to the player. A grid keeps track of which parts of the map are loaded, and CPU ray tracing is used to decide when the needed data should be loaded.
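A minimal sketch of the bookkeeping part (names and structure assumed for illustration, not a settled design): each chunk coordinate maps to a load state, so a ray that reaches an unloaded chunk can request it exactly once and the loader can mark it loaded when the data is ready.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

enum class ChunkState : std::uint8_t { Unloaded, Loading, Loaded };

struct ChunkKey {
    int x, y, z;
    bool operator==(const ChunkKey& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct ChunkKeyHash {
    std::size_t operator()(const ChunkKey& k) const {
        // Simple hash combine; good enough for a sketch.
        std::size_t h = static_cast<std::size_t>(k.x) * 73856093u;
        h ^= static_cast<std::size_t>(k.y) * 19349663u;
        h ^= static_cast<std::size_t>(k.z) * 83492791u;
        return h;
    }
};

class ChunkGrid {
public:
    ChunkState state(const ChunkKey& k) const {
        auto it = states.find(k);
        return it == states.end() ? ChunkState::Unloaded : it->second;
    }
    // Returns true only the first time a chunk is requested.
    bool markLoading(const ChunkKey& k) {
        if (state(k) != ChunkState::Unloaded) return false;
        states[k] = ChunkState::Loading;
        return true;
    }
    void markLoaded(const ChunkKey& k) { states[k] = ChunkState::Loaded; }

private:
    std::unordered_map<ChunkKey, ChunkState, ChunkKeyHash> states;
};
```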

The design of the data structure will be an active topic throughout the first weeks of the project, and the ideas we have now might change a little. Hopefully we have laid a good enough foundation to start building the system on.