I've had some spare time to waste on my SSGI implementation.
I had two VS projects: one dating back to January (whose renders are available in my previous post about SSGI) and the other last modified in March.
I didn't remember exactly which modifications I had made, but the second one looks different from the first, although it suffers from the same problems.
In my previous post I spent some words on how impossible it was to fine-tune SSGI... let's look at one of those shots again.
As you can see, there are many problems:
1- "E" gets blurred. Since I gather samples around a pixel and every surrounding pixel emits light, the resulting image looks "haloed".
2- There's fake lighting. By fake fighting I mean the shape gets too much light. I'd like SSGI not to start a lighting war against a standard lighting model. SSGI should add a modest contribution, it's not supposed to create fake lights.
3- You can clearly see an halo representing my filter kernel size, this is awful and gets worse when you move the camera. This is due to the SSAOish nature of the algorithm, but there are some tricks to reduce this effect.
4- This is impossible to see, as it's related to the way I'm combining the diffuse and SSGI buffers. Since the contribution is too much heavy, I've been forced to scale SSGI buffer AND blend it with diffuse buffer. I'd like to be able to simply add SSGI buffer.
Since the algorithm suffers from the aforementioned problems, the results are:
1- it's impossible to clearly see fine texture detail. That blurred look could be OK for a dream-like scene, but it's not going to help you render realistic scenes.
2- coherence with local lighting is lost. Lights that are supposed to gently illuminate the geometry produce overly bright areas; it's going to be a nightmare to tune.
3- the effect is quite awful when moving the camera. Do I need to say more?
4- the artist isn't able to control the overall look of the scene.
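To make the gather behind problems 1 and 3 concrete, here's a rough CPU-style sketch of that SSAO-like loop. It is not the actual shader (the G-buffer layout, the kernel offsets and the weighting are placeholder assumptions), but it shows how every surrounding pixel ends up acting as an emitter, which is where the blur and the kernel-sized halo come from.

```cpp
// Rough CPU-style sketch of the SSAO-like gather, NOT the actual shader:
// buffer layout, kernel offsets and weighting are placeholder assumptions.
// Every sample around the pixel acts as a small emitter, which is exactly
// where the blur and the kernel-sized halo come from.
#include <cmath>
#include <cstddef>

struct float3 { float x, y, z; };

static float3 add3(float3 a, float3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static float3 mul3(float3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }
static float  dot3(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static float3 normalize3(float3 v)
{
    float len = std::sqrt(dot3(v, v));
    return len > 0.0f ? mul3(v, 1.0f / len) : v;
}

// One tap per kernel sample: the neighbour's diffuse colour "bounces" towards
// the centre pixel, weighted by the centre normal. albedo/normal/depth stand
// for full-resolution G-buffer contents laid out row-major.
float3 gatherSSGI(int px, int py, int width, int height,
                  const float3* albedo, const float3* normal, const float* depth,
                  const int (*kernel)[2], std::size_t numSamples)
{
    float3 n      = normal[py * width + px];
    float  d      = depth [py * width + px];
    float3 bounce = { 0.0f, 0.0f, 0.0f };

    for (std::size_t i = 0; i < numSamples; ++i)
    {
        int sx = px + kernel[i][0];
        int sy = py + kernel[i][1];
        if (sx < 0 || sy < 0 || sx >= width || sy >= height)
            continue;

        // Crude screen-space direction towards the sample; a real shader would
        // reconstruct view-space positions from depth instead of mixing units.
        float3 dir = normalize3({ float(kernel[i][0]), float(kernel[i][1]),
                                  depth[sy * width + sx] - d });

        float w = dot3(n, dir);
        if (w > 0.0f)
            bounce = add3(bounce, mul3(albedo[sy * width + sx], w));
    }
    return mul3(bounce, 1.0f / float(numSamples));
}
```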
I took the modded version and looked for possible solutions. After some work and testing, I came up with a version I think is better than the previous one. It still suffers from some problems, like haloing, but I have some ideas to improve it further.
My goal was to create a "gentle" SSGI shader that adds subtle detail to the scene.
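To make "gentle" concrete, here's a tiny sketch contrasting what the old version forced me to do (scale the SSGI buffer AND blend it with the diffuse buffer) with the simple additive contribution I'd like to get away with. The scale and blend factors are made-up knobs, not values from the engine.

```cpp
// Tiny sketch of the two combine strategies; ssgiScale and blend are made-up
// knobs, not values from the engine.
struct float3 { float x, y, z; };

static float3 lerp3(float3 a, float3 b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// What the old implementation was forced to do: scale the SSGI buffer down
// AND blend it with the diffuse buffer, or it would overpower direct lighting.
float3 combineScaleAndBlend(float3 diffuse, float3 ssgi, float ssgiScale, float blend)
{
    float3 scaled = { ssgi.x * ssgiScale, ssgi.y * ssgiScale, ssgi.z * ssgiScale };
    return lerp3(diffuse, scaled, blend);
}

// What a "gentle" SSGI should allow: a small, purely additive contribution
// that never fights the standard lighting model.
float3 combineAdditive(float3 diffuse, float3 ssgi, float ssgiScale)
{
    return { diffuse.x + ssgi.x * ssgiScale,
             diffuse.y + ssgi.y * ssgiScale,
             diffuse.z + ssgi.z * ssgiScale };
}
```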
No SSGI
SSGI
It's hardly noticeable, but things get better when a simple dot(n,l) lighting term is added to the reference image. Here's a closer shot.
No SSGI
SSGI
The "cool" look of the old shots, a-la photon mapping, is still here but is noticeable when looking at small, flat, details:
No SSGI
SSGI
I also ran a simple test on "Sponza Atrium". The SSGI haloing is still there and the result is a bit too bright, but as I said, it's now easily tweakable.
No SSGI
SSGI
I'm planning to improve the algorithm; in particular, I'd like to remove the halos and integrate it into a full-featured render system.
Saturday, September 20, 2008
Tuesday, September 16, 2008
(Almost) Back from holidays! Ray Marching inside.
My holidays are almost over. Next Monday I'm expected to be back in front of my PC, working on some things we didn't complete last month.
I know I'm going to have a headache soon, since ATM we have two branches of the engine. The first branch is the "good" one and dates back to mid-July; the other has been our testing lab for an entire month.
Of course it's buggy but it looks very promising. We're going to fix the second branch and then we'll merge the two. I can't wait to see the new stuff running on the clean version of the engine.
I hope someday I'll be able to post a couple of pics and comment on them.
Now, let's move to something better: programming stuff.
Unsurprisingly, as this is a recurring topic in CG, it seems ray marching/ray tracing is going to be the "next big thing". Again.
Maybe you'll want to check this link and have a look at the OMPF forums.
I'm not going to share my opinion about the role rays will play in next-gen engines, as "it's hard to make predictions - especially about the future", although I have to admit I've always been fascinated by "alternative" (non-polygon-based) rendering algorithms.
To put it simply, a couple of months ago I decided to write a (very) simple ray marcher in my spare time, just to start playing with rays and GPUs.
Today I've added lighting and I'd like to share some pics and links.
First of all, if you're interested in ray marching or distance field rendering, you should jump to IQ's website. There's a lot of great stuff there; he also has a developer journal on gamedev.net which definitely deserves (more than just) a look.
I started playing with a heightmap renderer, just to test the basic ray marching machinery.
The idea is quite simple: draw a full-screen quad, fire a ray for each pixel on screen, and step along it until it collides with the underlying "geometry", which in the case of a heightmap is a 3D point made up of the texel coordinates and the color of the texel itself.
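Stripped of all details, the march looks like the sketch below: plain C++ rather than the actual pixel shader, with sampleHeight() standing in for the heightmap texture fetch and arbitrary step settings.

```cpp
// Minimal sketch of the per-pixel march in plain C++, not the actual pixel
// shader. sampleHeight() is a stand-in for the heightmap texture fetch
// (an analytic function here so the snippet is self-contained); step count
// and step size are arbitrary.
#include <cmath>

struct float3 { float x, y, z; };

static float sampleHeight(float x, float z)
{
    // Placeholder terrain instead of an image: a couple of sines.
    return 0.5f + 0.25f * std::sin(x * 0.1f) * std::cos(z * 0.1f);
}

// Marches a ray from the camera through one pixel until it drops below the
// heightfield; on a hit, the colour texel at hit.x/hit.z would be shaded.
bool marchHeightfield(float3 origin, float3 dir, int maxSteps, float stepSize,
                      float3* hit)
{
    float3 p = origin;
    for (int i = 0; i < maxSteps; ++i)
    {
        p.x += dir.x * stepSize;
        p.y += dir.y * stepSize;
        p.z += dir.z * stepSize;

        if (p.y <= sampleHeight(p.x, p.z)) // first point under the terrain
        {
            *hit = p;
            return true;
        }
    }
    return false; // ray escaped: sky / background colour
}
```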
I decided to go for a true 3D ray marcher, which means I can freely move the camera along any axis. After setting up the rays and displaying some stuff, I noticed my ray marcher "featured" awful visual artifacts; I borrowed a binary search step from my parallax mapping implementation and that solved them (a sketch of this refinement follows the shot below). I wanted to render a heightmap à la "Comanche: Maximum Overkill". Here's a shot of the first version:
Vintage power!
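For the curious, this is roughly what that binary search refinement looks like; again a plain C++ sketch with an analytic stand-in for the heightmap fetch rather than the real shader code.

```cpp
// Rough sketch of the binary-search refinement: once the fixed-step march
// finds the first point below the surface, bisect between the last point
// above and the first point below to tighten the hit. sampleHeight() is the
// same analytic stand-in for the heightmap fetch as before.
#include <cmath>

struct float3 { float x, y, z; };

static float sampleHeight(float x, float z)
{
    return 0.5f + 0.25f * std::sin(x * 0.1f) * std::cos(z * 0.1f);
}

static float3 mix3(float3 a, float3 b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// 'above' is the last sample above the terrain, 'below' the first one under it.
float3 refineHit(float3 above, float3 below, int iterations)
{
    for (int i = 0; i < iterations; ++i)
    {
        float3 mid = mix3(above, below, 0.5f);
        if (mid.y <= sampleHeight(mid.x, mid.z))
            below = mid;  // still inside the terrain: move the lower bound up
        else
            above = mid;  // back above the surface: move the upper bound down
    }
    return mix3(above, below, 0.5f);
}
```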
I expected that enabling bilinear filtering and carefully picking the proper subpixel position would be enough to let the GPU smooth my voxels.
Of course that was not the case and I had to write a specific texture fetching routine to smooth them.
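That routine boils down to doing the bilinear blend by hand on the height values the march actually reads. Here's a sketch of it; the row-major image layout and the clamping are assumptions, not the engine's actual texture code.

```cpp
// Sketch of a hand-rolled bilinear fetch on the height values the march
// actually reads; the row-major layout and border clamping are assumptions,
// not the engine's real texture code.
#include <cmath>

static float texel(const float* tex, int w, int h, int x, int y)
{
    // Clamp to the image borders.
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x >= w) x = w - 1;
    if (y >= h) y = h - 1;
    return tex[y * w + x];
}

float sampleBilinear(const float* tex, int w, int h, float u, float v)
{
    // u, v in texel units: split into the integer cell and the subpixel offset.
    float fx = std::floor(u), fy = std::floor(v);
    float tx = u - fx,        ty = v - fy;
    int   x  = int(fx),       y  = int(fy);

    float h00 = texel(tex, w, h, x,     y);
    float h10 = texel(tex, w, h, x + 1, y);
    float h01 = texel(tex, w, h, x,     y + 1);
    float h11 = texel(tex, w, h, x + 1, y + 1);

    // Blend horizontally, then vertically.
    float top    = h00 + (h10 - h00) * tx;
    float bottom = h01 + (h11 - h01) * tx;
    return top + (bottom - top) * ty;
}
```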
Here's the second version:
That's better
I've added a simple dot(n,l) lighting model; note that the lighting is calculated via a "blocky" texture fetch, so it's not as cool as the "geometry" (a small sketch of the shading follows the shot):
Blocky lighting on near geometry
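For reference, a minimal version of this kind of lighting could look like the sketch below: a normal from central differences on the heightmap, then a plain Lambert term. This is a simplified illustration, not the exact shader, and sampleHeight() again stands in for the texture fetch; feeding it raw, unsmoothed height samples is what makes the lighting look blocky.

```cpp
// Sketch of one way to get the dot(n,l) term: a normal from central
// differences on the heightmap, then a plain Lambert factor against a
// (normalized) directional light. Simplified illustration only.
#include <cmath>

struct float3 { float x, y, z; };

static float sampleHeight(float x, float z)
{
    return 0.5f + 0.25f * std::sin(x * 0.1f) * std::cos(z * 0.1f);
}

static float3 normalize3(float3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

float shadeLambert(float x, float z, float3 lightDir, float texelSize)
{
    // Central differences of the height field give the surface gradient.
    float hl = sampleHeight(x - texelSize, z);
    float hr = sampleHeight(x + texelSize, z);
    float hd = sampleHeight(x, z - texelSize);
    float hu = sampleHeight(x, z + texelSize);

    float3 n = normalize3({ hl - hr, 2.0f * texelSize, hd - hu });

    float ndotl = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
    return ndotl > 0.0f ? ndotl : 0.0f; // clamp back-facing light to zero
}
```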
Of course the ray marcher also works with volume textures and functions (check IQ's papers for that).
I plan to improve the heightmap renderer; in particular, I'd like to speed it up by implementing a cone step mapping algorithm. I'd also like to generate the heightmap and color textures in a shader instead of loading them from image files.
Maybe I can get a 4kb intro out of this simple ray marcher...
Friday, July 11, 2008
SSGI
I recently read an interesting post by Wolfgang Engel here.
The idea is to calculate indirect lighting in screen space via a technique which resembles SSAO.
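To give an idea of how small the change from SSAO is, here's a minimal sketch (the names and the Tap struct are purely illustrative, not taken from the engine): the kernel and the depth test stay the same, only what gets accumulated per tap changes: occlusion for SSAO, the sample's colour for SSGI.

```cpp
// Minimal sketch of how small the step from SSAO to SSGI is: the kernel and
// the depth test stay the same, only what gets accumulated per tap changes.
// The Tap struct and both functions are purely illustrative.
struct float3 { float x, y, z; };

struct Tap              // what one kernel sample sees
{
    bool   occludes;    // result of the usual SSAO depth comparison
    float3 albedo;      // colour of the sampled pixel (only SSGI uses this)
};

// SSAO: how much of the kernel occludes the pixel.
float ambientOcclusion(const Tap* taps, int count)
{
    float occ = 0.0f;
    for (int i = 0; i < count; ++i)
        if (taps[i].occludes)
            occ += 1.0f;
    return 1.0f - occ / float(count);
}

// SSGI: accumulate the colour of the same samples instead, so nearby geometry
// bounces (an approximation of) light onto the pixel.
float3 indirectLight(const Tap* taps, int count)
{
    float3 gi = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < count; ++i)
        if (taps[i].occludes) // an occluding sample is also a potential emitter
        {
            gi.x += taps[i].albedo.x;
            gi.y += taps[i].albedo.y;
            gi.z += taps[i].albedo.z;
        }
    return { gi.x / float(count), gi.y / float(count), gi.z / float(count) };
}
```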
I played with something similar a few months ago (in January). At that time, the engine had a very simple rendering pipeline.
After playing with SSGI for a couple of days, here's the result I got:
Now, a few months later, the engine has a complete "render system" but there's no support for SSGI.
The main problem with my implementation was its (lack of) tweakability. Furthermore, it's heavily scene-dependent, and I've been so busy with the engine that I never got the chance to fix some artifacts (partly inherited from my old SSAO implementation).
I also took some comparison shots to show the effect of the SSGI contribution.
Albedo-only
With SSGI
I'd like to find some spare time to play with SSGI again and see what it looks like once it's fixed and applied to an image coming from a complete render system.
If I ever get a good SSGI implementation, I'll definitely post something here.