Hi guys,

Just a quick update: we've launched the new Liquid Rock Games website (http://www.liquidrockgames.com) and moved our blog to WordPress.
Posted by Prometheus
We have just released Word Zen Unlimited for USD 2.99!

* Ad-free! No advertisement pop-ups in Word Zen Unlimited.
* Save and continue your game anytime you like, and upload your high scores later.

Purchase available here:
Posted by Prometheus
Hi guys,

We have just updated Word Zen to version 1.1.

Changes include:
1. Fixed a bug where ad buttons did not display correctly.
2. The APK now installs to the SD card on Froyo.
3. Fixed certain unresponsive input issues.
4. Added a return button to exit the app from the main menu.

Download the updated version from the official Word Zen website: http://wordzen.liquidrockgames.com/
Posted by Prometheus
Check out the Word Zen gameplay in action :D

Posted by Prometheus
Some tips on how to rack up scores in Word Zen quickly:

1. Make plural words.
2. Destroy obstacle tiles whenever possible by making them fall two tiles, as they produce the most points.
3. Use shuffle often!
4. If you're stuck in the bonus round, click the Pass button to skip to the next puzzle fast!
5. Know your 2 letter words such as: AD, AH, AR, AW, BI, ED, EF, EH, EL, EM, EN, ER, ES, EX, FA, HA, HM, HO, LA to name a few.
6. Know your 3 letter words such as: GEE, WEE, TEE.
7. Common word endings to include are: -ING, -ED, -ER, -IER, -IEST, and -IES.

Visit the official forum for more tip updates!
Posted by Prometheus
Thanks for the support, guys! For those who haven't tried Word Zen yet, get it from the official website. :D

Posted by Prometheus
Hurray, it's finally finished! Liquid Rock Games is proud to present Word Zen for Android phones and tablets. :D

Official Word Zen website: http://wordzen.liquidrockgames.com/

About Word Zen
Word Zen is a simple yet addictive word puzzle game from Liquid Rock Games with a relaxing, zen-like theme that will appeal to players looking to test their vocabulary.

With thousands of official crossword words to form, the possibilities are endless!

* Swipe letter tiles in any direction to form new words!
* Over 100,000 words to solve as seen in official crossword games!
* Global leaderboards for players to upload their scores and compete for world ranking.
* Use special tiles to your advantage and form new strategies.


Posted by Prometheus
Over the past week I've been performing major surgery on the core Solid State Engine to restructure our physics data format, especially on the batching front. Certain simulations in Aftershock require invisible physics meshes, so I had to implement this long-overdue feature. Previously, batched meshes from our editor were forced to be static physics meshes, which allowed us to do precise world collision. However, there are times when we don't need such precision, or when we want an invisible wall to keep things within a boundary.

With that in mind, I've redesigned the internal batching architecture to take invisible physics meshes into account. To improve efficiency, I created a new file format for defining physics meshes, which can be exported either from Blender3D or from our world editor.

When all is said and done, major surgery can be quite an ordeal. Unexpected bugs started to emerge all over the place; I've probably fixed 20+ of them already, with more still to go. At any rate, with this change our artists will need to re-export their prefabs to the new format. Fortunately the scene structure remains the same, so it's not too big a deal. On the plus side, we get a much better defined physics system and a more efficient file format. In the process, I've also removed the need for physics definition files for visible static meshes, which cuts out a massive load of unnecessary files and lowers our level file sizes! Yeay!

OK, back to debugging to get this working ASAP. Major surgery sucks. *_*
Posted by Lf3T-Hn4D
Updated the grapple machine with two rotating rings.

Posted by Prometheus
Very early weapons FX test for the disruptor weapon.

Posted by Prometheus
Wow, this blog post is long overdue. The past few months have been rather busy for me; I had to juggle a few projects, which slowed work on Aftershock to a crawl.

However, I'm now back up to speed. :-D

So today I'm going to talk about our effects system. It has been in my head for a long time; I spent a lot of thought making sure the idea was sane and useful. I'm rather proud to say the effects system is capable of complex effects that combine lighting, particles, ribbon trails and even animations.

The idea is not a revolutionary one, though. What I came up with is a node graph system to define the behavior of a given effect. Imagine Blender's material node system, but applied to entities instead of materials. If you're quick to notice, this is pretty much a high-level primitive scripting system; only instead of writing scripts, we build nodes. One could also think of it as shader nodes for effects.

However, to keep things simple, I omitted conditional nodes and forced all values to be floats. Hence we only have the float, float2, float3 and float4 value types.

Below is an example of how a flickering light effect can be described with the node system:

Figure 1 - Flicker Light Effect Example

To make node building easier, I've also made the node system auto-convert between the defined types. This mainly means a single float can be automatically converted to a multi-float type. Any other combination is not possible, as it would be ambiguous, as shown below:

Figure 2 - Value Type Auto Conversion
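To illustrate the conversion rule, here's a minimal C++ sketch (the `Value` type and `tryConvert` function are my own illustration, not our actual node code): a single float splats to any wider float type, while every other width mismatch is rejected as ambiguous.

```cpp
#include <cassert>
#include <vector>

// Hypothetical node value: 'width' is 1..4 (float, float2, float3, float4).
struct Value {
    int width;
    std::vector<float> data;
};

// A single float splats to any wider type; other mismatches are ambiguous.
bool tryConvert(const Value& src, int dstWidth, Value& out) {
    if (src.width == dstWidth) { out = src; return true; }
    if (src.width == 1) {                     // splat: float -> floatN
        out.width = dstWidth;
        out.data.assign(dstWidth, src.data[0]);
        return true;
    }
    return false;                             // e.g. float2 -> float3: rejected
}
```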

On top of that, I've allowed nodes to support multiple slot modes. This is similar to method overloading in C++, except that the number of input/output slots must remain the same; only the value types can differ. With this, depending on how the node graph is linked, my node compiler uses the best-fit mode for any node it processes.

Below is an example of how it would work, given a mock-up node graph scenario:

Figure 3 - Multi Slot Mode Use Case Example
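Best-fit mode selection can be sketched roughly like this (a simplification under my own assumptions, not the real compiler): an exact width match wins, otherwise a mode reachable by splatting float inputs to wider types is accepted.

```cpp
#include <cassert>
#include <vector>

// Hypothetical: each mode lists its input slot widths (1=float .. 4=float4).
// The slot count is the same across all modes; only the widths differ.
// Returns the index of the best-fit mode, or -1 if none fits.
int bestFitMode(const std::vector<std::vector<int>>& modes,
                const std::vector<int>& inputWidths) {
    int splatFit = -1;
    for (int m = 0; m < (int)modes.size(); ++m) {
        bool exact = true, splat = true;
        for (std::size_t i = 0; i < inputWidths.size(); ++i) {
            if (modes[m][i] != inputWidths[i]) exact = false;
            // a mismatch is tolerable only if the input is a splat-able float
            if (modes[m][i] != inputWidths[i] && inputWidths[i] != 1) splat = false;
        }
        if (exact) return m;               // exact match always wins
        if (splat && splatFit < 0) splatFit = m;
    }
    return splatFit;
}
```

For an Add node with modes (float,float), (float2,float2) and (float3,float3), inputs (float3, float) would pick the float3 mode by splatting the second input.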

So why do we need this? It is very useful for primitive operations, for example Addition, Subtraction and Multiplication nodes, which require different input and output value types for all possible combinations.

Once the effect node graph is defined, we can simply bind it to any entity that demands the given effect. :-) Obviously the whole implementation and design is much more complex than this, but I felt the node graph idea was the most interesting part to report on. Imagine extending this into the animation system; interestingly, it is very similar to the animation blend tree idea.

Oh yeah, one nice property of the node system is that, when designed carefully, we can reuse the compiled node graph across all instances of the same effect in the scene. Also, if written properly, this is actually much faster than a scripting engine, which I personally feel should be left to handle less CPU-intensive logic like game rules.

There, I hope this is long enough to make up for my lack of blog posts. I will put up some programmer-art effects videos if possible once I get the effects system running and tested. As of now, I already have the node graph system and entity binding in place; I've yet to implement all the node types I plan to add for complex effects. Still, having the base system means we're pretty close to a complete effects system. So stay tuned. ;-)
Posted by Lf3T-Hn4D
Finally another update! After a long holiday and some other projects, it's back to Aftershock. :D

Shown here is an in-game shot of the second craft in progress.

Posted by Prometheus
Phew... a sigh of relief upon completing another project for a client. :D Time for some rejuvenation, and then it's back to Aftershock! >:)
Posted by Prometheus
Decided to go with simpler icon looks for the powerups that players can pick up along the tracks, as the older powerups were harder to identify from afar.

Here's a sample rocket pack powerup.

Posted by Prometheus
A small real-time outdoor test scene to test paged geometry as well as our new water shader materials. Future wish-list improvements include more material blending for terrain surfaces. Further stress tests and optimizations are needed for dense foliage.

Posted by Prometheus
More screenshots of our SSAO implementation, this time with all textures, lighting and post-processing on.

Image 01 Before SSAO

Image 01 After SSAO

Image 02 Before SSAO

Image 02 After SSAO

Image 03 Before SSAO

Image 03 After SSAO

Image 04 Before SSAO

Image 04 After SSAO

Image 05 Before SSAO

Image 05 After SSAO

Image 06 Before SSAO

Image 06 After SSAO
Posted by Prometheus
Touch this powerup and several mines spawn on the track within the area.

Posted by Prometheus
Just completed this missile pack powerup to replenish those used missiles.

Posted by Prometheus
Oooh yeah. Left-Hand just added screen space ambient occlusion (SSAO) to our Solid State Engine. Now all the added details, such as normals, really POP out! >:)

Image01: Without Screen Space Ambient Occlusion (BEFORE)

Image01: With Screen Space Ambient Occlusion (AFTER)

Image02: Without Screen Space Ambient Occlusion (BEFORE)

Image02: With Screen Space Ambient Occlusion (AFTER)

Image03: Without Screen Space Ambient Occlusion (BEFORE)

Image03: With Screen Space Ambient Occlusion (AFTER)

Image04: Without Screen Space Ambient Occlusion (BEFORE)

Image04: With Screen Space Ambient Occlusion (AFTER)

Image05: Without Screen Space Ambient Occlusion (BEFORE)

Image05: With Screen Space Ambient Occlusion (AFTER)
Posted by Prometheus
Early-stage ammo pack powerup FX test.

Posted by Prometheus
Just the other day I tried Aftershock's network play again, and to my dismay the jittering issue was back. I thought I had nailed it, but apparently not. So I delved deeper into the design of my code and did tons of debugging again. After a few test cases, I finally found the issue. It was twofold.

Firstly, my client-side simulation was drifting farther and farther from the server's simulation due to time discrepancies between the two systems. Secondly, my camera simulation code was not entirely in sync with the physics simulation when applying frame smoothing.

After some tinkering, I managed to fix both issues. For the first, I keep a five-frame buffer of snapshot latencies to get the average latency across snapshots. From there, I use it to cap the frame difference between the current physics frame and the given server physics frame.
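As a rough sketch of that first fix (the class, names and exact clamping policy are my illustration, not the actual Aftershock code): average the last five snapshot latencies, then clamp how far the local physics frame may drift from the server's frame plus that average.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical jitter guard: average the last N snapshot latencies
// (measured in physics frames) and clamp the client frame so it stays
// within 'slack' frames of serverFrame + average latency.
class FrameSync {
public:
    void pushLatency(int frames) {
        buf_[head_++ % N] = frames;
        if (count_ < N) ++count_;
    }
    int averageLatency() const {
        int sum = 0;
        for (std::size_t i = 0; i < count_; ++i) sum += buf_[i];
        return count_ ? sum / (int)count_ : 0;
    }
    // Cap the client's physics frame relative to the server's.
    int capClientFrame(int clientFrame, int serverFrame, int slack = 1) const {
        int target = serverFrame + averageLatency();
        if (clientFrame > target + slack) return target + slack;
        if (clientFrame < target - slack) return target - slack;
        return clientFrame;
    }
private:
    static const std::size_t N = 5;   // five-frame latency buffer
    int buf_[N] = {0, 0, 0, 0, 0};
    std::size_t head_ = 0, count_ = 0;
};
```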

Now everything works like a charm, with no more jittering in my test case. In fact it's so smooth compared to my last attempt that I'm mightily proud of it! :-) I know I'm not revealing much about how I did my network syncing here; I'm rather busy working on many things right now. But I promise I'll put up a post with some diagrams later to explain the approach. In short, it's what Gaffer suggested for the server-client approach, though as I found out while doing this, it's not as simple as it sounds. There's slightly more to it than just sync and smooth.

Aside from that, I wish Bullet provided a proper way to selectively simulate a group of rigid bodies. This would help a lot for the prediction simulation after snapping to the server's update state; we don't want to end up wrongly simulating non-networked local rigid bodies, or unimportant rigid bodies that are not part of the network update snapshot.
Posted by Lf3T-Hn4D
Decided to work on the new logo for our Solid State Engine, which is responsible for all the visual goodness you see in Aftershock.

The two gears in the center represent our engine's earlier loading screen, created by Lih-Hern. :D

Posted by Prometheus
I could not resist playing with our new semi-deferred renderer and decided to play around with more real-time lights in our editor. >:) Imagine it with lightning effects emitting from the device...

We'll be able to place a virtually unlimited number of real-time lights, not only in indoor scenes but also in effects like fairy dust... sci-fi weapons blazing... the possibilities are just endless!

Posted by Prometheus
Just thought I'd post some prop updates for the next track level, which has an industrial, nuclear-powerplant feel to it and will be rearranged later. I decided to go with smaller building blocks for the buildings, as that will let me scale the size of the level easily. The track is meant to be surrounded by industrial complexes, so expect lots of labyrinthine, tube-like old machinery in this level.

Posted by Prometheus
Here's a sample drain water test for the industrial level theme. It's a little difficult to see the refraction on the base and the reflection in this YouTube vid, but it's there.

Posted by Prometheus
As mentioned previously, I pointed out two methods to render light volumes. I have successfully implemented the first technique for both point and spot lights. Here's a screenshot of it in action (4 point lights and 1 spot light):

Notice the frame rate on my crappy hardware. This is with shadows, HDR and refraction on. Previously, when we were still using forward rendering, I got about the same ~40 FPS. This proved my speculation right: with all features on, there is no significant frame rate change; in fact, we gain the ability to do cheap lighting! The old renderer would have done a horrible job producing the screenshot above. Hence, deferred lighting FTW! :)

However, since the current method is rather brute force, in that all lights use the stenciling technique, we might hit a limit quite quickly (probably around 30+ lights). To solve this, I have plans for light batching. The idea is to group lights that only cover a small area in screen space, then use shader instancing to render them without the stencil pass. Obviously this will render some redundant portions of the screen, but I believe the savings from fewer render state changes will offset that cost. As for the right screen ratio, I currently have no clue; I'll probably need to build a test level with tons of lights to figure it out. :)
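A grouping decision like that might look roughly as follows (the threshold, the projection math and every name here are my assumptions, not the engine's actual heuristic): estimate each light's projected screen radius and route small lights to the instanced batch, big ones to the stencil path.

```cpp
#include <cassert>

// Hypothetical classifier: approximate a point light's projected screen
// radius from its world-space radius and view-space distance, then route
// it to a render path. 'focal' is the projection scale (cot(fov/2));
// 'threshold' is the screen ratio below which the stencil pass stops
// paying for itself and instanced batching wins.
enum class LightPath { Stencil, InstancedBatch };

LightPath classifyLight(float worldRadius, float viewDistance,
                        float focal, float threshold = 0.1f) {
    if (viewDistance <= worldRadius)        // camera inside the volume:
        return LightPath::Stencil;          // treat as a large light
    // projected radius in normalized screen units
    float screenRadius = worldRadius * focal / viewDistance;
    return (screenRadius < threshold) ? LightPath::InstancedBatch
                                      : LightPath::Stencil;
}
```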

As for the second technique I mentioned in the last post, I intend to use it for custom non-convex light volumes. It lets me scrap the light-group masking idea I had, which not only complicated rendering but also gave the level designers a lot of headaches. The goal of these custom light volumes is to let level designers define volumes that will not leak into unwanted areas due to the lack of shadowing; for example, a spot light casting across an adjacent room.

So there: deferred lighting conversion, success! Woot! :-D I must say I have learned a lot over the course of this rendering "upgrade". I finally understand the trick of reconstructing view-space position from the depth buffer. I had some trouble figuring out how to generate proper far-clip-plane coordinates with a volume mesh on the screen, but thanks to nullsquared the genius, who solved it for me in a forum post I found. Still, let's not get into the details, lest I become too long-winded; which I already am. :-P That's all for this post.

Posted by Lf3T-Hn4D
Local lights are the main factor that makes deferred lighting so appealing. Because they are rendered in screen space, the process is fillrate-bound instead of vertex-bound; in a forward renderer, for every light lighting a geometry, we have to send the whole geometry to the GPU for rendering.

However, deferred lighting incurs the kind of fillrate issues typically encountered with particle effects. In the naive approach, one renders a quad on the screen for each light. This works well for small attenuated lights, but not at all for lights that cover a huge portion of screen space. Hence we would like a solution for optimal lighting where only the affected pixels are rendered.

The answer to that problem is light volume rendering. Instead of rendering quads on the screen, we render volume meshes that represent the lights: a sphere for a point light, a cone for a spot light. Since these are true meshes rendered in the scene, we can use the Z-buffer depth test to cull areas that are occluded (Figure 1 - A). Unfortunately, this is not optimal most of the time: with simple volume mesh rendering, unaffected surfaces behind the volume mesh still get uselessly rendered. Another option is to render the backfaces of the volume with a "greater equal" depth test (Figure 1 - B). That culls the unnecessary surfaces behind the light volume, but at the cost of never culling surfaces in front of it. Hence these naive methods do not work well at all for what we are trying to solve.

To illustrate, let's look at a 2D visualization of a given scenario:

Figure 1 - Naive light volume approaches

Interestingly, there are a few methods to solve this, and they all use the stencil buffer. The idea in fact comes from the good old stencil shadow volume rendering technique; there are many articles on the topic if anyone is interested. To summarize, these are the two more popular techniques: the depth-fail hybrid and the XOR stencil technique.

Let's start with the first. To understand this technique, we first need to note one interesting fact: if you look closely at Figure 1, you will realize that to get the desired result, we can combine the two naive approaches with an AND logic operation!

Figure 2 - AND logic operation
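The AND operation can be simulated per pixel along one view ray using depths alone (my simplification for illustration, not renderer code): approach A keeps pixels whose scene depth is at or behind the volume's front face, approach B keeps pixels at or in front of the back face, and a pixel is lit only when both hold, i.e. the surface sits inside the light volume.

```cpp
#include <cassert>

// Per-pixel depths along one view ray (smaller = closer to the camera).
// Naive A: frontface with a LESS_EQUAL-style test passes when the scene
// surface lies at or behind the volume's front face.
bool naiveFront(float sceneDepth, float volFront) { return volFront <= sceneDepth; }

// Naive B: backface with GREATER_EQUAL passes when the scene surface lies
// at or in front of the volume's back face.
bool naiveBack(float sceneDepth, float volBack) { return sceneDepth <= volBack; }

// AND of the two: the surface is inside the light volume.
bool surfaceLit(float sceneDepth, float volFront, float volBack) {
    return naiveFront(sceneDepth, volFront) && naiveBack(sceneDepth, volBack);
}
```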

Now, with that in mind, let's talk about how the depth-fail hybrid technique works. I call it a hybrid because, instead of the full depth-fail stencil volume technique, which uses two stencil render passes, it applies the first depth-fail stencil pass and then combines the second depth-test pass with the lighting render while applying the generated stencil mask. Effectively, this is equivalent to the AND logic operation, but with better fillrate thanks to using one stencil pass instead of two (I'm referring to stencil shadows).

Figure 3 - Z-Depth fail hybrid

Let's move on to the second technique. It is actually less common, as it is more fillrate-intensive than the previous one. The concept is to flip the stencil value without triangle face culling, so it pretty much mimics the XOR logic operation. Truth be told, this technique can be done in two ways: depth pass (Figure 4, first row) or depth fail (Figure 4, second row).

Figure 4 - XOR logic operation

From what I gather, the latter seems to be the implementation of choice, just like the Z-depth-fail technique. If I understand correctly, this is due to typical scene structure: camera views mostly look at sparse areas rather than having plenty of near-eye occluders, so it saves some fillrate in the stencil pass. Also, interestingly, using the depth-fail variant for this XOR method actually avoids the problems that occur when the camera is inside the light volume.

Having said that, light volume stenciling obviously does not follow the stencil shadow techniques exactly, so the two techniques above come with some assumptions and limitations.

The Z-depth-fail hybrid technique assumes an enclosed convex light volume. It is also not applicable when the camera is inside the light volume (the front face is behind the camera, resulting in no lighting).

The XOR Z-depth-fail technique assumes an enclosed light volume, but thanks to the nature of the XOR operation, the volume can be concave as long as it has no self-intersections. This is interesting behavior, because it means we can have specially controlled light volumes that will not bleed into areas we do not want! (Think point and spot lights with special light volumes that avoid bleeding into adjacent rooms.) On top of that, as mentioned before, this technique does not suffer from the camera-in-light-volume glitch. Obviously, there's always a catch when things look this good: having to turn off backface culling during the stencil pass makes the stencil rendering more expensive.
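The XOR depth-fail variant can also be sketched per pixel (my simplification; real stenciling operates on rasterized faces): with face culling off, flip a stencil bit for every volume face along the pixel's view ray that fails the depth test, i.e. lies behind the scene surface. For a closed volume, an odd flip count means the surface is inside, which is exactly why concave volumes work.

```cpp
#include <cassert>
#include <vector>

// Hypothetical per-pixel XOR depth-fail: 'faceDepths' are the depths of
// all light-volume faces crossed by this pixel's view ray (a closed
// convex volume contributes two; a concave one contributes more). A face
// "fails" the depth test when it lies behind the scene surface; each
// failure flips the stencil bit. Odd parity => surface inside => lit.
bool xorDepthFailLit(float sceneDepth, const std::vector<float>& faceDepths) {
    bool stencil = false;
    for (float d : faceDepths)
        if (d > sceneDepth)      // depth fail: face behind the surface
            stencil = !stencil;  // flip without face culling
    return stencil;
}
```

For a concave volume spanning two lobes, a surface between the lobes correctly ends up unlit because an even number of faces lies behind it.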

Phew~ this is a long topic, and I'm still not done yet. To think that most of the reference slides I could find online cover these methods in two slides. O_O Anyway, to conclude this post: I've outlined the two methods I will integrate into our deferred lighting pipeline. My next blog post will cover when to use each of them. ;-)
Posted by Lf3T-Hn4D
I have finished porting directional shadow casting and refraction to the new system, which means we now officially have all the original features running on a deferred lighting renderer. A test run with all features on (single shadow map, HDR and refraction) got me an average of 40 fps at 1024x768. Overall, that's pretty much a 10 fps dip from the previous test, mostly due to the cost of shadow casting; it shows how expensive shadow rendering is. I'm a bit disappointed, since it makes PSSM useless even on fast machines. I'm planning to let artists mark batched entities as non-shadow-casting in the editor, to narrow shadow casting down to where it matters. Hopefully this will improve shadow performance.

Aside from that, I've changed the depth G-Buffer format to PF_FLOAT16_GR, adding one more channel for a material ID. I concluded we needed this because of a limitation common to any deferred renderer: handling different materials with different lighting properties. The main reason I added it so soon is that our tree leaves and grass use custom shading, which kept us from lighting them through the deferred system. That would be fine for the first level, since it has a single light for the whole scene, but we have plans for night scenes, which wouldn't work well. With material IDs introduced, the custom shading for leaves and grass is now done in the deferred stage. This potentially lets us extend the idea to other material types, like cloth.

Unfortunately, since we are trying to limit the G-Buffer's size, we have only a few channels in which to store data. This means any material type that requires extra info cannot be integrated; unless we introduce 64-bit buffers, it's just not possible.

At any rate, with this done, our artists can now happily build outdoor night scenes with lights affecting grass and tree leaves. My next move is the major feature of deferred lighting: spot and point lights. Since this post is already getting long, I'll leave my local-lighting thoughts for the next one. So stay tuned. ;-)
Posted by Lf3T-Hn4D
I finally got to the stage of being able to render our first level using the deferred lighting renderer. It's still not complete, however: only the default materials required for level01 have been updated, and due to these changes, refraction needs to be redone with a different approach.

Along the way, I hit a few problems. The first was my own mistake: when designing the G-Buffer, I assumed I would be able to construct the lighting pass with just depth and normal. I was wrong; I needed the specular power value as well. So it was back to the drawing board, and I settled on the least accurate model, R8G8B8A8, storing a compacted view-space normal in the RGB channels and the specular power in the alpha channel. Interestingly, it turned out pretty well. So much for the "not accurate" claims in the CryEngine 3 presentation slides. Personally, I find the inaccuracy indistinguishable. Besides, with good textures from the artists, this small error isn't a big deal, especially considering that we are not trying to achieve realism. What we want is beauty and style. :-)

Another issue was the way Ogre does its rendering. For every render target, Ogre performs a scene traversal to find all visible renderables and render them. I found this unacceptable: it means Ogre traverses the scene at least twice, first for the G-Buffer stage and again for the final compositing and forward rendering stage. That's a waste of CPU resources, so I ended up listening to the render queue events during the G-Buffer pass and keeping my own copy of all queued renderables. I then manually inject them into the render queue during the final compositing stage with a custom subclass of SceneMgrQueuedRenderableVisitor that refetches the right material technique based on the new material scheme.

And the end result? Our first level running at ~40-70 fps, with an average of 50 fps at 1024x768. This is with HDR on but without shadows. Not too bad for a crappy 8600GTS.

One interesting thing to note: since the G-Buffer stage doesn't require much texture sampling, it renders really fast. Because of that, and because we keep the Z-buffer intact throughout the whole process, we actually gain some performance in the final compositing pass thanks to early Z-out. So you lose some, you win some. :-P (In theory, a Z-only pre-pass before filling the G-Buffer might speed things up further in a complex scene, but it would also increase the batch count, so I'm not sure it's worthwhile.)

Unfortunately for us, because we didn't plan for deferred lighting from the start, we have to live with some bad decisions made in the past. One notable issue is that, in the worst case, our G-Buffer stage requires the diffuse, normal and spec maps: for an alpha-rejection material, we need to sample the diffuse map's alpha channel for the rejection test and the spec map for the specular power. That means sampling at least two textures per material during the G-Buffer stage, which is not ideal; we should sample as few textures as possible in this pass.

That said, if I could fix this, I would store specular power in the normal map's alpha instead of the spec map's, so that typically only one texture sample is needed in the G-Buffer stage. This would also free the spec map's alpha for a reflection factor for envmap-reflective materials: a win-win solution (one less reflection-factor texture). Sadly, we're already a long way into art asset creation, and changing this now would mean loads of work fixing the old textures and materials.
Posted by Lf3T-Hn4D
I've begun working on light buffer generation. While tackling this, I realized my normal buffer generation isn't accurate. I'm compressing the view-space normal to just XY, with Z reconstructed. While 16-bit float keeps the accuracy pretty well, we lose Z's sign, and that doesn't bode well for us: although Z is positive-facing most of the time, there are bound to be edge cases where it isn't, especially when a normal map distorts a surface's normal. So I went searching for a better normal buffer encoding, and to my surprise, someone by the name of Aras had done the homework for me. Thanks, Aras! :)

Just so you know, I'm not a math junkie, so deciphering that page took me quite a while. I decided to pick two methods from the selection to test: stereographic projection and the Cry Engine 3 method. From what I understood, Cry Engine 3's method sacrifices accuracy in X and Y for a very good Z value, while stereographic projection produces a more even distortion. However, since we're using a two-channel 16-bit float G-Buffer format, the data loss is pretty insignificant either way. After some testing, I concluded that both methods produce almost identical lighting; the difference is unnoticeable to the naked eye. Hence, I decided to use the cheaper of the two, which happens to be stereographic projection.
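For reference, stereographic projection stores only two components of the unit view-space normal and recovers the full vector, sign of Z included, on decode. A minimal sketch (the struct and function names are mine, not the engine's):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Stereographic projection: store two components of a unit normal.
// Assumes n is normalized and n.z != -1 (the projection's singular point).
Vec2 encodeNormal(const Vec3& n) {
    float s = 1.0f / (1.0f + n.z);
    return { n.x * s, n.y * s };
}

// Exact inverse (up to storage precision): reconstructs all three
// components, including a negative Z, which plain XY storage cannot do.
Vec3 decodeNormal(const Vec2& e) {
    float d = 2.0f / (1.0f + e.x * e.x + e.y * e.y);
    return { e.x * d, e.y * d, d - 1.0f };
}
```

A normal with negative Z, such as (0.6, 0, -0.8), round-trips correctly, which is exactly the edge case that broke the old XY-only encoding.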

So.. Normal G-Buffer fixed!

Now, as we know, shadow casting is a very expensive process, so for the most part we want to avoid it. However, this leads to one problem: light leaking. Given a light in a room, if the light is not shadow-casting, it will leak through the wall into the adjacent room, which is ugly. So how does one fix this? With forward rendering, we had the flexibility of selective light masking, which worked very well for this case; but how do we bring it to the deferred model? From what I can see, my best bet is the stencil buffer. However, I'm not sure how cheap stencil operations are: from what I've read, clearing the stencil buffer is expensive, and the stencil is only 8 bits on any modern card (the other 24 bits being the depth buffer). Also, for fillrate optimization, we want to mask out the depth bounds of the lighting volume for each local light we render. That takes up 1 bit of the stencil buffer, leaving only 7 for light grouping.

I am not familiar with the stencil buffer and don't know exactly what cost it imposes on rendering, but writing and switching the stencil ref value during the G-Buffer phase seems scary to me: I would have to set the stencil ref value for every Renderable being rendered. From what I gather, the best way to implement this is to add a RenderObjectListener during the G-Buffer render state, set the stencil ref value for each renderable in the callback, and remove the listener once the G-Buffer mode is done. However, there is one issue: since the callback deals with a Renderable, I have no way to look up its light mask, because the light mask is set on the MovableObject. As of now, I have no clue how to handle this.
Posted by Lf3T-Hn4D
I started working on our deferred lighting / light pre-pass / semi-deferred / whatever-you-want-to-call-it renderer. Right now, all it does is generate the appropriate G-Buffer.

As simple as that looks, it wasn't for me, since our engine has a pretty complex mix of custom materials. I had to restructure the way we sort our render queues to filter unwanted meshes out of the G-Buffer pass. One drawback of our current system is that our models consist of both shaded and unshaded materials, e.g. buildings with holographic ads. The other is the AO boxing overlay, which shares a vertex buffer with the mesh being overlaid. So in the end, I had to hijack them away during the renderableQueued() callback stage. This method feels ugly, but I can't think of any other way to solve it; since the problem is mostly about filtering submeshes, there's no way to use Ogre's visibility flags.

Nevertheless, it's done, and I'm beginning to see how to build this renderer in the most optimal way possible (in Ogre). I must admit that the reason I'm working on this now is all thanks to the Ogre dudes; without the recently added features, I probably wouldn't be. So let me give my thanks to Noman and dark_sylinc for making this possible. :) Oh, I should also thank Google, since they technically funded Ogre's compositor improvements.

As always, I like posting some graphics since I'm a graphics person. So, without further ado, here they are:
G-Buffer normal:

G-Buffer depth:

And since the depth buffer looked so strangely satisfying, here are two more just for kicks.

Don't you think they set the post-apocalyptic mood? :-)
Posted by Lf3T-Hn4D
Liquid Rock Games and Project Aftershock. All Rights Reserved.