Killzone Shadow Fall is one of the PS4’s debut titles, and it will serve as a showcase of just what the PlayStation 4 is capable of. The game boasts vastly improved textures, huge environments and amazing lighting effects. In our previous analysis of Killzone Shadow Fall, we took a look at Guerrilla Games’ own post-mortem PDF of the graphics, and what they thought of the PS4 hardware. The lighting in Killzone Shadow Fall, however, is what really sets the game apart from its predecessors, and Guerrilla found it so important that they created another 100+ page PDF presentation to show off just how powerful their engine (and the PS4) are.
From the previous analysis, we know that the geometry pass of Killzone Shadow Fall takes up a large share of the PS4’s GPU time. Guerrilla are the first to admit that this isn’t the biggest contributor to the improved graphics, and indeed call it an “incremental image quality” improvement. The real stars here are the lighting effects, the range of materials in each scene, and a host of other engine improvements.
HDR (High Dynamic Range) lighting has been around for a long time; as far back as 1990 it was being used in driving simulations to improve realism. But it wasn’t until 1997 that we gamers got our first real taste of the technology, when it appeared in Riven: The Sequel to Myst. Then, in 2003, Valve wowed E3 when they released a demo movie of their Source engine rendering a beautiful city landscape, all lit with the power of HDR. A year later, in 2004, Valve announced Half-Life 2: Lost Coast. Meanwhile, not to be outdone, Epic Games showed off their Unreal Engine 3. Now, of course, things are different, and virtually every major title with impressive graphics touts High Dynamic Range lighting. So, what exactly is HDR?
The whole purpose of HDR is to improve the contrast ratio of an image, allowing much better preservation of detail – particularly in the darkest and brightest areas. Detail is allowed to survive rather than being ‘clipped’ to pure black or white and blending into the darkest or lightest parts of the image. Nvidia have a neat summary of this: “bright things can be really bright, dark things can be really dark, and details can be seen in both”.
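To make the clipping point concrete, here is a minimal sketch (my own illustration, not Guerrilla’s code, with made-up luminance values) contrasting a clamped low-dynamic-range image with HDR values run through Reinhard’s simple tone-mapping operator:

```python
# Hypothetical linear luminances in a scene; 1.0 = "paper white",
# the sun on water might be many times brighter.
scene = [0.02, 0.5, 1.0, 4.0, 16.0]

# Low dynamic range: anything brighter than 1.0 clips to pure white,
# so a bright cloud (4.0) and the sun (16.0) become indistinguishable.
ldr = [min(lum, 1.0) for lum in scene]

def reinhard(lum):
    """Reinhard tone mapping: compress HDR values into 0..1 for display
    while keeping bright values distinct instead of clipping them."""
    return lum / (1.0 + lum)

hdr = [reinhard(lum) for lum in scene]

print(ldr)  # 4.0 and 16.0 both clip to 1.0 -> detail lost
print(hdr)  # 4.0 -> 0.8, 16.0 -> ~0.94 -> still distinguishable
```

The point is simply that with floating-point luminance plus a tone-mapping step, “bright things can be really bright” without all of them collapsing into the same flat white.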
This is especially fantastic for water-based reflections, or for any reflection that isn’t a 1:1 match for the brightness of its original source. Think about how the sun looks on a body of water: picture a small lake – the water isn’t reflecting all of the sun’s brightness back at you, right?
HDR has improved with every release of a new Shader Model (DX11 supports Shader Model 5.0), which allows the compression of HDR textures with little to no loss in quality. This kind of efficiency reduces both the memory footprint and the bandwidth cost of the technique – making HDR ‘cheaper’ to implement.
Physically Based Lighting:
In their PDF, Guerrilla break the physically based lighting system down into two distinct areas:
Physically Based Shading Model: Responsible for surface response to incoming light depending on various surface physical properties
Physically Based Lights: Responsible for light flux calculation in the scene depending on various lights with physical properties
This isn’t new – lighting engines have been evolving in this direction for some time. Unreal Engine 4 in particular has incredible lighting (which we’ll discuss in a moment). The technique is light years ahead of the previous approach to lighting, where in-game objects (which could be any number of things in the game environment) had some of their lighting predefined at creation time. That caused a number of issues, chiefly that it increased the workload for artists – especially if an object was reused across several scenes, or if the lighting in the game world suddenly changed (say, in a horror game where the lights suddenly dim). The modern way is for the artist to select from a huge range of materials (copper, stone, various cloths and so on), which then interact much more realistically with any light source. Then there is image-based lighting, which features heavily in Shadow Fall – more on image-based lighting here
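The core idea above – store physical properties on the material instead of baking lighting into the asset – can be sketched in a few lines. This is purely illustrative (the material values and the `shade` helper are my own invention, not Guerrilla’s engine), but it shows why a physically based asset survives a lighting change with no artist rework:

```python
from dataclasses import dataclass

@dataclass
class Material:
    """Physical surface properties the shading model reads at runtime."""
    albedo: float    # fraction of light diffusely reflected (0..1)
    specular: float  # strength of the mirror-like highlight (0..1)

# Example materials an artist might pick from (values are illustrative).
COPPER = Material(albedo=0.25, specular=0.95)
STONE  = Material(albedo=0.60, specular=0.05)

def shade(mat: Material, light_intensity: float, n_dot_l: float) -> float:
    """Diffuse response: computed from the material plus whatever light
    is actually in the scene, rather than pre-painted onto the asset."""
    return mat.albedo * light_intensity * max(n_dot_l, 0.0)

# The same stone asset reacts correctly when the lights dim,
# with no rework and no baked-in lighting:
print(shade(STONE, 1.0, 0.8))  # bright scene -> 0.48
print(shade(STONE, 0.1, 0.8))  # horror-game dim-down -> 0.048 (10x darker)
```

Because the response is a function of (material, light), the horror-game “lights suddenly dim” case falls out for free – the engine just feeds in a smaller light intensity.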
Guerrilla Games admitted this was a huge undertaking and learning curve, and in the PDF they had the following bullet points, which help illustrate just how much of a shift this was away from how they were used to doing things: 1. Big step for the studio. 2. Artists had to adapt. 3. Training and workshops. 4. Production and quality win.
This gives the artists a huge amount of control compared to what they were used to. “Time to say goodbye to point lights” is another of the messages in the PDF, and serves to illustrate how the studio switched to area lights for Killzone: Shadow Fall.
Unreal Engine 4 – How does its lighting compare to Killzone Shadow Fall?
Much talk has been going on about Unreal Engine 4 and its real-time lighting systems. SVOGI (Sparse Voxel Octree Global Illumination – click for Nvidia link) was originally implemented in the Unreal Engine 4 demo that we saw, but it appears this is no longer the case: Epic have ‘cut’ SVOGI and are instead going to use the older Lightmass technology. “Why did you cut it?” is what many gamers and developers are screaming across the internet, and unfortunately, it seems that where we are in terms of raw hardware is just not enough to get the technique working in games. Epic had hoped that both the PlayStation 4 and Xbox One (AKA Durango back then) would provide more GPU grunt than they did. Epic had created the Samaritan demo, which ran on a single GTX 680 GPU from Nvidia. That card provides just over 3 TFLOPS of compute power, and the Samaritan demo was reported to use around 2.5 TFLOPS.
Therefore, because the consoles simply don’t have the GPU power Epic wanted, and even mid- to low-range PC GPUs aren’t powerful enough, they decided the technique was too ‘expensive’. This can be seen in a reply to a post on Epic Games’ own forum:
Hey guys, rendering team lead from Epic here.
Fully dynamic lighting and precomputed lighting are just two tools in our UE4 toolbox. We have games being made like Fortnite that are using fully dynamic lighting, no lighting build times, and the game has full flexibility to change what it desires at runtime. In the case of Fortnite, this is used to great effect with building and harvesting of resources. We don’t yet have a solution for dynamic GI in the fully dynamic lighting path, this is something we hope to address in the future.
On the other hand, using precomputed lighting where you don’t need dynamicness frees up a lot of processing power. The infiltrator demo that we released at GDC leverages this heavily. In short: we would have had to scale down Infiltrator massively without precomputing some of the lighting. There are over 1000 lights in some of the scenes, and about half of those cast shadows. Dynamic shadows have a huge cost on graphics hardware, but precomputed shadows are very cheap. Our general purpose reflection method (Reflection Environment) also relies on pre-captured probes. By having the general purpose reflection method be cost efficient, we were able to.
Now there are some workflow costs to precomputed lighting (creating lightmap UVs, build time), and this is something we hope to improve significantly in the future.
Precomputed lighting is also really useful for scaling down, to low end PC, 60fps, mobile, etc.
In summary: UE4 supports multiple tiers of lighting options, and games can use what suits them best. Fully dynamic lighting provides maximum interactivity, editor workflow and game design flexibility, precomputed lighting provides maximum quality and performance.
Specular vs diffuse light:
I did cover this in my video, but I just wanted to go over it once more (with images) to show the differences more clearly. Diffuse light is what you get when light hits a diffuse (rough) surface and scatters in many different directions. If you think about it, this is logical, as virtually no surface is perfectly smooth and free of pits, bumps or texture (along with, of course, other factors). Specular light, by contrast, is almost a mirror image of the incoming light: it reflects at an angle matching the angle of the light arriving from the source. A simple example: think of a flashlight in a dark room in a horror game, when you stumble across a mirror in a bathroom.
Imagine that the light being reflected off the mirror is ‘specular’ – closely following the angle of your character as you move. However, the light which bounces off the tiles (or even light that has already been reflected off the mirror and then bounces off the tiles) will scatter, or diffuse. For more on this, check out this link
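The flashlight-and-mirror example can be sketched with the two classic shading terms – Lambert diffuse, which ignores the viewer entirely, and a Blinn-Phong-style specular term, which spikes only when you look along the mirror direction. The vectors and shininess value here are my own assumed setup, not anything from Guerrilla’s PDF:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

normal = (0.0, 1.0, 0.0)             # tiled floor facing up
light  = normalize((0.0, 1.0, 1.0))  # direction towards the flashlight

def diffuse(n, l):
    """Lambert: scattered light, identical from every viewpoint."""
    return max(dot(n, l), 0.0)

def specular(n, l, view, shininess=64):
    """Blinn-Phong: bright only when the view lines up with the
    mirror reflection of the light (half-vector near the normal)."""
    h = normalize(tuple(a + b for a, b in zip(l, view)))
    return max(dot(n, h), 0.0) ** shininess

mirror_view = normalize((0.0, 1.0, -1.0))  # looking along the reflection
side_view   = normalize((1.0, 0.2, 0.0))   # looking from the side

print(diffuse(normal, light))                # same from both viewpoints
print(specular(normal, light, mirror_view))  # 1.0 -> the hot spot
print(specular(normal, light, side_view))    # ~0 -> no hot spot
```

This is exactly the behaviour described above: the diffuse contribution from the tiles looks the same wherever your character stands, while the specular flash from the mirror appears only when your viewing angle matches the reflected beam.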
Shadow Fall is clearly the shape of things to come, and as developers Guerrilla Games said in their developer commentary, many of these techniques were available to them previously. It is the combination of the techniques that makes the game look so visually impressive next to the previous generation. The PlayStation 4 is certainly going to be one impressive machine, and while some developers clearly hoped for a little more GPU grunt under the hood, there is no getting away from the clear advancements that will be made over the next few years. Killzone Shadow Fall (along with Infamous Second Son) will be reason enough for me to buy the PlayStation 4 ASAP.
When you consider just how much the PS3’s visuals improved over time, imagine what we’ll be seeing with such talented developers working on a system like the PS4. I never expected to see visuals as impressive as The Last of Us on the PS3, and yet, there it is.