Sunday, November 23, 2008

Motivating Depth of Field using bokeh in games

I've posted before on higher-fidelity depth of field effects in games. I believe the depth of field effects in current games fall short of delivering the same cinematic emotion as movies and TV. The reason distills down to the commonplace use of the compute-efficient separable Gaussian blur. More expensive methods can reproduce bokeh with crisp circle-of-confusion shapes.
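
To make that difference concrete, here is a minimal sketch in plain C of the two kernel shapes involved (the function names and parameters are mine, not taken from any engine). A separable Gaussian blur uses 1-D weights that fall off smoothly, so a bright highlight bleeds into a soft, shapeless blob; a flat disc mask the size of the circle of confusion spreads the same highlight into a crisp circle.

    #include <math.h>

    /* 1-D weights for one pass of a separable Gaussian blur: smooth falloff,
       so a bright pixel smears into a soft blob. "w" must hold 2*radius+1 floats. */
    void gaussian_weights(float *w, int radius, float sigma)
    {
        float sum = 0.0f;
        for (int i = -radius; i <= radius; ++i) {
            w[i + radius] = expf(-(float)(i * i) / (2.0f * sigma * sigma));
            sum += w[i + radius];
        }
        for (int i = 0; i <= 2 * radius; ++i)
            w[i] /= sum;                      /* normalize to sum to one */
    }

    /* 2-D flat disc mask: constant weight out to the CoC radius, zero beyond,
       so a bright pixel spreads into a crisp circle. "w" holds (2*radius+1)^2 floats. */
    void disc_weights(float *w, int radius)
    {
        int   size = 2 * radius + 1;
        float sum  = 0.0f;
        for (int y = -radius; y <= radius; ++y)
            for (int x = -radius; x <= radius; ++x) {
                float inside = (x * x + y * y <= radius * radius) ? 1.0f : 0.0f;
                w[(y + radius) * size + (x + radius)] = inside;
                sum += inside;
            }
        for (int i = 0; i < size * size; ++i)
            w[i] /= sum;                      /* normalize to sum to one */
    }

Convolving a bright highlight against the first kernel (twice, horizontally then vertically) gives the familiar soft glow; convolving against the second gives a flat-topped disc, which is the bokeh shape visible in the shots below.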

I've realized in casual conversation that many people can't easily recall how the technique is used in television and movies, and they often overlook just how common it is.

So, in this post I've collected a bunch of examples from movies and whipped up some mock images from game screenshots.

This is all just motivation. ;) Ideas for fixing the problem in a practical way are for another post.

Cinema Examples

A few simple shots showcasing bright point lights and specular highlights:



Similar to the above, but with some edge highlights blurred as well. Look at the blurred horizontal edges in the background on the right of the image:


The Dark Knight made extensive use of the effect throughout the entire film. The circles of confusion often have an oval shape due to the Panavision anamorphic lenses used (which compress the image on film to an aspect ratio different from that of the screen). Also, note the specular highlights from the henchman's head and waist:


These circles are larger than in many of the others, and also show the non-uniform characteristics of the disks:


Finally, a subtle image. The amount of blur is minimal, but still distinctive on the lights in the background. Even though it is slight, delivering this effect will be yet another of those required levels of polish for AAA games in the future:



Mock Images
I've doctored up some game screenshots to show what is missing. They're quick and dirty, made in Photoshop with "Lens Blur". In a high dynamic range rendering pipeline, the results would be much better, since the distinctive shapes we're looking for are the result of very high contrast between the highlights and the rest of the background scene.
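
As a back-of-the-envelope illustration of why the dynamic range matters (the numbers below are invented for the example): blurring spreads a highlight's intensity over the area of its circle of confusion, so only values far above the displayable range survive as a distinct shape.

    #include <stdio.h>

    /* Toy illustration with invented numbers: the average intensity a single
       bright pixel contributes after being spread over its CoC disc. */
    int main(void)
    {
        const float kPi = 3.14159265f;
        float radius = 8.0f;                   /* CoC radius in pixels       */
        float area   = kPi * radius * radius;  /* ~201 pixels covered        */

        float ldr_highlight = 1.0f;            /* clamped, low dynamic range */
        float hdr_highlight = 200.0f;          /* unclamped HDR intensity    */

        printf("LDR: %.4f per pixel -> washes out\n",      ldr_highlight / area);
        printf("HDR: %.4f per pixel -> clearly visible\n", hdr_highlight / area);
        return 0;
    }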

First, a Mass Effect scene that would have worked well. Since the background didn't include small point lights, I quickly added several bright points of light to trigger the effect:

Before:


After:


And here is a marketing screenshot for Unreal Tournament. I only generated a mask and applied the blurs, since the image had good contrast to begin with:

Original:


Lens blur:


Gaussian Blur (common in games today, poorer result):


And a zoomed section contrasting the Gaussian blur with the lens blur:


Keep your eyes open as you watch TV or movies, and you'll notice more and more how common this effect is.

Pay attention in games, and you'll notice that we're currently delivering only the concept of "out of focus". That's sufficient to draw attention to the near, middle, or background. But games today are not capturing the artistic and emotional feeling that comes with strong use of the effect.

I have a few prototyping ideas for delivering higher quality depth of field, which I hope to have time to try out soon, though it may well take a while. As new hardware brings more FLOPS, it's also possible that the naive O(n^2 * m^2) approach may simply become affordable soon enough (as was already done in Lost Planet on DirectX 10).
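
For reference, the naive approach can be sketched roughly as below: plain C, single channel, on the CPU, with names such as coc and naive_dof of my own choosing. For every output pixel it examines every neighbor within the maximum CoC radius m and accumulates those whose own circle of confusion reaches it; with n^2 pixels and a (2m+1)^2 neighborhood, that is the O(n^2 * m^2) cost above.

    /* Naive gather depth of field, sketch only (all names are illustrative).
       color[] : n*n source image, one channel for brevity
       coc[]   : per-pixel circle-of-confusion radius, in pixels
       out[]   : blurred result
       m       : maximum CoC radius considered
       A real implementation would run on the GPU, handle full color, and
       weight samples by partial coverage. */
    void naive_dof(const float *color, const float *coc, float *out, int n, int m)
    {
        for (int y = 0; y < n; ++y) {
            for (int x = 0; x < n; ++x) {
                float sum = 0.0f, weight = 0.0f;
                for (int dy = -m; dy <= m; ++dy) {
                    for (int dx = -m; dx <= m; ++dx) {
                        int sx = x + dx, sy = y + dy;
                        if (sx < 0 || sy < 0 || sx >= n || sy >= n)
                            continue;
                        float r = coc[sy * n + sx];   /* neighbor's CoC radius */
                        /* The neighbor contributes only if its circle of
                           confusion actually reaches this output pixel. */
                        if ((float)(dx * dx + dy * dy) <= r * r) {
                            float w = 1.0f / (r * r + 1.0f);  /* spread energy over the disc */
                            sum    += w * color[sy * n + sx];
                            weight += w;
                        }
                    }
                }
                out[y * n + x] = (weight > 0.0f) ? sum / weight : color[y * n + x];
            }
        }
    }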

(Next post, Jeff Russell Depth of Field example comments)

Update: Nice technique writeup in 3DMark11

2 comments:

  1. I love bokeh in photography, so I have also thought about rendering bokeh in real time, but at the time I never actually implemented it.

    IMHO, only a forward approach (splatting a bokeh texture scaled by the CoC radius of each pixel) can create a good effect.

    However, performance will drop for a large CoC because of fill rate. I think O(n^2 * m^2) is really large; it is equal to m^2 times the fill rate of a full screen. E.g., to support a maximum bokeh size of m=64, it needs 4096x the fill rate! So I think brute force will not be possible any time soon.

    I have an idea to accelerate this: downsample the framebuffer into smaller versions, and splat pixels with a larger CoC radius into the smaller versions. This reduces the quality of continuous bokeh (e.g., a line of pixels generates a capsule), but it should look OK for showing isolated, beautiful bokeh shapes (individual bright points). (See the sketch after the comments.)

    Another photography effect, vignetting, is simple but quite rarely seen in games, even though vignetting appears in practically *every* photograph and video.

  2. Good summary there, both of the problem and of some nice examples.

    I did a thesis on post-processing with a specific interest in DoF, using both GPU shaders and GPGPU techniques (read: CUDA).

    I tried several implementations, both from the gaming industry and from animated movies, and personally I liked the method that Pixar presents in this paper the most:
    http://graphics.pixar.com/library/DepthOfField/

    Mostly because of the large amount of blur that can be achieved. Also, with a reasonable amount of work, you can handle the foreground. Unfortunately, the implementation was a little messy.

    I never found enough material about the Lost Planet implementation to try that one.

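The splatting ("scatter") approach described in the first comment could look roughly like the sketch below. This is plain C, single channel, on the CPU, with names of my own choosing; a real implementation would draw textured bokeh sprites on the GPU, blend in HDR, and, as the comment suggests, route large-CoC pixels into downsampled buffers to bound fill-rate cost.

    #include <math.h>

    /* Scatter/splat bokeh, sketch only (all names are illustrative).
       Each source pixel stamps a flat disc of radius coc[i] into the output,
       spreading its energy evenly over the covered pixels so brightness is
       conserved. */
    void splat_bokeh(const float *color, const float *coc, float *out, int n)
    {
        for (int i = 0; i < n * n; ++i)
            out[i] = 0.0f;

        for (int y = 0; y < n; ++y) {
            for (int x = 0; x < n; ++x) {
                float r  = coc[y * n + x];
                int   ir = (int)ceilf(r);

                /* Count the pixels the disc covers (a real splat would use a
                   textured sprite whose weights already sum to one). */
                int covered = 0;
                for (int dy = -ir; dy <= ir; ++dy)
                    for (int dx = -ir; dx <= ir; ++dx)
                        if ((float)(dx * dx + dy * dy) <= r * r)
                            ++covered;

                float e = color[y * n + x] / (float)covered;  /* energy per pixel */

                for (int dy = -ir; dy <= ir; ++dy) {
                    for (int dx = -ir; dx <= ir; ++dx) {
                        int sx = x + dx, sy = y + dy;
                        if (sx < 0 || sy < 0 || sx >= n || sy >= n)
                            continue;
                        if ((float)(dx * dx + dy * dy) <= r * r)
                            out[sy * n + sx] += e;            /* accumulate the splat */
                    }
                }
            }
        }
    }

The downsampling idea in the comment would simply route pixels whose CoC radius exceeds some threshold into a half- or quarter-resolution version of the output buffer, trading the exact shape of overlapping splats for far less fill.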