The PlayStation 2 had excellent fill rate, enabling the overuse of blur in many games. Combined with masking or thresholding, this is where we saw the mass emergence of (cheap) high dynamic range effects in games. Though the "high" in this case wasn't very high, and most of the effect was just the blur.
Those blurs were typically separable Gaussian blurs, for performance reasons. A separable blur is just a horizontal pass followed by a vertical pass: O(n) samples per pixel for a kernel of width n, instead of the O(n^2) a general 2D blur would need.
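As a rough illustration of the separable trick, here is a minimal NumPy sketch (not production code; the kernel radius and sigma are arbitrary example values):

```python
import numpy as np

def gaussian_kernel_1d(radius, sigma):
    """Normalized 1D Gaussian kernel of width 2 * radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def separable_gaussian_blur(image, radius=8, sigma=3.0):
    """Blur a 2D grayscale image with a horizontal pass then a vertical pass.

    Equivalent to convolving with the full 2D Gaussian, but each pixel
    touches 2 * (2 * radius + 1) samples instead of (2 * radius + 1) ** 2.
    """
    k = gaussian_kernel_1d(radius, sigma)
    # Horizontal pass: convolve every row with the 1D kernel.
    horiz = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, image)
    # Vertical pass: convolve every column of the horizontal result.
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, horiz)
```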
A Gaussian blur, however, is not the effect created when a point light is out of focus in a camera. The proper convolution varies a bit from camera to camera, but here is an example taken with my SLR:
This shape is primarily a constant-intensity circle, with diffraction ringing near the edges. High-quality cinema cameras will also often show the aperture edges, but with my camera there was too much flaring to see them. Pay attention in a movie, however, and you're likely to see octagons instead of circles.
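For contrast with the Gaussian, a minimal NumPy sketch of a flat disc kernel (ignoring the edge ringing and any aperture-blade shape; the radius is an arbitrary example value):

```python
import numpy as np

def disc_kernel(radius):
    """Flat, constant-intensity disc kernel, normalized to sum to 1.

    A crude stand-in for an out-of-focus point light: uniform inside the
    circle of confusion, zero outside. A real lens adds ringing near the
    edge and often a polygonal aperture shape.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (x * x + y * y <= radius * radius).astype(np.float64)
    return disc / disc.sum()
```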
To illustrate a cross-section of the intensity (hey, why not), I've done the following in Photoshop:
Unwrapped the image with polar coordinates:
Smoothed the image with a strong horizontal blur:
Added a gradient:
Applied a threshold:
The result is a graph where the center of the disk is at the top, and the outer edge at the bottom. The distance along the X axis indicates the intensity.
The diffraction ringing near the edge is clearly visible, while the center of the disk is approximately constant in intensity.
There is still some noise toward the top of the profile, which comes from the center of the image: noise there was stretched out when the image was unwrapped into polar coordinates, so the horizontal blur couldn't smooth it away.
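The same measurement can be done in code instead of Photoshop; here is a minimal NumPy sketch, assuming a grayscale image of a single, roughly centered bokeh disc (the bin count is arbitrary):

```python
import numpy as np

def radial_intensity_profile(image, center=None, bins=64):
    """Average intensity as a function of distance from the disc center.

    Same idea as the polar unwrap plus horizontal blur above: each bin
    averages every pixel at a similar radius, smoothing the noise and
    leaving the radial profile (flat in the middle, ringing near the edge).
    """
    h, w = image.shape
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - cy, x - cx)
    bin_index = np.minimum((r / r.max() * bins).astype(int), bins - 1)
    sums = np.bincount(bin_index.ravel(), weights=image.ravel(), minlength=bins)
    counts = np.bincount(bin_index.ravel(), minlength=bins)
    return sums / np.maximum(counts, 1)  # index 0 = center, last = corner
```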
Rendering this effect on ~2008-era GPUs is a heavyweight operation. However, some games have done it, e.g. Lost Planet's DirectX 10 port (Beyond3D article). They had extra processing power to spend when porting from the Xbox 360 to DirectX 10 cards such as the GeForce 8800.
Instead of blurring the image horizontally and vertically, a triangle is rendered for each point over a small area, with its intensity modulating a texture. For a single bright pixel, the result is a copy of the convolution texture centered on that point. (Read their article for details on scene segmentation and geometry shader usage.)
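The general scatter idea, sketched on the CPU with NumPy rather than as their actual GPU implementation (the threshold is a placeholder, and the sprite can be any small bokeh texture such as a disc or hexagon):

```python
import numpy as np

def splat_bokeh(image, sprite, threshold=0.8):
    """Scatter a bokeh sprite at every bright pixel.

    image:  2D grayscale source (float)
    sprite: small 2D bokeh texture (e.g. a disc or hexagon), sum roughly 1
    Every pixel brighter than `threshold` stamps an intensity-scaled copy of
    the sprite into the output; this is the scatter equivalent of convolving
    the bright pixels with the bokeh shape.
    """
    sh, sw = sprite.shape
    h, w = image.shape
    # Padded accumulation buffer so sprites near the border stay in bounds.
    out = np.zeros((h + sh, w + sw), dtype=np.float64)
    for y, x in zip(*np.nonzero(image > threshold)):
        out[y:y + sh, x:x + sw] += image[y, x] * sprite
    # Trim the padding so each sprite ends up centered on its source pixel.
    return out[sh // 2:sh // 2 + h, sw // 2:sw // 2 + w]
```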
Here I've highlighted the effect in two Lost Planet images from www.4gamer.net (1 and 2). The top images show the standard Gaussian blur; the bottom images show the texture effect.
Click image for larger view.
Compare the blurry sparks on top to the hexagonal out-of-focus sparks on the bottom.
Note that they were able to increase the size of the blur with the texture effect because it still looks cinematic; a Gaussian blur that size would just look muddy.
Try it at home: in Photoshop, open an image you have with small point lights, or just make one with a black background and a few small white dots. Give it a go with
Filter / Blur / Gaussian Blur and
Filter / Blur / Lens Blur.
Conclusion: Crisp, cinematic depth-of-field effects require a distinctive convolution kernel. Real-time graphics will move beyond separable Gaussian blurs.
Update: More discussion here: motivating-depth-of-field-using-bokeh
Update: Nice technique writeup in 3DMark11
Good post. Can't wait for that in Gamebryo ;)
One thing. You probably didn't see the aperture shape because there was low light and the aperture was wide open.
This is called bokeh, see
http://en.wikipedia.org/wiki/Bokeh
http://www.kenrockwell.com/tech/bokeh.htm
http://www.neilblevins.com/cg_education/faking_bokeh/faking_bokeh.htm
Thanks ;) Hey, Anonymous, re: aperture shape:
I tried a few times to get the aperture to show up. I did try increasing the f-stop / shrinking the aperture. That caused tons of flaring. You could see the aperture slightly, but it was mostly washed away in blur.
Perhaps I should have tried again in a scene with less contrast, instead of a dark room and a bright point light.
Hey Vincent, I sure hope we move beyond Gaussian blurs too. How about a blur kernel that gives extra weight to samples above a brightness threshold? The weight would then be a function of both the distance from the center of the kernel and the sample's brightness. This could fake the distinctive look of real bokeh around bright dots. After all, there's no need to preserve the energy of the image; the screenshots are often brighter than the originals.
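A minimal sketch of that idea (a gather-style blur with a made-up weighting: flat over a disc footprint plus a boost for bright samples; the threshold, radius, and boost values are all arbitrary):

```python
import numpy as np

def brightness_weighted_blur(image, radius=6, threshold=0.8, boost=8.0):
    """Gather blur where bright samples get extra weight.

    Each output pixel averages a disc-shaped neighborhood; samples brighter
    than `threshold` get their weight boosted, so bright dots spread into
    disc-like blobs instead of fading into a Gaussian-style average.
    This deliberately does not preserve the image's energy.
    """
    h, w = image.shape
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    weight_sum = np.zeros((h, w), dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx * dx + dy * dy > radius * radius:
                continue  # keep the footprint a disc rather than a square
            sample = padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            weight = 1.0 + boost * (sample > threshold)
            out += weight * sample
            weight_sum += weight
    return out / weight_sum
```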
DOF Pro is a great-looking Photoshop plugin for depth of field, which I noticed while reading a post on c0de517e brainstorming DOF implementations with foreground/background blurring and bokeh.
Ah... sweet bokeh. I was inspired by Tri-Ace's GDC talk this year to implement proper post-processing in our core tech at Incinerator Studios. Tri-Ace did a lot of work in their Star Ocean 4 game and are now putting all that research into End of Eternity. Gorgeous-looking game!