Sean Barrett put together a nice tech demo and presentation on virtual textures at GDC 2008, and he has shared both online:
Id Software has been getting press over the last year for MegaTexture; it's likely that Sean's method covers the primary technique being employed there.
Some quick thoughts from me:
- Gain access to a very large texture in your world -- one so large it won't fit into VRAM.
- In fact, it might not fit into RAM either -- you may have to page it in from disk
- or procedurally create it (in which case you're basically caching expensive procedural textures -- e.g. there could be an interesting combination with Allegorithmic's product)
- The cost: shader hassle and a performance hit
- Need to find out which texture pages (including which MIP level) you actually need each frame
- Easier for things like terrain (you can compute conservative guesses analytically)
- For non-terrain you can do horrid things such as rendering the whole scene into a feedback buffer and reading it back on the CPU.
- doing that at a lower resolution is a perf win, but the sparser sampling is risky
- You could handle smaller objects with conservative estimates too, but it would be very lossy.
- Makes you wish GPU vendors just supported this natively for really large CPU-resident textures -- or exposed an API through DX/OGL for callbacks into the main app in the procedural case
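The core mechanism underlying all of the above -- treating the huge texture as a grid of pages and indirecting every sample through a page table into a fixed-size physical cache -- can be sketched on the CPU like this. All names and sizes here (`VIRT_PAGES`, `PHYS_PAGES`, `virtual_to_physical`) are my own illustration, not Sean's code; in a real renderer this lookup lives in the pixel shader.

```python
VIRT_PAGES = 64   # virtual texture is 64x64 pages per side (illustrative)
PHYS_PAGES = 8    # physical cache texture holds 8x8 resident pages

def virtual_to_physical(u, v, page_table):
    """Translate a virtual UV in [0,1) to a physical-cache UV.

    page_table maps (page_x, page_y) -> (slot_x, slot_y) for resident
    pages; a miss returns None (a real system would fall back to a
    coarser resident mip instead of failing).
    """
    page_x = min(int(u * VIRT_PAGES), VIRT_PAGES - 1)
    page_y = min(int(v * VIRT_PAGES), VIRT_PAGES - 1)
    slot = page_table.get((page_x, page_y))
    if slot is None:
        return None  # page fault: queue a load request, sample fallback
    # fractional position within the page carries over unchanged
    fu = u * VIRT_PAGES - page_x
    fv = v * VIRT_PAGES - page_y
    return ((slot[0] + fu) / PHYS_PAGES, (slot[1] + fv) / PHYS_PAGES)
```

The "shader hassle" is exactly this extra indirection (plus border handling for filtering), done per sample.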
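For terrain, the "conservative guess" can be as simple as deriving a mip level from view distance: you know roughly how many texels a screen pixel covers at distance d, and you round toward the finer mip to stay conservative. A rough sketch under assumed units; the function name and parameters are hypothetical, not from the demo.

```python
import math

def terrain_mip_estimate(distance, texels_per_meter, pixels_per_meter_at_1m):
    """Conservative mip guess for a terrain patch at a given view distance.

    At distance d, one screen pixel covers roughly
    d * texels_per_meter / pixels_per_meter_at_1m texels; taking log2 and
    flooring rounds toward the finer (safer) mip level.
    """
    texels_per_pixel = distance * texels_per_meter / pixels_per_meter_at_1m
    if texels_per_pixel <= 1.0:
        return 0  # already at or below 1:1 -- use the finest mip
    return int(math.floor(math.log2(texels_per_pixel)))
```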
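The "render the whole scene and read it back" approach boils down to rendering page IDs (page coordinates plus mip) into a feedback buffer, possibly at reduced resolution, then deduplicating on the CPU to get the set of pages to make resident. A minimal sketch with the feedback buffer faked as a 2D list of tuples; all names are illustrative.

```python
def collect_needed_pages(feedback_buffer):
    """Deduplicate a readback of per-pixel page requests.

    feedback_buffer: 2D list of (page_x, page_y, mip) tuples, or None
    where no virtual-textured geometry was rendered.
    Returns the set of unique pages this frame needs.
    """
    needed = set()
    for row in feedback_buffer:
        for texel in row:
            if texel is not None:
                needed.add(texel)
    return needed
```

Rendering this pass at lower resolution shrinks the readback cost, but small or thin surfaces can fall between samples and never get their pages requested -- the sampling risk noted above.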
Brief note: major kudos to Sean for recording and publishing his hour-long presentation and for sharing the tech demo source code. Most professionals don't have the time or energy to do that.