Showing posts with label Graphics. Show all posts

Tuesday, August 3, 2010

QR Code hacks: modifying and altering for artistic fun

QR Codes are a quick way to get information into a device with a camera (think of them as the modern evolution of the UPC bar code). Smartphones can scan them to quickly pull information into your phone, such as a URL to browse, a business card, etc.

I was inspired by the BBC QR Code that intentionally distorts the image to insert the letters "BBC" in the middle. QR Codes have redundant information to offer error correction, and I am amused at the different ways you can abuse this.

First, this is what a QR Code for "http://beautifulpixels.blogspot.com/" looks like unmodified:
"Pure" QR Code:
Now, the experiments:

Simple pixelated distortion in the center (similar to BBC code):
Any image could have just been overlaid, just like plopping a sticker in the middle, but doing it in the pixel grid feels nice.

The idea I like most is using a collage of images that form the QR Code. Where the collage gets the pattern right, there's no need for error correction. But there's slack for the collage to be off a bit. It's very tedious to do, so I only have this partial version, but I'm certain with patience a fully collaged version could be made:

Collage:
The collage might be automated, too: do an image search, then paste in images "fit" to the QR Code target.

There are several ways to play with color, since scanners care only about luminance.

Pixel grid colorization:

Gradient:

High resolution image colorization:
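The luminance claim above is easy to sketch: a scanner effectively reduces each module to a luma value and thresholds it, so any colors that land on the right side of the threshold still decode. A toy version (function names are my own; real scanners threshold adaptively):

```javascript
// Approximate what a QR scanner sees: reduce a color to luma and
// threshold it into a dark/light module. Rec. 601 luma weights.
function luma(r, g, b) {
  return 0.299 * r + 0.587 * g + 0.114 * b;
}

// A colored module still reads as "dark" if its luma stays below the
// scanner's threshold (a fixed 128 here, for illustration).
function readsAsDark(r, g, b, threshold = 128) {
  return luma(r, g, b) < threshold;
}

// Deep blue still scans as a dark module; pale yellow as a light one.
console.log(readsAsDark(0, 0, 160));     // true
console.log(readsAsDark(255, 240, 180)); // false
```

This is why the gradient and colorized versions below survive: the hue can change freely as long as dark modules stay dark and light modules stay light.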

There are several ways to play with the QR Code as conceptually just "dots".

Dots:

Dots merging:

Dots as images:

And on we go, a few more ideas: 
Rounded corners:

Perspective:
I wasn't able to push the perspective to anything as extreme as I'd hoped; this was about as far as I could get it.

I also didn't have the patience to create several "natural media" QR Codes, but I thought sand art, pebbles, leaves, etc., done with real-world materials would be great. Here's a synthetic Go board (the gaps maintain a valid game state ;):

Go Board:

And I should show failures too. I had high hopes for a pen sketch, but it doesn't scan. :( I'm convinced it could work, though more even contrast would be needed.

Ink Sketch:

Except for the last, all of these scanned correctly for me using the ZXing Barcode Scanner 3.3 on my Android phone. Collage clip art included elements from Flickr users greekadman and bombardier.

[Edit:] A link from the comments that shows a good 'natural media' example:

Sunday, May 23, 2010

HTML Canvas Lines Toy



Karl Hillesland and I toyed around with the Canvas HTML/Javascript API this weekend. We remade an old program I noodled around with in high school. The heart of it is just drawing lines between curves made with trig functions (the "offset" constants change continuously every frame via their own sine waves):

 var x1 = Math.sin(2*Math.PI*(t+offsetInner1)*periods1x+offsetOuter1);
 var y1 = Math.cos(2*Math.PI*(t+offsetInner2)*periods1y+offsetOuter2);
 var x2 = Math.sin(2*Math.PI*(t+offsetInner3)*periods2x+offsetOuter3);
 var y2 = Math.cos(2*Math.PI*(t+offsetInner4)*periods2y+offsetOuter4);
 // line from (x1,y1) to (x2,y2)
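That fragment can be fleshed out into a self-contained helper (the cfg shape and function names here are my own invention) that computes one line's endpoints without touching the canvas, so the math is easy to test:

```javascript
// Pure-math core of the lines toy: for a parameter t in [0, 1), compute
// the two endpoints of one line. Each of the eight offsets drifts over
// time via its own slow sine wave, which is what makes the figure evolve.
function lineEndpoints(t, time, cfg) {
  const TAU = 2 * Math.PI;
  // cfg.driftSpeeds is an array of 8 per-offset wobble rates.
  const drift = (i) => 0.5 * Math.sin(time * cfg.driftSpeeds[i]);
  const x1 = Math.sin(TAU * (t + drift(0)) * cfg.periods1x + drift(4));
  const y1 = Math.cos(TAU * (t + drift(1)) * cfg.periods1y + drift(5));
  const x2 = Math.sin(TAU * (t + drift(2)) * cfg.periods2x + drift(6));
  const y2 = Math.cos(TAU * (t + drift(3)) * cfg.periods2y + drift(7));
  return [x1, y1, x2, y2]; // all in [-1, 1]
}
```

In the draw loop you'd scale those [-1, 1] values to canvas pixels and moveTo/lineTo between them, stepping t across many lines per frame.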
If you're using a browser that supports Canvas, you'll see it above. Internet Explorer doesn't support it as I'm writing this - so use Chrome or Firefox or ... wait for Microsoft to catch up.

It's not exactly "done", but who knows if we'll clean it up. ;) If you'd like to experiment with it, here's a version with debug mode sliders you can drag around:

lines-07.html

  • Changing the periods (last 4 options) results in very different effects.
  • Use Chrome for slider input support - sliders are nice. Firefox just displays input boxes ;(.
  • Resize your browser's width for the desired aspect ratio -- that page auto-letterboxes.
For sad people with no capable browser, here's a still image:

Wednesday, February 17, 2010

Working on Chrome OS at Google

Many have been curious what I'll be working on at Google, including myself for a while. Google is interesting in that you are not allocated to a team until just before your first day. You accept the job because you're a good fit for Google in general, not just one team.

I'm going to work on Chrome OS. There's a lot to do for the initial launch, but long term my goal is to help make it a great platform for rich graphical apps and games too.

Whoa, you say, I thought you worked on high performance games? Yes, high performance 3D games will be running inside browsers, with security and performance delivered via Native Client.

Monday, January 11, 2010

Radial 10 bit Gray code - Tutorial

How I made the Radial 10 bit Gray code image procedurally in Photoshop:

  1. Create a 10 bit gray code image (width 10, height 1024). I did this by hand, and it's a little tedious, but there is a method: duplicate the image vertically, mirrored, and then add a new bit column that is one color for the top half and the other for the bottom (the classic reflect-and-prefix construction).
    Then, expand it to a square image:
  2. Transform it radially (Filter, Distort, Polar Coordinates; rotate the image 90 degrees first):
  3. Create some Gaussian blurs of the image, with a few similar large blurs:
  4. Set the blend mode to Difference between 2 of the blur layers:
  5. Add adjustment layers to remap the range, add a gradient, shift the hue, etc:

  6. Add another blur with difference blend mode, and we're done!


I worked in 16 bit mode since there was banding when differencing the similar Gaussian blurs.
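If redoing step 1 today, the by-hand strip could also be generated with the standard reflected Gray code formula, g = i ^ (i >> 1), which yields the same sequence as the duplicate-and-mirror construction (function names below are mine):

```javascript
// Row i of the 10 x 1024 strip from step 1 is the 10-bit
// reflected binary Gray code of i.
function grayCode(i) {
  return i ^ (i >> 1);
}

// Render one codeword as a 10-character row of "pixels" (# = black).
function grayRow(i, bits = 10) {
  let row = "";
  for (let b = bits - 1; b >= 0; b--) {
    row += ((grayCode(i) >> b) & 1) ? "#" : ".";
  }
  return row;
}

// Adjacent rows differ in exactly one pixel -- the defining property.
console.log(grayRow(0)); // ..........
console.log(grayRow(1)); // .........#
console.log(grayRow(2)); // ........##
```

Dumping all 1024 rows to an image gives exactly the strip the tutorial starts from.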

    Wednesday, November 25, 2009

    Anamorphosis Gamebryo LightSpeed Logo

    We all love Anamorphosis, and have seen it done wonderfully before ([edit] even spectacularly). Well, I've done a few simple things around the office, here's one I just did:

    Vimeo and YouTube videos.




    That was done with painter's masking tape, with the help of Cat, in about 2.5 hours.

    Here's an earlier project, the old Gamebryo logo done with post-it notes:



    Post-it notes were handy after a late night in the office and no prep. The masking tape is obviously much better. Both were done by setting up an image on my laptop and using a projector to cast it into the room. The perspective for both is just in a doorway, to help guide viewers to the right position. Both also are cast in a way that the image is heavily distorted as soon as you move slightly away from the right point.

    Share links if you've done similar projects, or know of good ones. ;)

    [Edit:]
    Felice Varini is the link you want to follow! See e.g. this.

    Wednesday, October 14, 2009

    Gamma Correct Lighting, On The Moon!

    When I explain gamma correct lighting to people, sometimes they look at images and aren't certain which is supposed to be better.

    E.g. in the GPU Gems 3 chapter "The Importance of Being Linear," two spheres are shown, similar to these:

    Two spheres lit by a directional light. Which has gamma correct math?


    A small thought occurred to me: there's an object people are familiar with that will help them to understand, the moon:
    Two spheres lit by a directional light, compared with an image of the moon.
    Which has gamma correct math?


    Still don't see it? Yes, it's subtle... shall we make it obvious with a gradient?


    Yes, this is but one example among many, but it's a good basic visual one.
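For the curious, the whole distinction fits in a few lines: gamma-correct math lights in linear space and encodes for the display only at the end, while the naive version treats display values as if they were linear. A sketch (using a simple pow(x, 1/2.2) display model rather than exact sRGB; function names are mine):

```javascript
// Lambertian falloff across a sphere, with and without gamma-correct math.
function shadeCorrect(nDotL) {
  const linear = Math.max(0, nDotL);  // light in linear space
  return Math.pow(linear, 1 / 2.2);   // encode for the display at the end
}

function shadeNaive(nDotL) {
  return Math.max(0, nDotL);          // wrongly treats display values as linear
}

// At grazing angles the correct version stays much brighter -- the wide,
// gradual falloff toward the terminator that you see on the moon.
console.log(shadeCorrect(0.1).toFixed(2)); // 0.35
console.log(shadeNaive(0.1).toFixed(2));   // 0.10
```

That grazing-angle brightness is exactly the difference visible in the gradient above.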

    If you'd like to read more on gamma correct rendering:
    [EDIT: See the comments for some nice thoughts from people who've spent more time on this.

    Naty points out another good example in Real Time Rendering.]

    Thanks to "ComputerHotline" on openphoto.net for the image of the moon.

    Wednesday, August 5, 2009

    A Light Field and Microphone Array Teleconference Idea


    Paul Mecklenburg and I were chatting, and the thought of really cheap cell phone cameras led us to day dreaming about a teleconference system.

    Line up a dense array of cheap CCD sensors and microphones into a video conferencing wall you're projecting onto. Now you've got a light field, which permits rich virtual camera re-imaging (movement, rotation, zooming, depth of field). The microphones can be used to triangulate sound sources. Sources from, e.g., table locations can be dampened (see "hush zone" in the picture). Sources distant from the mics can have an audio boost to account for volume falloff (see "audio boost"). The speaking locations can also drive an automatic virtual camera that focuses on whoever is speaking.
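The "audio boost" part is the simplest piece to sketch: once the array has localized a source, you undo the roughly 1/r pressure falloff with a distance-proportional gain (function and parameter names here are hypothetical):

```javascript
// Once the mic array has triangulated a source, boost distant speakers to
// compensate amplitude falloff (~1/r for sound pressure in free field).
// refDistance is the distance at which no gain is applied.
function compensationGain(sourceDistance, refDistance = 1.0, maxGain = 8.0) {
  const gain = sourceDistance / refDistance; // undo the 1/r falloff
  return Math.min(gain, maxGain);            // clamp so noise isn't over-amplified
}

// A speaker 4 m away gets 4x gain (+12 dB); one at the reference distance, none.
console.log(compensationGain(4.0)); // 4
console.log(compensationGain(1.0)); // 1
```

The clamp matters in practice: without it, very distant or mislocalized sources would amplify room noise more than speech.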

    Thursday, February 19, 2009

    dAb 3D Painting Research Licensed

    [Edit 2010-03-15: A new project at Microsoft has been announced that looks a lot like dAb, Gustav]

    In 2000 I collaborated with Bill Baxter on a 3D paint brush system that was presented at Siggraph 2001. Less talk, more video:


    dAb: Interactive Haptic Painting with 3D Virtual Brushes (Siggraph01)

    Bill continued the line of research for his PhD, and you can see some great improvements to the paint model and brush simulation:


    IMPaSTo: A Realistic Model for Paint (NPAR04)


    A Versatile Interactive 3D Brush Model (PG2004)

    Over the years, we've had a few large companies inquire about licensing the research. I'm excited to say that we've closed a package deal for several systems! Perhaps you'll see this technology in consumer applications in a few years.

    For more details, visit the dAb webpage.

    Monday, January 26, 2009

    Submitting Our Fails (or, how I love the Error Shader)


    I pestered Michael Noland to submit a few images from Emergent to I Get Your Fail, which if you haven't subscribed to you should. ;)

    This one is my fav. Several years ago Tim Preston wrote the "error shader", for any situation where a renderer can't bind a shader... and we haven't been able to wipe the smirk off his face yet.

    This particular scene is amusing with some error shader, and some missing lighting.

    Thursday, December 4, 2008

    Jeff Russell Depth Of Field example

    Depth of field came up on the gdalgorithms list. Jeff Russell caught the thread and replied with some screen shots from his brute force O(n^2*m^2) blur, with nice circles of confusion. I'm posting it here to make a few comments:



    Sure, there are the typical DOF artifacts (halos, alpha object discontinuities, ...), but:
    • Top Left: Check out the trees, they look "right". The small highlights on leaves, or of sky through leaves, has that distinctive look that Gaussian blurs don't capture.
    • Center Left: Most of the scene is mush, but the enemies show distinctive disk highlights on points and lines.

    Sunday, November 23, 2008

    Motivating Depth of Field using bokeh in games

    I've posted before on Higher Fidelity depth of Field effects in games. I believe that current depth of field effects in games fall short of delivering the same cinematic emotion as movies and TV. The reason distills down to the commonplace use of compute-efficient separable Gaussian blur. More expensive methods can reproduce bokeh with crisp circle of confusion shapes.

    I've realized in casual conversation that many people can't easily recall how the technique is used in television and movies, and they often overlook how common it is.

    So, in this post I've collected a bunch of examples from movies, and whipped up some mock images from game screen shots.

    This is all just motivation. ;) Ideas for fixing the problem in a practical way is for another post.

    Cinema Examples

    A few simple shots, showcasing bright point lights, and specular highlights:



    Similar to the above, but with some edge highlights blurred as well. Look at the horizontal edges blurred in the background on the right of the image:


    The Dark Knight made extensive use of the effect for the entire film. The circles of confusion often had an oval shape due to the Panavision anamorphic lenses used (which compress the image on film to an aspect ratio different from that of the screen). Also, note the specular highlights from the henchman's head and waist:


    These circles are larger than in many of the others, and also show the non-uniform characteristics of the disks:


    Finally, a subtle image. The amount of blur is minimal, but still distinctive on the lights in the background. Even though slight, delivering this effect will be yet another required level of polish for AAA games in the future:



    Mock Images
    I've doctored up some game screen shots to show what is missing. They're quick and dirty, made in Photoshop with "lens blur". In a high dynamic range rendering pipeline, results would be much better, since the distinctive shapes we're looking for are the result of very high contrast between highlights and the rest of the background scene.

    First, a Mass Effect scene that would have worked well. I quickly added several points of bright light to trigger the effect since the background didn't include small point lights:

    Before:


    After:


    And here's a marketing screen shot for Unreal Tournament. I only generated a mask and applied the blurs, since the image had good contrast to begin with:

    Original:


    Lens blur:


    Gaussian Blur (common in games today, poorer result):


    And, a zoomed section contrasting Gaussian to Lens Blur:


    Keep your eyes open as you watch TV, or movies, and you'll notice more and more how common this effect is.

    Pay attention in games, and you'll notice that we're currently delivering the concept of "out of focus". That's sufficient to draw attention to near, middle, or background. But, games today are not capturing the artistic & emotional feeling that comes with a strong use of the effect.

    I have a few prototyping ideas for delivering a higher quality depth of field, which I hope to have time to try out soon, but may likely take a while. As new hardware brings more FLOPS, it's also possible that the O(n^2 * m^2) naive approach may just be used soon enough (as was already done in Lost Planet on DirectX10).
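For reference, the naive approach is easy to state: scatter every pixel's energy over a hard-edged disk of its circle-of-confusion radius. Here's a toy 1D-scanline formulation of my own (the 2D version does the same per pixel, hence O(n^2 * m^2)); the flat-topped disk is exactly what preserves the crisp highlights that a Gaussian smears away:

```javascript
// Naive scatter depth-of-field on a 1D scanline. Each source pixel
// spreads its energy evenly over a hard-edged "disk" of its CoC radius,
// so a bright out-of-focus point becomes a crisp disk, not a smudge.
function scatterDof(intensity, cocRadius) {
  const out = new Array(intensity.length).fill(0);
  for (let i = 0; i < intensity.length; i++) {
    const r = cocRadius[i];
    const width = 2 * r + 1; // number of pixels the disk covers
    for (let j = i - r; j <= i + r; j++) {
      if (j >= 0 && j < out.length) out[j] += intensity[i] / width;
    }
  }
  return out;
}

// One bright out-of-focus pixel spreads into a flat-topped disk.
console.log(scatterDof([0, 0, 9, 0, 0], [0, 0, 1, 0, 0]));
// -> [0, 3, 3, 3, 0]
```

Note the energy is conserved (the 9 units spread into 3 + 3 + 3), which is what keeps the highlight bright instead of washing it out.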

    (Next post, Jeff Russell Depth of Field example comments)

    Update: Nice technique writeup in 3DMark11

    Thursday, October 16, 2008

    Link dump: 3D Sketching, 4k intros with Distance Fields, Level of Detail, and Input Director

    It's been a while since I've dumped some links. If you haven't already noticed these, take a look:

    I love sketch
    Check the video; scroll to 5 minutes to see curve sketching in 3D, which makes it look all too easy.

    ILoveSketch from Seok-Hyung Bae on Vimeo.

    Rendering with Displacement Surfaces
    Iñigo Quilez is a great demo-maker. He's recently been making some interesting images in 4kb executables. Shown here is "organix".

    At nVision this year he gave a nice presentation discussing graphics in tiny executables, and displacement surface raymarching.

    Level of Detail: Blog you should read
    If you're not subscribed to Level of Detail, let me call it out for you. Jeremy Shopf has some great posts, such as
    Keepin it low res
    and
    Non-interleaved deferred shading of interleaved sample patterns

    Input Director: Software KVM
    This year I started regularly running multiple machines on my desktop, including my laptop. While doing some CUDA development I was using another machine across the room, hooked to an HDTV I could see from 10 ft away. I needed something to simplify input.

    I used synergy for a while, but it was a bit buggy. Tim Preston suggested Input Director, a great solution if you're running only Windows. Some key features:
    • Works before you log in, so you can log into a machine from another.
    • Has an alias for CTRL-ALT-DEL, so you can lock other machines from your master.
    • Easier configuration than synergy
    • Fewer bugs with being left in "stuck" keyboard modifier states