It will take a few seconds to launch, as it generates the needed imagery.
I’ve had this idea in my head for a while to do realtime depth of field in Processing, and this is my first attempt at it. The sketch is all in 2d, but it simulates a 3d depth of field: “dust motes” float about the screen. If they are “in focus”, they are rendered as simple pin-pricks. But as they leave the focal zone, coming either too close to the camera or drifting too far away, they start to blur out.
Conceptually, this is done by pre-rendering a set of progressively blurred images into off-screen buffers. When the sketch runs, particles are created, each of which knows its position in space. Based on its distance from the camera and the current depth of field settings, each particle chooses the appropriate image (sprite) and draws it to the screen. This, combined with some scaling effects, produces the result. The user can control both the focal distance and the focal zone with the mouse. I also sort the ArrayList of motes by distance to the camera, so they’re drawn back-to-front in the correct order.
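The core of this approach is the mapping from a mote's distance to a blur-sprite index. Here's a minimal sketch of how that lookup might work, in plain Java; the class name, the linear falloff outside the focal zone, and the falloff range are my assumptions, not the actual code from the sketch.

```java
// Hypothetical helper: pick a pre-rendered blur sprite based on how far
// a mote sits from the focal plane. Index 0 = perfectly in focus
// (pin-prick), levels - 1 = maximum blur.
public class BlurPicker {
    final int levels;       // number of pre-rendered blur sprites
    final float focalDist;  // camera distance that is perfectly in focus
    final float focalZone;  // half-width of the in-focus band

    public BlurPicker(int levels, float focalDist, float focalZone) {
        this.levels = levels;
        this.focalDist = focalDist;
        this.focalZone = focalZone;
    }

    public int spriteIndex(float distToCamera) {
        float offset = Math.abs(distToCamera - focalDist);
        if (offset <= focalZone) return 0; // inside the focal zone: sharp
        // Ramp up blur linearly outside the zone; the 4x range is an
        // arbitrary choice for this sketch.
        float t = Math.min(1f, (offset - focalZone) / (focalZone * 4f));
        return Math.round(t * (levels - 1));
    }
}
```

In the draw loop, each mote would call `spriteIndex` with its distance and draw the matching buffer; sorting the motes by that same distance (farthest first) before drawing gives the painter's-algorithm ordering mentioned above.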
The next step is to move it into 3d. But not too bad for a couple of days' work while I had a cold and was home from work.