A close friend of mine recently told me, in the politest way possible, that maybe it’s time to pronounce my most recent film project dead and move on. When I repeated this to my wife, she said, “I could kiss him for saying that. I’m not going to, but I could kiss him for saying that.” (Sorry, man.) Well, I think he’s right. Here’s what I learned from making this three-minute short film. Or in other words, tips for producing a good short film.

The Good:

1) Storyboards created for the film helped ensure the story was understandable from a purely pictorial perspective.

2) The animatic was even more useful for estimating the pace of the story.

3) Verbally directing the acting. Talking an actor through the emotions, as if shooting a silent film, really helped get the right visuals.

4) The story works. It has flow, and you can follow the action. That’s all good.

The Bad:

1) Create a plan view of the action for every sequence. This is a diagram a bit like a football play diagram. With it, you can easily pick camera angles and draw your storyboards.

2) Obtain shooting locations far in advance. Having to change all of your shots last minute due to being moved to a different location makes keeping shot continuity a real chore.

3) Be flexible during filming. Trying to cram the exact script down your actors’ throats makes for a very long day for them. Leave some fudge room and be accepting of different variations.

4) Have the storyboards and script on set and well organized. Being able to refer to them in an instant is really handy.

5) The actors didn’t really like acting. Well, one of them did, but the other (who I don’t think had acted before) had a tough time with the conditions on set.

6) Shoot the right pacing on set. Getting the timing right on set just has to be easier than attempting to fix it in post. I say this after having tried to fix the timing of a simple walk in post. This is where a field monitor would come in handy: you could review shots immediately after filming them and make sure everything looked good. I may need to participate in a professional shoot to learn an effective way to do this.

The Ugly:

1) Plan out every detail of the special effects/visual effects shots. This includes a plan for obtaining the necessary equipment and rental estimates. It might even be handy to have a property master instead of doing all of the hauling myself, but that would cost more rental money. I shot the main actor “flying” without a leaf blower on set to add some “wind”. As a result, he really looks like he’s just hanging there instead of floating.

2) Use a good camera. The next camera I shoot with will need to have the ability to turn off the auto exposure feature. After wrangling with the exposure on otherwise perfectly fine shots, that lesson has been beaten into my head.

3) Use an HD camera. The extra resolution gives me so much more latitude in recovering bad camera motion, bad focus, bad exposure, etc., if necessary, during post production.

4) Use a camera stabilizer. I’m currently planning on building a hard drive gyroscope stabilized steadicam. Should be fun!

5) Rehearse in costume. That will give a good indication of the final effect on film. Had I rehearsed the flying shot with the actor in costume, I’d have known that we needed to cut holes in the shirt to hide the rock climbing harness we used.

Yeah! Let’s make a flying robot that responds to light and heat! How about a butterfly ornithopter with large wings, a slow wingbeat, and muscle wires instead of a rotary motor? I say, heck yeah!

Or at least that’s the plan. It involves muscle wire, a wire that contracts when current is applied to it (technically a shape memory alloy, or SMA). This, combined with what I learned while making an office supply ornithopter (see "The Flying Scrooge"), made me think I could make a robotic butterfly that responds to light and temperature.

Now there is already a non-flying muscle wire kit that flaps its wings, but I think the wing surface area is much too small. When I was creating the Flying Scrooge, I found a neat video of a butterfly-like ornithopter that flew with a surprisingly slow wingbeat because it had two pairs of wings.

Looking at the following video of a Japanese ornithopter, I can avoid actual aeronautical math and just estimate the ornithopter’s size and weight.

Hmm, looks like maybe two wingbeats per second and a wing size of about 36 inches square. That’s doable. Probably weighs less than a pound.

Now if I can get the muscle wire to respond quickly, this might work. I’d use a Lithium Polymer battery (currently the battery with the highest energy-to-weight ratio), or perhaps an ultracapacitor, and a chip to oscillate the output from one set of wires to the other.

Time to do some napkin math.
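
Something like this is what I have in mind, sketched in Python; every number below is a guess from eyeballing the video, not a measurement.

# Napkin math for the butterfly ornithopter. All figures are rough guesses.
mass_kg = 0.45                    # "probably weighs less than a pound"
wing_side_m = 36 * 0.0254         # "about 36 inches square", read as 36 inches on a side
wing_area_m2 = wing_side_m ** 2   # treat both wing pairs as one square sheet
wingbeat_hz = 2.0                 # "maybe two wingbeats per second"

wing_loading = mass_kg / wing_area_m2
print(f"wing loading ~ {wing_loading:.2f} kg/m^2")   # works out to roughly 0.5 kg/m^2

# The muscle wire has to finish one full contract/relax cycle per beat.
# Contraction is quick; it's the cooling (relaxation) half that tends to be slow.
cycle_time_s = 1.0 / wingbeat_hz
print(f"each contract/relax cycle gets ~ {cycle_time_s:.2f} s")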

This is the establishing shot for my short film “Keep Your Feet on the Ground”:

Unfortunately, the consumer camcorder I used had no way to turn off its autoexposure feature, so as the camera pans down, the roof overexposes to white and the shadows under the eaves drop to black. After a couple of failed experiments, I was able to correct the shot. Here’s the final version:

My first whack at this was just to try to color correct it.  But where the roof is exposed to white, you can’t color correct lost detail back into that area.  So since this shot was almost a nodal pan, with no action other than the camera movement, I decided I could create a panorama in Photoshop and then fake the camera move in Shake.  This is what I did.

1) I stitched together the best parts of several frames into one panorama. (Details on this step can be found in several tutorials across netland, so I’ll just illustrate the process with a simple, useless diagram.)

Some of the frames I used to assemble the panorama.

2) Next, I imported the panorama into Shake.  After fiddling with image brightness, I noticed some mistakes I made when stitching the images together, so I Quickpaint-ed them away.

3) I then added the camera movement using a Pan node.  This took some tweaking to get smooth camera movement and avoid looking off the edges of the panorama.

4) Now this isn’t a normal shot. The last frame of this pan is the first frame of a camera dolly in to the window. So the last frame of the shot couldn’t be changed from the original. I forgot that when I was making the panorama, and had painted all over the bottom part of the panorama. So I had to remake the panorama, being careful not to touch the bottom part this time.

5) The shot looked pretty good at this point, but I noticed the background looked like it should be warping a bit as the camera moved to better match the feel of the original shot. So I added a Lens Warp node. While it improved the shot, it also created several problems. It warped some of the panorama edges into view, and affected that last frame of the shot again.

6) I got around affecting the last frame by fading out the kappa value in the Lens Warp node over time:

Here you can see I faded the Lens Warp kappa value over time.

7) To fix the edges coming into view caused by the Lens Warp node, I AGAIN had to remake the panorama in Photoshop, this time cloning in extra sky and bricks on the edges to make things work. (That’s the third time, for those of you not counting.)

The Completed Panorama

8) Since the panorama isn’t video, the shot was missing the video noise a real clip would have. I used a Film Grain node to analyze the sky in the original clip, replicate that noise pattern, and apply it to my panorama pan. Then I decided it didn’t look right, spent about an hour manually fiddling with the values, compared my manual film grain to the original clip, and decided the Film Grain analysis was way better than my poor attempts. Trust the Film Grain node analysis.

9) Finally, just to make SURE that the final frame matched up with the original footage, I actually spliced the panorama shot and the original clip together, making the original clip visible on the last frame. It lined up. That means everything looked good, yo.
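
For anyone curious what the Pan node in step 3 boils down to, here’s a toy version in Python with Pillow: slide a frame-sized window down the stitched panorama and write out one crop per frame. The file name, frame size, and frame count here are placeholders, not the values from the actual shot.

from PIL import Image

pano = Image.open("panorama.png")        # the stitched panorama from Photoshop
W, H = 720, 480                          # output frame size (stand-in for DV resolution)
num_frames = 90                          # roughly three seconds at 30 fps

for i in range(num_frames):
    t = i / (num_frames - 1)             # 0.0 on the first frame, 1.0 on the last
    t = t * t * (3 - 2 * t)              # ease in and out so the move doesn't jerk
    y = int(t * (pano.height - H))       # slide the crop window down the panorama
    frame = pano.crop((0, y, W, y + H))  # one video-frame-sized slice
    frame.save(f"pan.{i:04d}.png")

The easing line is the fiddly part; that’s the tweaking mentioned in step 3.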

Here's the complete Shake tree I used to create the shot.

During my playful experiments with Fourier transforms, I discovered that you can overlay an image on top of a Fourier-transformed image and freely convert back and forth between Fourier and regular space. Because the transform is linear, the two layers ride along independently, and each transform descrambles the previously FFT’d layer while scrambling the other. So in short, you can hide the FFT of one image in another image.

Here’s the message I want to hide:

The Secret Message!

I’ve put it in the bottom half of the picture so it will still be intelligible after being transformed. Here’s the FFT of my secret message:

The Fourier Transform of My Secret Message

My first test overlaid the FFT on a color wheel, but it’s still a bit obvious:

The Secret Message overlaid on a test image.

If we scroll the FFT over to the corners and scale down the brightness, it’s much harder to spot when overlaid with the zucchini-in-a-bottle image:

The secret message offset to hide it better, overlaid on a normal photo.

You can barely spot it. Some information is lost when the image is saved in the .tif format, but after we scale the brightness back up, we get this:

The decoded message (rescued from a zucchini-in-a-bottle)
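
If you want to try the same trick outside Shake, here’s a minimal numpy sketch. It keeps everything in floating-point complex arrays, so it sidesteps the precision loss you get when saving to an image file; the gain value is just a number I picked, and hide/reveal are hypothetical helper names.

import numpy as np

GAIN = 0.02   # how dim to make the embedded spectrum; purely a matter of taste

def hide(cover, message):
    # The FFT of the message looks like a star field with most of its energy
    # near the corners (the DC term), which is why it blends into a photo so well.
    return cover.astype(np.complex128) + GAIN * np.fft.fft2(message)

def reveal(stego):
    # The Fourier transform is linear, so transforming the composite gives
    # FFT(cover) plus FFT(FFT(message)). Transforming twice returns a flipped
    # copy of the message, now sitting on top of the cover's scrambled spectrum.
    return np.abs(np.fft.fft2(stego))

Feed hide() the cover photo and the message as same-sized float arrays, then rescale the brightness of reveal()’s output to taste.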

Here’s my whole Shake tree for both the test image and the final masterfully masterfool zucchini-hiding-a-secret image. Cheers!

Lossless Shake Message Tree

Masterfool Zucchini Secret Message Shake Tree

Dr. Timothy Kuklo

I have had back problems for about a decade, and a while back I finally needed back surgery. My usual doctor referred me to a surgeon who was a bit messy and had a ratty carpet and tacky artwork in his office. When I asked his office assistants who in their office was a perfectionist, they said, “All of our doctors are perfectionists.” Right. Then this non-perfectionist surgeon told me he’d have to cut away part of my spine to then cut off a small part of a spinal disc. Time for a second opinion.
After stumbling around the internet, I found a “doctor finder” website that listed the number of medical whitepapers a doctor had published.  “Ahah!”, I thought.  “I’ll just look for the doc with the most scholarly papers published to his name!”  And that turned out to be one Timothy Kuklo, with 67 whitepapers published.
I went to see him. Dr. Kuklo’s office was immaculate, his staff exceedingly professional. He had not a single hair out of place. In short, he looked like the kind of doctor you wanted operating on your back. And when he told me he didn’t have to cut off any bone to reach the disc, that the incision would be at most two centimeters, and that the recovery would be easy, I was sold.
Indeed the surgery was successful, and my recovery ridiculously short and easy.  I never even used a single one of my Oxycodone pills (eat your heart out Mr. Limbaugh).  All was well!  For a while . . . sadly, eight months later I suffered a sports injury.  My primary care physician again recommended surgery and (again) referred me to the non-perfectionist surgeon.  Right.  “Hah!”  I said to myself.  “I’ll just schedule a consultation with Dr. Timothy Kuklo again!”
So I called up his doctor’s group, and they said he was no longer with them and didn’t know where he’d gone. No problem. I’ll just Google him. I did so, and what did I find but a New York Times story on an $800,000 case of fraud and scholarly dishonesty involving one Dr. Timothy Kuklo, who was paid to falsify the results of a whitepaper whilst forging the signatures of four other doctors! Wait, this couldn’t be MY Dr. Kuklo, right?! Alas, he was the very same man.
Now, the fraud was so ineptly carried out that I think Dr. Kuklo was probably a fairly honest man prior to this indiscretion. Lifelong scoundrels are sure to “misremember” things or fudge the facts, depending on what your definition of the word “is” is. No, I think Dr. Kuklo probably was what he seemed to be at first: an excellent surgeon. I think he just went off the deep end when he thought of all that money offered to him. Sad, really.
This all brought one memory screaming back to me:  When I first met Dr. Kuklo, I explained how I’d chosen him because of his 67 published white papers.  He chuckled and responded, “Well . . . that’s not always a good way to choose a doctor.”  I guess he was right.
Now, my back had indeed healed well, so I wasn’t really worried about the quality of the work he’d previously performed on me. However, one doesn’t want a man guilty of lying in exchange for $800,000 from a pharmaceutical company cutting on one a second time. I need a perfectionist who is also a decent guy.
I think my next surgeon should have a fancy car AND a homely wife. I’ll let you know how well that turns out.

Four-month-old toothbrush

Recently, I discovered a way to make my toothbrush last several times longer than usual. I think the recommended retirement age for a toothbrush is 3 months, but my current brush is going on 4 months and looks almost new. Yes, the end is nowhere in sight for this toothbrush!
I’ve found two things that help achieve a longevous toothbrush:

1) Rinse your toothbrush thoroughly (for about ten seconds) after brushing.

2) Hold your toothbrush with only two fingers in a light grip.

I discovered these longevity techniques by accident about seven months ago. That’s when I decided I didn’t like the natural buildup of tooth-pasty residue on my toothbrush handles. To combat that, I started rinsing the brush much longer after brushing. This not only kept the residue away, but I noticed the bristles stayed together (meaning they didn’t splay outwards) for much, much longer! Then I discovered the second important aspect of toothbrush longevity: light pressure. After my wife borrowed my toothbrush for a week and scrubbed the enamel off her teeth, my toothbrush looked completely worn out. I hadn’t realized pressure was important before, because I was already brushing lightly (after an oral hygienist chastised me for brushing too hard, resulting in my early-onset receding gum line). So brush lightly!


(“Bravia” commercial clip Copyright SONY.)

Temporal Median Filters work like magic for removing small unwanted objects from video . . . like hairs and dust. I’ve often wished I had a Temporal Median Filter to remove unwanted noise from my videos. I always figured that Shake was flexible enough to do the job with just the available nodes, but I couldn’t figure out how until recently. I’ve made a macro out of my work, which you can download here. Technical details follow.

I tried and failed with the LayerX node because it can’t do pixel-wise comparisons, and the TimeX node can only work with one FileIn node. But the ColorX node can do the job . . . if you can feed it the right information. Here’s what you do:

1) Create 3 FileIn nodes all with the same video source.
2) Edit the Timing>Timeshift parameter of one FileIn node to -1, and another to 1.
3) Reorder the three video sources so that each contributes only its red channel. For video source 2, the red channel is reordered into the green channel, and for source 3, the red channel is reordered into the blue channel.
4) Combine all three video streams with a couple of IAdd nodes.
5) Add a ColorX node with this formula in the Red channel:
r+g+b-Max3(r,g,b)-Min3(r,g,b)
(The sum of three values minus their max and their min is their median.)

Red Temporal Median Filtering Shake Node Tree

6) Repeat steps 3-5 for the green and blue channels.
7) Layer all the data together with some reorder and add nodes.

Complete Temporal Median Filtering Shake Node Tree

And if you don’t care to do all that, here’s a picture of the simple tree you’ll need to make after using my macro:

Temporal Median Filter Shake Node Tree

You can download the complete MedianTime macro here: MedianTime Filter at Creative Crash
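
And if Shake isn’t handy at all, the whole trick boils down to a per-pixel median of the previous, current, and next frames, which is only a few lines of numpy. This is just a sketch and assumes you’ve already loaded the clip as an array of frames:

import numpy as np

def temporal_median(frames):
    # frames: array of shape (num_frames, height, width, channels)
    frames = np.asarray(frames, dtype=np.float32)
    out = np.empty_like(frames)
    for t in range(len(frames)):
        prev = frames[max(t - 1, 0)]                   # clamp at the ends of the clip
        nxt = frames[min(t + 1, len(frames) - 1)]
        trio = np.stack([prev, frames[t], nxt])
        # Median of three values = sum - max - min, which is exactly what the
        # ColorX formula r+g+b-Max3(r,g,b)-Min3(r,g,b) computes channel by channel.
        out[t] = np.median(trio, axis=0)
    return out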

Simulated Infrared Image of My Face


Simulated IR Facepaint


Rondofo wrote about his efforts to make himself unrecognizable to infrared (IR) security cameras with bright IR LEDs sewn into his hoodie. That project failed miserably. But it got me thinking . . . it occurred to me that makeup that absorbs IR would appear black to IR cameras. It could be makeup, facepaint, lotion, or even some kind of bug spray that just happens to be IR absorbent. It would be even better if the facepaint or whatever were transparent or skin colored in normal light, just as long as your features are obscured enough to make you unrecognizable to night security cameras. Now to find such an IR-absorbent substance. There is some IR facepaint available to the military and law enforcement, but I think if we dig around enough we could find some other product that just happens to have the right IR characteristics.

What to do:
1. Get an IR-pass filter for your digital camera, or remove the IR-blocking filter that most digital cameras have built in and add a filter that blocks visible light.
2. Go to the store and film the various makeups, lotions, bug sprays, etc. to see which (if any) are black to IR.
OR
Research natural chemicals that are absorptive of near IR.
3. Obtain makeup, natural substances, or other chemicals you’ve researched.
4. Apply substance to your face.
5. Test with IR filtered camera.

Email me pics if you try this!

P.S. It occurred to me that you wouldn’t have to paint your entire face to obscure your features; you could make patterns if you like! So I made a few “artist’s renderings” in Photoshop.

Simulated IR Camouflage Facepaint


Simulated IR Facepaint Clown

Messier Object 68

I was reading through my Digital Image Processing book (2nd edition) about something called Fourier transforms. Without explaining the math, suffice it to say that any 2D image can be transformed into the Fourier domain. When you do a Fourier transform on a normal image, you get what looks like a cluster of stars. So, me being me, I thought: what do you get if you do a Fourier transform on a picture of stars? SECRET MESSAGES FROM SPACE!?

How to decode messages from space.
1. Find a good candidate photo from space. Should have a bright spot in the very center and be 512×512 pixels.
2. Convert the photo to floating point precision.
3. Perform an inverse Fourier transform (I used a Shake plugin from pixelmaina), using the same image as both the real and the mathematical “imaginary” portion (you know, the square root of -1) of the Fourier transform.
4. Center, scroll, or otherwise offset (Filter>Other>Offset in Photoshop) the resulting image so that the brightest part is in the center.
5. Interpret the results.
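
In numpy terms, the recipe above is only a few lines; the file name is a placeholder, and numpy’s inverse FFT stands in for the Shake plugin:

import numpy as np
from PIL import Image

photo = np.asarray(Image.open("m68.png").convert("L"), dtype=np.float64)  # a 512x512 star photo
z = photo + 1j * photo                       # same image as the real and "imaginary" parts
decoded = np.fft.fftshift(np.fft.ifft2(z))   # inverse transform, brightest part scrolled to center
result = np.abs(decoded)
result = 255 * result / result.max()         # stretch the brightness for viewing
Image.fromarray(result.astype(np.uint8)).save("decoded.png")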

Messier Object 68 in Fourier space


Messier Object 68 in Fourier space after recentering



Results:*
Messier Object 5 in Fourier Space


Messier object 5 (M5) clearly shows that somewhere in the universe, a noisy television exists.
Messier Object 30 in Fourier Space


M30 likewise.
Messier Object 68 in Fourier space after recentering


M68 shows a planet with 50 moons and a race of sentient dogs.

*Final interpretations will be subjected to peer review as of Aug 2090.

Areas for Further Research:
The three Messier objects I have thus far decoded clearly show a low signal-to-noise ratio. In analyzing other Fourier images, I note that bright pixel clusters are extremely rare. Rather, the image is composed of individual pixels of varying brightnesses(eses). Thus, my photos of stars are not true Fourier images. For better results, I must use a diagram of the stars instead of a photo, with each star being a maximum of one pixel in diameter.
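
One way to get such a diagram from a photo might be to keep only the local brightness peaks, something like this scipy sketch (the neighborhood size and threshold are guesses):

import numpy as np
from PIL import Image
from scipy.ndimage import maximum_filter

photo = np.asarray(Image.open("m68.png").convert("L"), dtype=np.float64)
peaks = (photo == maximum_filter(photo, size=5)) & (photo > 40)   # one pixel per local peak
diagram = np.where(peaks, 255, 0).astype(np.uint8)                # every star shrunk to a single pixel
Image.fromarray(diagram).save("star_diagram.png")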

3d Dot Heroes Screenshot

About five years ago I saw Mister Ghrib 4 : Carl’s Memories by synj, which reminded me of a 3D Super Mario Brothers 3.  What if you could take a 3d camera down into the 2d video game world so that you could see each individual cube that made up all the characters and the entire game world?  Wouldn’t that look cool?  Last year I stumbled onto Metroid Cubed, a full voxel port of the original NES Metroid game.  This was great, but it was still not quite what I had imagined.  I wanted the user to be able to zoom the camera in and see the lines between the blocks.
Well, today I did another search and found two great-looking games, Fez and 3D Dot Game Heroes. 3D Dot Game Heroes is exactly what I had in mind for an old-school 2D game brought to 3D. You can see the individual blocks of every Zelda and Final Fantasy parody! Sadly, it’s slated for a November release in Japan only! (Curs-ed Language Barrierrrr!!) The other game, Fez, is a really neat concept, a sort of compressed 3D game world. It’s 2D, except while you’re changing viewing angles, when it briefly becomes 3D before the world is squished back to 2D. This allows some really nifty spatial puzzles, making the game look like huge fun to play. It’s as tough to explain as Narbacular Drop, so check out the trailer.
In between, I experimented with making my own voxel Mario (way bigger pain than it was worth), attempted to program my own game in OpenGL (another big pain), and saw some other videos that reminded me of this cool concept. Fez and 3d Dot Heroes are way better!