
Xcoders Seattle: How to App Video

These are the “show notes” from my How to App Video talk at Seattle Xcoders, on May 21, 2014 at 7:00 pm.

Slides, AV Script Format, and Videos

  • My Xcoders talk was recorded and is now available online (and embedded above)!
  • My Slide Deck — not sure how helpful this is without the talk, but there it is. These are actually the slides from the previous presentation of this talk at Renaissance 2014, but very little has changed, aside from adding MouseTools and covering both Mac and iOS instead of just iOS.
  • AV Format Markdown CSS (on GitHub) to use in BBEdit, Marked, and Ulysses
  • If you’re printing out the script from BBEdit, and want the shading to print behind the video side of the script, check out the instructions in this post.
  • At the moment, BBEdit doesn’t obey the CSS page-break styles (which are there to avoid weird widow/orphan issues) when printing. It seems to be an Apple Web View bug, and they are working on a solution.
  • Final OmniOutliner View Settings “Support Short” video from The Omni Group
  • Final video from MartianCraft
  • Final spec spot, and a post on how we did some of the special effects.

Tools

More Bits

Renaissance 2014 — How to App Video

These are the “show notes” from my half of the Master Video talk at Renaissance 2014, on January 21, 2014 at 10:30 am.

Slides, AV Script Format, and Videos

Tools

More Bits

Vertical Horizon

Ever since the iPhone first started shooting video, people have decried the use of the vertical orientation. Why would you do that? It looks so horrible! It’s unnatural! Hang on a moment while I pass judgement on you.

Stop it.

Let’s take a look at the history of film aspect ratios for a moment. Sure, the first film format was 4:3, just like our old TV sets — slightly wider than it is tall. In fact, TV cribbed the 4:3 ratio from film, and it wasn’t until TV started sucking away some of the film audience that the movies started to get wider and wider and wider.

The point being, aspect ratio is an artistic choice, and mostly a gimmick to get people back in theaters. None of those aspect ratios are “right” — not even 16:9, which was a compromise between many ratios for an acceptable film “fit” when TV stole widescreen back for itself (and pushed the movies into another 3D frenzy, which is a rant for another day). Even 9:16 (the iPhone’s vertical video ratio) is just another choice in a long line of choices.

And why shouldn’t you shoot video vertically? Apple’s own ads show people chatting on FaceTime with the camera held vertically. Our faces are vertical. There are tall buildings, and kids coming down playground slides. I argue that, sometimes, it’s a really good fit.

Most of the arguments against vertical video seem to boil down to one of two things. One is some pseudo-scientific mumbo-jumbo about how our eyes are set horizontally in our heads, so our natural field of vision is wider than it is tall, and we should obey that restraint. (Yes, art is all about obeying natural restraints, and conforming to convention.) The other is an argument that the way that we share video now, via YouTube and AirPlay-ing to our AppleTVs, demands that wide ratio to fit the screen. This latter theory has some merit, but I would argue that the video sharing sites should accommodate multiple aspect ratios in the way they present the videos, instead of letterboxing things inside a widescreen frame. (At least Vimeo and Flickr seem to handle this properly.)

That leads us to Horizon, a new app that uses the accelerometer in the phone to detect the angle of the phone while it’s shooting video, and automatically crop the video to a level 16:9 horizontal image.

This is super clever, and certainly fills a need — sometimes you do want perfectly level horizontal video, damn the resolution (the crop in vertical mode has only 32% of the resolution of the full image; the widest level 16:9 crop inside a 9:16 portrait frame is the full frame width by 9/16 of that width, which works out to (9/16)², or roughly 32%, of the pixels). Probably most people just want a video that looks nice when they play it on their TV, or share it on YouTube. This will do that, and quite nicely.

The more interesting thing to me is how it enables a unique interface for the zoom function on the phone. Now you can use the angle of the phone to control the crop, instead of clumsily sliding your finger on the screen while you’re trying to hold the phone still.
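
If you’re curious how an app might sense that angle, here is a rough Swift sketch of the general approach (my own illustration, not Horizon’s actual code): read the gravity vector from Core Motion and work out the device’s tilt within the screen plane, which is what you would then use to rotate and size a level 16:9 crop, or to drive a zoom.

```swift
import Foundation
import CoreMotion

let motionManager = CMMotionManager()

/// Reports the device's tilt within the screen plane, in radians:
/// roughly 0 when the phone is upright in portrait, ±π/2 in landscape.
func startTrackingTilt(onTilt: @escaping (Double) -> Void) {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let gravity = motion?.gravity else { return }
        // Gravity points along the device's -Y axis when the phone is held
        // upright, so this angle is the roll away from vertical.
        onTilt(atan2(gravity.x, -gravity.y))
    }
}
```

From that angle, an app can counter-rotate its crop rectangle so the horizon stays level; fitting the largest 16:9 rectangle inside the rotated frame is the fiddly trigonometry I’ve left out here.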

It’s all about using the camera in creative and unique ways — there is no one “right” way to shoot video. Just a plethora of interesting decisions.

Fingers, Virtual and Otherwise


Recently, a few developers were having a discussion on ADN about a promo video for a new iPad app. Uli Kusterer raised the point that the video was missing something: fingers. It was hard for him to tell what was a tap, and what was something that was merely happening as normal in the app. Since the app in question is a game with a lot of animation and things popping in and out, it’s easy to see how things might get confusing.

Fingers are a really important part of an iOS app demonstration video. There are definitely a few ways to make that happen (and there was some speculation about how it might be done better in that ADN conversation). I’ll get to some techniques you might use later, but first, perhaps we should talk about why it’s so important.

Why Fingers?

When you take a screencast of a Mac, unless you’re doing something really fancy, your cursor appears in the video. The cursor is also known in certain circles as the pointer because, well, it points to things. You can even wave it around and gesture and point to things that you’re not even going to click on. It’s the digital extension of that digit on your hand that you point to things with all day, every day. Always pointing at things, you are.

The cool thing about pointing and gesturing at stuff is that the viewer’s eyes find it really useful, so they can anticipate. The human eye and brain are a pretty good team when it comes to following the motion of an object, and when they have that, it’s a lot easier to follow your cursor from one action to the next than if your cursor, say, suddenly jumped from one side of the screen to the other and you had to search around to figure out where it had gone.

However, when you take a straight screencast of an iPhone or iPad app, your finger doesn’t show up on the screen. You can’t gesture to things. The viewer’s eye can’t track where you’re going next, so it’s really easy to get lost and frustrated, trying to figure out what’s going on.

Giving Your App the Finger

Now, when I talk about fingers in a video, I don’t necessarily mean a literal human finger. It could be a simple circle representing a finger pad. The important thing is that there’s some way for your viewer to see where you’re going, and be able to follow along.

Let’s take a look at some options for getting that in there. I’ll try to go from least to most complicated, and give you an idea of the pros and cons of each technique. I don’t think any one technique will be right for every app, so it’s good to know your options.

Run your app in the simulator and use SimFinger

  • Maybe you think SimFinger is creaky and old, but you know what? It still does what it needs to do, and does it well.
  • SimFinger replaces your Mac’s cursor with a simulated finger pad that changes from convex to concave when you click, representing a tap with your virtual finger.
  • It also gives you iPhone and iPad frames to put around your simulator screen, if you’re into that kind of thing.
  • Con: It’s the simulator, so it may not look and act exactly the same as the device, or may not be viable if you have features that don’t work well in the simulator. It will also be hard to use this for games with crazy fast multi-tapping action. Or any multi-tapping action at all, really. Your mouse is really only equivalent to one finger, or maybe two if you use the pinch mirroring features in the simulator.

Run your app in the simulator and use ScreenFlow

  • You can also do this kind of cursor replacement with ScreenFlow, and maybe you’re already using ScreenFlow to record anyway. In fact there’s a setting for the mouse pointer called Circle – Light that looks remarkably like Ye Olde SimFinger.
  • You can adjust the transparency of the finger cursor, so that you can always see what’s underneath.
  • Con: Still the simulator.

Shoot a video of a human finger actually using the live app on a device.

  • You capture the video all in one go, so the shoot itself seems pretty easy.
  • Con: The shoot is actually terribly difficult because the color temperature in the environment is very hard to match to the color temperature of the display. And even if you have a fancy bank of LED lights where you can dial up different light temps, it’s still really difficult to make the live app on the device look decent because the contrast and brightness of your live scene is never going to match the screen very well.
  • Also, there’s a big LED light bank shining on your device, so: GLARE. No, you can’t keep the light off the screen. Your finger needs it.
  • Also also, the video you shoot of a screen is never going to look as clean and sharp as a raw screen recording.
  • Tip: You can maybe get away with it if you want to go for the silhouette effect. Use very dim lighting or just accent lights on the real world, and adjust your camera for the best exposure of the screen.
  • Con: Your finger, and maybe the rest of your hand, obscures the screen, so it might still be hard to see what’s going on.

AirPlay to your Mac and record the screen with TouchPosé

  • TouchPosé is a set of classes you can add to your app to make semi-transparent circles appear when you touch the screen. (A minimal sketch of the general idea appears after this list.)
  • Con: It only registers touches, so I’m not sure I can recommend it. There is nothing for the eye to track from one tap to the next, so I don’t think it’s much of an improvement. Perhaps if you’re doing a live demo or a presentation somewhere on a projector, and you can explain to people what you’re doing in more detail, but in a fast-paced demo video, it’s easy to lose track of the little tap circles, especially if it’s a game or other app where random circles popping up could well be part of the app’s animation.
  • Caveat: I haven’t used this myself — only seen it in other folks’ videos. Perhaps there’s an option to customize it in such a way that would make it more clear?
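
For what it’s worth, the basic touch-circle trick is simple enough to sketch. This is not TouchPosé’s actual implementation, just a minimal Swift illustration of the idea: a UIWindow subclass that drops a fading circle wherever a touch begins.

```swift
import UIKit

final class TouchIndicatorWindow: UIWindow {

    override func sendEvent(_ event: UIEvent) {
        super.sendEvent(event)
        guard let touches = event.allTouches else { return }
        for touch in touches where touch.phase == .began {
            showCircle(at: touch.location(in: self))
        }
    }

    private func showCircle(at point: CGPoint) {
        let diameter: CGFloat = 44
        let circle = UIView(frame: CGRect(x: 0, y: 0, width: diameter, height: diameter))
        circle.center = point
        circle.layer.cornerRadius = diameter / 2
        circle.backgroundColor = UIColor.white.withAlphaComponent(0.5)
        circle.isUserInteractionEnabled = false
        addSubview(circle)

        // Fade the marker out quickly so it reads as a brief tap indicator.
        UIView.animate(withDuration: 0.4, animations: {
            circle.alpha = 0
        }, completion: { _ in
            circle.removeFromSuperview()
        })
    }
}
```

You would use something like this as the app’s main window while recording, then take it back out before shipping.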

Use TouchPosé to create a skeleton for the video, and animate your own finger.

  • You could add an animation on top of the TouchPosé style video, so that you can help the viewer anticipate. The circles that appear let you know when a tap happened, and give you timings for when you need to animate your pointer overlay into that position.
  • The animated pointer could be anything — matching the TouchPosé circle exactly, just a simple SimFinger-style dot, or any other element you could move around. It could even be a cut-out photograph of a hand with a pointing finger, and you could make it semi-transparent so it doesn’t obscure the rest of the screen.
  • Con: A lot of work. Probably requires learning After Effects or Motion if you’re not already a video guy like me.

Shoot with green screen on the device and record the app separately; sync them up in editing and composite

  • Looks great if you do it right. You can balance the colors and luminance between the real shoot and the screencast, and create something that, while it never really exists that perfectly in nature, makes for a beautiful video.
  • Cons: Toil. It’s a royal pain in the ass to pull off. You need to figure out where your taps will be, and record them so that they sync up with the video. Not too bad if you’re just cutting away for a “real world” demo moment or two, but if you’re doing a full-blown app demo, it can be deadly.
  • A green screen on the device is hard to balance when you’re shooting, to get a good, clean key. You’ve got to get it bright enough that you can pull a good key from the green, but not so bright that the green spills onto the demonstration finger and half of your finger disappears along with the green.
  • You might consider just tapping on a black screen, and rotoscoping your hand to create a matte, but that can take a long time and look unnatural if you’re not skilled at it.
  • We used both of these techniques when I worked on the videos for Briefs recently. I’ll talk about the process in more detail in my next post.

Shoot with an opaque green screen overlay on the device running the actual app

  • Peter Hosey and Mike Lee came up with this idea during the aforementioned ADN discussion. I haven’t tried it, but I think it’s pretty darn clever.
  • The real trick is, what would you make the overlay out of? It needs to be opaque enough to give you a nice clean green, but thin enough that you can transmit touches to the device through the overlay.
  • Jose Vazquez suggested including a touch-transmitting green UIView on top of the real views of the app when viewed on the device, but removing that green from the AirPlay output that you would record, so you wouldn’t need anything physical at all (see the sketch after this list).
  • If you’re using a physical green material and filming the live action off-axis, it would also need to be thin enough not to show any significant thickness or edge that would have to be removed in compositing. However, I think Mike and Peter were thinking it would be shot dead-on, and the fingers simply composited over the screencast — not composited back into the live action, like we did for Briefs.
  • You would then record both the camera shot of the live fingers and the app’s screen video (via AirPlay) simultaneously, and composite them together. You could even make the fingers semi-transparent to show the full app screen while you demo the app.
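
Here is a rough Swift sketch of how I read the software-only variant (untested, and it glosses over the hard part of keeping the two screens showing the same thing): a solid green, touch-transparent view covers the app on the device for the camera, while the external AirPlay screen gets its own overlay-free window to record from. The VirtualGreenScreen class and the makeCleanDemoViewController factory are my own placeholder names.

```swift
import UIKit

final class VirtualGreenScreen {

    private var externalWindow: UIWindow?

    /// Covers the device's window in solid green while letting every touch
    /// fall straight through to the real app underneath.
    func addOverlay(to deviceWindow: UIWindow) {
        let overlay = UIView(frame: deviceWindow.bounds)
        overlay.backgroundColor = .green
        overlay.isUserInteractionEnabled = false   // touches pass through
        overlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        deviceWindow.addSubview(overlay)
    }

    /// When an external (AirPlay) screen connects, put a clean, overlay-free
    /// copy of the demo UI on it. `makeCleanDemoViewController` is a
    /// hypothetical factory you would supply for your own app.
    func startCleanExternalOutput(makeCleanDemoViewController: @escaping () -> UIViewController) {
        NotificationCenter.default.addObserver(
            forName: UIScreen.didConnectNotification,
            object: nil,
            queue: .main
        ) { [weak self] note in
            guard let screen = note.object as? UIScreen else { return }
            let window = UIWindow(frame: screen.bounds)
            window.screen = screen
            window.rootViewController = makeCleanDemoViewController()
            window.isHidden = false
            self?.externalWindow = window   // keep a strong reference
        }
    }
}
```

You would then record the clean AirPlay output and the camera’s view of the green device, and composite the keyed fingers over the screencast in editing, just as with the physical overlay.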

All interesting ideas, with varying degrees of preparation, difficulty, and results. What method will you use on your next app video? I look forward to hearing your thoughts.

Video and You


Several people have encouraged me recently, since I have feet in both the video production and Mac & iOS development camps, to blog about video production for developers.

On September 26, 2009, I gave a Blitz Talk at the C4[3] developer conference entitled Video and You, intended to give developers some quick guidelines and tips about developing promotional videos for their apps.

A lot has changed since then, but please enjoy this PDF version of the talk and slides as something small to tide you over while I work up some more modern musings on the subject.

HDV Workflow

Quick word of advice.

If you’re ever editing anything shot on HDV: NEVER EVER capture low-rez DV proxies and expect to recapture in HD clean at the end. Yes, I know the HDV decks have that handy-dandy feature to downconvert to DV over FireWire. DON’T DO IT. NONE of these HDV cameras seem to record clean timecode, and it will never match back exactly. If you HAVE to do it via proxies (and I can’t think of a compelling reason why), crash dub all your source tapes with clean TC to something sensible first, like HDCAM or DVCProHD. Of course, then, you’re adding a generation of compression.

Better yet, just capture the HDV as native HDV to begin with. It’s only about the same data rate as DV. There’s really no reason not to — the native HDV data is the best your footage is ever going to look. If you’re worried about rendering issues, I suggest editing in a ProRes timeline in Final Cut Pro. The HDV plays as a realtime preview, no problem, and all your renders go to ProRes, so no crunchy graphics or chroma resolution issues, and your final master QuickTime gets rendered out as ProRes. Yay!

If you really need to edit in another format because your corporate overlords demand it, capture over SDI, or do a transcode before you start editing. But then, have a load of drive space available and edit in your final format. Trying to match back timecode to an HDV master is just asking for a world of hurt. Believe me, I know.

Enough ranting for now. Back to eye-matching HDV…