Scott Simmons's Blog

The iPod as a timecode slate

Originally posted on The Editblog.

 

I recently got suckered into a low-budget music video. How low budget? The decision was made not to hire an audio person for playback. Without an audio person, there would be no audio master made for the song and no timecode slate for syncing later in post. What was the solution? Use an iPod with video and make our own timecode slate!

We dropped the audio into a timeline and applied a timecode generator over slug for one version of the song and over color bars for another, so we would know at a glance which version we were working with. Having the timecode on the iPod's video screen was also handy because we added visual cues like chorus, bridge, and solo to show where we were in the song.

Timecode is always something to think about. We were shooting 720p60, which of course the iPod doesn't support, so sync was a bit off when the clips were brought in. But since the camera recorded the audio playback with its own mic, there was always an audio reference to help with sync.

What I did first was drop the actual piece of video we put on the iPod into a timeline. It didn't matter that it needed rendering, as I just needed to see the numbers. I assigned the timeline's master timecode to match the burn-in. It didn't match exactly, since we shot at 59.94; if you were shooting at 29.97 it would match perfectly and you could skip the next step. Then I dropped all the performance takes (there weren't too many of them) into the timeline, using the iPod display to get each clip nearly into sync, and adjusted each one until the sync was just right. I then assigned each take an auxiliary timecode and made a multiclip (I was using Final Cut Pro), grouping via aux timecode, which gives you a multiclip group with all of your takes. Voila! Instant music video, no timecode slate needed... though I still prefer a professional audio pack with a real timecode slate if at all possible.
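(If you want to see the frame math behind that mismatch, here's a minimal sketch. These helpers are mine, not anything in FCP, and they assume non-drop-frame timecode; real 29.97/59.94 drop-frame counting adds its own wrinkles.)

```python
def tc_to_frames(tc, fps):
    """'HH:MM:SS:FF' -> absolute frame count at an integer frame base."""
    h, m, s, f = (int(p) for p in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps):
    s, f = divmod(frames, fps)
    h, s = divmod(s, 3600)
    m, s = divmod(s, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# The iPod burn-in counts on a 30-frame base, but the camera shoots
# 720p60, so the same moment in the song sits at twice the frame count:
print(tc_to_frames("01:00:01:00", 30))  # 108030 frames on the burn-in base
print(tc_to_frames("01:00:01:00", 60))  # 216060 frames at the camera's rate
```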

One little gotcha is worth noting if you are going to use an iPod with video as a poor man's timecode slate on your next shoot. Be sure to set the burned-in timecode to begin at 01:00:00:00 (one hour). Besides that being the standard for program start times, and how professional audio packs are set up, if you have it start at the zero hour, 00:00:00:00, as was done with one of our slates:

[screenshot: one of our slates, with the burn-in starting at 00:00:00:00]

you will have problems grouping via auxiliary timecode... at least when using Final Cut Pro. When your timecode begins at 00:00:00:00 and you assign that auxiliary timecode to your clips, FCP has to adjust where the clips sit in order to get them into proper sync. When it has to move a clip backward, the clip wraps into the 23:59:59:xx timecode range, and that screws up the grouping.
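Here's the wrap-around in plain numbers, a toy sketch of the modulo-24-hour arithmetic (30 fps non-drop assumed; this mirrors the wrap itself, not FCP's internals):

```python
FPS = 30
DAY = 24 * 3600 * FPS  # frames in one 24-hour timecode day

def shift(tc, delta_frames):
    """Offset a 'HH:MM:SS:FF' timecode by delta_frames, wrapping at 24 h."""
    h, m, s, f = (int(p) for p in tc.split(":"))
    total = (((h * 60 + m) * 60 + s) * FPS + f + delta_frames) % DAY
    s, f = divmod(total, FPS)
    h, s = divmod(s, 3600)
    m, s = divmod(s, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

print(shift("01:00:00:00", -45))  # 00:59:58:15 -- still groups fine
print(shift("00:00:00:00", -45))  # 23:59:58:15 -- wrapped past midnight
```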

This is how properly aux-coded clips are supposed to line up:

[screenshot: the multiclip group with every take aligned via aux timecode]

You can see the alignment of all the clips and how they fall into their proper place within the sequence. The takes that are just the bridge (like [1]-11, [2]-09, [6]-12) fall in the third quarter of the song. These would have been impossible to group with the first-verse takes ([12]-06, [13]-08, [14]-13) using just an IN or OUT point, as there is no overlap. Ahhh, the joy of auxiliary timecode!
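A toy illustration of why that works: when grouping by aux timecode, each take simply lands at its aux TC offset from the song's one-hour start, whether or not it overlaps any other take. (The take names echo the screenshot; the aux values here are made up.)

```python
def tc_seconds(tc, fps=30):
    """'HH:MM:SS:FF' -> seconds, non-drop-frame."""
    h, m, s, f = (int(p) for p in tc.split(":"))
    return (h * 60 + m) * 60 + s + f / fps

SONG_START = tc_seconds("01:00:00:00")
takes = {
    "[12]-06 (first verse)": "01:00:04:12",
    "[1]-11 (bridge only)":  "01:02:41:00",
    "[6]-12 (bridge only)":  "01:02:43:18",
}
for name, aux in takes.items():
    print(f"{name}: {tc_seconds(aux) - SONG_START:6.1f} s into the song")
```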

This is how FCP wants to line them up if you have 00:00:00:00 auxiliary timecode in the clip:

[screenshot: the same takes scattered out of position in the multiclip group]

As you can see, that is not a proper multiclip group. The moral of this story... don't use 00:00:00:00 timecode!

Originally posted on The Editblog.


Posted by: Scott Simmons on Jan 16, 2008 at 10:48:28 am | Comments (2) | Tags: ipod, editing, apple, technology, final cut pro, indie film, workflow

Lots about SmoothCam

Originally posted on The Editblog.

[screenshot: SmoothCam estimating 14 hours of analysis]

Does that say 14 hours?

Yessssssssssssssssssss it does.

There's a great thread at the Apple support forums called "the SmoothCam dirty little secret" about the massive analysis times for SmoothCam, the magic camera-stabilization tool in Final Cut Pro 6. Yep, they are quite large. According to the FCP 6 user manual, this is how SmoothCam works:

Unlike other filters in Final Cut Pro, the SmoothCam filter must analyze a clip's entire media file before the effect can be rendered or played in real time. Using the SmoothCam filter requires two independent phases:

• Motion analysis: Pixels in successive frames are analyzed to determine the direction of camera movement. Analysis data is stored on disk for use when calculating the effect.

• Motion compensation: During rendering or real-time playback, the SmoothCam filter uses the motion analysis data to apply a "four-corner" transformation to each frame, compensating for camera movement.
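To make those two phases concrete, here's a toy, translation-only sketch of the analyze-then-compensate split. This is my own illustration, not Apple's code (the real filter estimates a richer four-corner transform); the point is simply that phase one does all the heavy lifting and its results can be cached:

```python
import numpy as np

def analyze(frames):
    """Phase 1 (motion analysis): estimate each frame's cumulative drift
    via phase correlation. This is the slow pass; in SmoothCam's case
    its output is what gets written to disk."""
    offsets = [(0, 0)]
    for prev, cur in zip(frames, frames[1:]):
        # Peak of the normalized cross-power spectrum sits at the
        # (dy, dx) shift of cur relative to prev.
        spec = np.fft.fft2(cur) * np.conj(np.fft.fft2(prev))
        corr = np.fft.ifft2(spec / (np.abs(spec) + 1e-9)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = corr.shape
        dy = dy - h if dy > h // 2 else dy  # unwrap FFT periodicity
        dx = dx - w if dx > w // 2 else dx
        offsets.append((offsets[-1][0] + dy, offsets[-1][1] + dx))
    return offsets

def compensate(frames, offsets):
    """Phase 2 (motion compensation): cheap by comparison. Shift each
    frame back by its accumulated drift (a real tool warps and zooms
    instead of wrapping edges the way np.roll does)."""
    return [np.roll(f, (-int(dy), -int(dx)), axis=(0, 1))
            for f, (dy, dx) in zip(frames, offsets)]
```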

It is indeed a two-step process, and that first step is a biggie. The manual suggests using a QuickTime reference file to cut down on the huge analysis time. Do you really need to do that? I ran a little test on some HDV footage with my dual 2.7 GHz G5, and here are the results.

I took a 1-minute, 35-second (01:35:26 to be exact) shot that is its own individual QuickTime file and edited a 12-second, 26-frame piece of it into a sequence. I then exported that same 12:26 shot to a QuickTime reference file, as suggested in Chapter 22 of the manual, "Using the SmoothCam Filter." The result? The nearly 13-second QuickTime reference clip took almost an hour to analyze:

[screenshot: 53 minutes. 13 sec. self-contained masterclip - HDV]

The other 13-second shot, still part of the original one-and-a-half-minute master clip: 5 hours.

[screenshot: 5 hours. 13 sec. clip part of 1.5 minute masterclip - HDV]

After setting up this little test, I read another test suggesting the huge analysis times might be a result of the HDV codec and its long-GOP, interframe MPEG-2 compression. So I then exported these same clips using Apple's ProRes 422 codec.

The 13-second self-contained QuickTime in ProRes 422: around 36 minutes.

[screenshot: 36 minutes. 13 sec. self-contained masterclip - ProRes 422 (HQ)]

The 13-second ProRes 422 cut that was still part of the original one-and-a-half-minute converted master clip: 5 hours.

[screenshot: 5 hours. 13 sec. clip part of 1.5 minute masterclip - ProRes 422 (HQ)]

A bit less time, but not by much. HDV file sizes and data rates are quite small, while those of the ProRes 422 (HQ) clips are very large. So just for comparison, I exported the same clips as DV-NTSC to see the difference.

[screenshot: 3 minutes. 13 sec. self-contained masterclip - DV]

[screenshot: 21 minutes. 13 sec. clip part of 1.5 minute masterclip - DV]

So from the looks of this informal little test, many factors contribute to how fast or slow SmoothCam can analyze a clip. HDV clips have low file sizes and data rates, but the complicated interframe (long-GOP) compression seems to cause a lot more difficulty in analysis, kind of like conforming HDV for an output to tape. It's a no-brainer that regular DV25 would be fastest since it has small data rates and frame sizes. Just because we can, let's try it with DVCPRO-HD 720 files:

[screenshot: 14 minutes. 13 sec. self-contained masterclip - DVCPRO-HD 720p60]

[screenshot: about an hour. 13 sec. clip part of 1.5 minute masterclip - DVCPRO-HD 720p60]
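For context, here's a hedged back-of-the-envelope on the codecs involved, using nominal published data rates and an assumed 15-frame HDV GOP:

```python
# Nominal data rates in Mb/s (real files vary with flavor and content).
rates = {
    "HDV (long-GOP MPEG-2)": 25,
    "DV25 / DV-NTSC (intraframe)": 25,
    "DVCPRO HD (intraframe)": 100,
    "ProRes 422 (HQ) at 1080 (I-frame only)": 220,
}
seconds = 13
for codec, mbps in rates.items():
    print(f"{codec}: ~{mbps * seconds / 8:.0f} MB per {seconds} s clip")

# HDV and DV25 share the same 25 Mb/s rate, yet DV analyzed far faster.
# The difference is structural: in a long-GOP codec, decoding an
# arbitrary frame can mean decoding everything since the last I-frame.
gop = 15
avg_extra = sum(n % gop for n in range(gop)) / gop
print(f"average extra frames decoded per random access: {avg_extra}")  # 7.0
```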

It makes sense that DVCPRO-HD 720 would land somewhere in the middle. What this tells me is to take the analysis time into consideration when using SmoothCam. If you can build that time into the edit, it will produce amazing results. To see the "snake shot" before and after SmoothCam, or side by side, have a look at the H.264 QuickTime clips below:

snake shot WITH SmoothCam - 1.4 mb

snake shot WITHOUT SmoothCam - 2 mb

both snake shots SIDE BY SIDE - 2.6 mb

As you can see, it works pretty well, kind of like the SteadyShot mode on a Sony camcorder. Of course, all this can be avoided by just using a tripod!

But then what about similar offerings from other manufacturers?

[image: the snake shot as treated by each stabilizer]

After my comparison tests on the new SmoothCam filter in Final Cut Pro 6, I saw a post on Splice Here wondering how long Avid's Stabilize effect would take and what the results would be. I wondered the same thing, so let's give it a try.

Avid has a couple of different ways to stabilize a clip. I chose the simplest method, called Region Stabilize: drop the effect on a clip, position the wireframe over what you want it to track, choose appropriate zoom settings, and render.

[screenshot: the Region Stabilize wireframe positioned over the area to track]
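Conceptually, region stabilization is simple enough to sketch. This toy version (mine, not Avid's) tracks one patch with a brute-force SSD search and counter-shifts each frame; there's no bounds checking, so keep the patch and search window away from the frame edges:

```python
import numpy as np

def track_region(frames, top, left, size=32, search=8):
    """Follow a size x size patch from frame 0; return (dy, dx) per frame."""
    ref = frames[0][top:top + size, left:left + size].astype(float)
    offsets = [(0, 0)]
    for frame in frames[1:]:
        best, best_err = offsets[-1], np.inf
        py, px = offsets[-1]
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + py + dy, left + px + dx
                cand = frame[y:y + size, x:x + size].astype(float)
                err = np.sum((cand - ref) ** 2)  # sum-of-squared-differences
                if err < best_err:
                    best, best_err = (py + dy, px + dx), err
        offsets.append(best)
    return offsets

def stabilize(frames, offsets):
    """Counter-shift each frame so the tracked region stays put."""
    return [np.roll(f, (-dy, -dx), axis=(0, 1))
            for f, (dy, dx) in zip(frames, offsets)]
```

The zoom setting in the real effect exists for exactly the edge problem the np.roll line hand-waves away: counter-shifting leaves blank borders, so the effect scales the image up enough to crop them out.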

How long did it take?

[screenshot: the Region Stabilize render time]

20 minutes. That's a far cry from the 53 minutes of analysis that SmoothCam had to perform to stabilize the self-contained QuickTime of the same clip. What were the results?

snake shot w/ Avid Region Stabilize - 670k -- that's ssssssssssmooth

Someone on the Apple forums mentioned using iStabilize. There's a free demo, so let's toss it into the mix as well.

snake shot iStabilize - 1.32mb -- not a bad result, but the downside of a third-party app is that you have to export a QuickTime, perform the task, export from the app, and re-edit the stable clip into your edit. That's a lot of steps, and while the results were pretty good, iStabilize has a clumsy enough interface that it took several trips to the help files and several exports before I got it to export the stable shot and not the original shaky one. But it was fast, taking only a few minutes to analyze.

Want to see them all side-by-side?

snake shot ALL side-by-side - 5.7 mb -- The link to the left is a high-quality H.264 clip. You can also see the YouTube upload below. It's nice to see that these tools work pretty well. Of course, there are a lot of different settings that can be tweaked depending on the kind of shot and how much shake and jitter it has. All of these examples were applied with their default settings, with the exception of iStabilize, where I tweaked one slider for the most stabilization possible. There are a lot of other ways to perform this same task (After Effects, Shake, Combustion), so whatever your tool of choice these days, you can get a great result, especially if you have the time.

 


Posted by: Scott Simmons on Jun 4, 2007 at 7:50:11 pm | Comments (1) | Tags: avid, apple, final cut pro

Hands on with Avid ScriptSync

With the new version of Media Composer, 2.7, Avid is introducing a feature called ScriptSync. The basic idea behind ScriptSync is that Avid will take an imported script, "listen" to each take, and then line the script accordingly. This idea of script-based editing comes from feature-film editing and the job of the script supervisor, who physically "lines" the script on set so the editor knows what the takes are, what is covered, and who is on and off camera for each line, as well as any other information that might benefit the editor. Having an electronic lined script right there in the editing application in theory gives the editor much easier access to the individual takes: a little picture tile for each take, a line that runs the length of the take, and marks that can be double-clicked to load a particular take at a specified line in the script. It's a great idea. The downside comes in setting up that script. It can be quite a labor-intensive process to do it all manually with clicks and drags and button presses, a fine task for an assistant editor but maybe just too time-consuming for the lead editor.

In comes ScriptSync. It uses a kind of phonetic voice recognition to line the script for you. My first thought when I heard about this was of IBM's ViaVoice software, the long process it took to "train" it to understand one single voice, and how it really wasn't all that accurate. If that were the kind of technology used in ScriptSync, it would be a dud. But this phonetic type of voice recognition is miles ahead of something like ViaVoice, and it works quite well. I was able to do a small test with a beta version and was very impressed with its accuracy. The scene I used was set in a kitchen with two adults and a couple of kids. ScriptSync was accurate on the majority of the takes and was even able to put proper marks on some of the overlapping dialog. If there was an ad-lib or something not in the script, the line just didn't get a mark. With script-based editing you have a long vertical line that shows the length of the take and tiny horizontal marks at the dialog. ScriptSync wouldn't make the vertical lines; I had to drag them out myself. But after marking the length of the take, you choose ScriptSync from the drop-down menu, select the options you want, and off it goes. It's quite speedy and, as I said before, surprisingly accurate.
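Avid hasn't spelled out the internals, but the general shape of the problem, aligning a known text against time-stamped recognizer output, can be sketched with classic edit-distance alignment. The sketch below is purely illustrative, using whole words where the real thing works on phonemes:

```python
def line_script(script_words, recognized):
    """recognized: list of (word, seconds). Returns {script index: seconds}
    for every script word the alignment matched."""
    n, m = len(script_words), len(recognized)
    # dp[i][j]: min edits aligning the first i script words to the first j heard
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if script_words[i - 1].lower() == recognized[j - 1][0].lower() else 1
            dp[i][j] = min(dp[i - 1][j - 1] + sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    marks, i, j = {}, n, m  # trace back, marking a time at every match
    while i > 0 and j > 0:
        match = script_words[i - 1].lower() == recognized[j - 1][0].lower()
        if dp[i][j] == dp[i - 1][j - 1] + (0 if match else 1):
            if match:
                marks[i - 1] = recognized[j - 1][1]
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return marks

# An ad-libbed "um" in the audio simply never earns a mark:
script = "pass the salt please".split()
heard = [("pass", 1.2), ("um", 1.6), ("the", 1.9), ("salt", 2.1), ("please", 2.6)]
print(line_script(script, heard))  # times for all four script words
```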

I think one of the best uses for ScriptSync will be lining transcripts of on-camera interview subjects. Any editor who has ever gotten a notebook filled with interview transcripts can see the benefit of having that script on screen in front of you, with the ability to click any line and have that clip appear in the source monitor at that selected line. ScriptSync can work with many different text files, not just a Hollywood-formatted screenplay, as it has options for dealing with how the file is formatted. Is ScriptSync going to be the killer feature that gets a lot of Xpress Pro users to upgrade and/or brings new people to Media Composer? Only time will tell, but those who try it in 2.7 will find themselves with a new tool that makes life a little bit easier.


Posted by: Scott Simmons on Apr 8, 2007 at 1:06:21 pm | Comments (4) | Tags: avid



