Much of Chapter one is a complete copy/paste of the first book. The TOC is better organized and it is apparent that there will be original writing here. However, the first chapter was a pretty big disappointment. Browsing the TOC, I'm hopeful that this won't set the tone for the rest of the book. Overall, the book appears to be better organized, which is certainly a positive. More as I go - stay tuned.
I can't believe this much time has passed since my last blog entry here. But no one has bugged me about it for a while either, so the old adage rings true - out of sight, out of mind. That said, production has slowed up a bit so I'm back to working on our Final Cut Server implementation.
Our network engineer and I have been working to get our SAN running properly before adding Final Cut Server to the mix. He set up a test bed and was hard at work for about six weeks working out the kinks in our system. He was able to consolidate six volumes into three and make them much faster and more reliable.
With that in check, we're moving on to our next phase of a four year project. I have a boatload of notes, notecards and books. Just today, I received the new Final Cut Server 1.5 book by Drew Tucker. It came in a bit lighter than Peachpit Press had advertised, so I'm a bit disappointed in that respect. It was sold as 360 pages, but weighs in at 271 pages including the index. I was really hoping for something comprehensive. As I go through the book, I hope it will help me bring together all of my sources, including notes from my class.
One curveball I hadn't expected was my findings after using Telestream Episode. I find it far faster than Compressor - even in a cluster environment like we've used for a while. This recent discovery could have me re-thinking one of our main purposes for FCServer - creating proxy files and distributable media assets. I think I can get around that with some creative watch folder actions. The problem will be keeping the FCServer database up to date. Once I get the workflow redesigned in OmniGraffle, I'm sure the solution will become clearer.
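A minimal sketch of what the watch-folder side of that idea could look like, assuming the actual transcode happens in a separate process (Episode, in our case); the proxy-naming convention here is my own invention, just to show how a later catalog-update pass could match proxies back to their source assets:

```python
import os

# File types we'd consider "media" arriving in the watch folder (assumption).
MEDIA_EXTS = {".mov", ".mxf", ".mp4"}

def scan_watch_folder(folder, seen):
    """Return media files that have arrived since the last scan.
    'seen' tracks filenames already handed off to the transcoder."""
    new_files = []
    for name in sorted(os.listdir(folder)):
        _, ext = os.path.splitext(name)
        if ext.lower() in MEDIA_EXTS and name not in seen:
            new_files.append(name)
            seen.add(name)
    return new_files

def proxy_name(source):
    """Derive a predictable proxy filename so a catalog-update pass
    can match each finished proxy back to its source asset."""
    base, _ = os.path.splitext(source)
    return base + "_proxy.mov"
```

The predictable naming is the key design choice: if the proxy filename is derivable from the source, the database-sync step becomes a simple lookup rather than guesswork.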
I'm thinking of creating some screen captures as I go through this process. We'll see how that goes and I'll have to perhaps create a walkthrough for our internal training here. If I can get authorization, I might be able to release those to the public should there be any interest.
I'm starting on the book tonight with Lesson 1: Overview and Installation. Although I've already gone through these steps, I want to see if Drew has included something I might have missed. Ciao for now.
I've learned a bit about how we need to configure XSAN. And I've learned a bit about the differences between going by the book and what works in the real world. Our IT team has seen it all and they are all over this project now. We're at the start of sandboxing new topologies for XSAN 2. Since we only fairly recently started moving to XSAN 2 from 1.4, there are some feature differences that we needed to sync up on - namely better virtualization and multi-SAN. It looks like multi-SAN promises to be a great help for us. If we can take our six volumes and make them into two SANs, we can use our MDCs in tandem, each as its own primary controller and as the other's secondary. It seems this will give us another level of redundancy as well as allow us to sandbox new SAN configurations. The downside is that we can only pool LUNs within the same SAN - meaning the aggregate throughput of the system is potentially quite a bit less than in one storage pool. I think if we plan it out well, each SAN could have both high performance and older systems for a good mix on each. However, if we instead configure our older systems all together in one SAN, they could be in a pool also designated as the "test bed".
Suffice it to say, we're going to be experimenting for a few weeks or even months prior to our full rollout of our newly configured SAN. I'm going to reinstall Final Cut Server in the meantime and start testing it on the original (slower) SAN volumes so I can begin to customize the metadata fields, groups and sets we'll need. I'll have to be aware of what's happening with the devices since our current volumes will be destroyed and reconstructed as part of the new SAN. It's going to be tricky, so we're planning each step in as much detail as we can think of.
We entertained the thought of hiring an XSAN consultant, but we have a tenacious bunch of engineers and techs who pride themselves on learning on the fly. Thankfully, this is all augmentation of an existing workflow and we're releasing the new system in parallel with the old. So we'll continue to work on older projects as we always have with the new ones being tested in production as we go.
We're hedging our bets by adding a capture system in our studio which will capture to a DAS RAID alongside our existing system which captures to the SAN. So if our SAN goes down, our studio won't need to miss a beat.
Our XSAN volumes are set up all wrong, unfortunately. Apparently the person put in charge of our XSAN implementation has had no training and now we're faced with some pretty major roadblocks. I had previously purchased a book on XSAN and he's studying it now. I also found some very helpful info on the Xsanity wiki page, which has sample diagrams of some install examples. We have a few serious flaws in our layout that will cause us quite a bit of pain. However, if we don't suffer through the process of reconfiguring XSAN, we will certainly be on a path to failure with Final Cut Server. It is a bandwidth hog when used properly. And we cannot afford many hiccups in production.
First off, each RAID is its own volume and contains and controls its own metadata. I can see from the examples that we should have built our RAIDs (aka LUNs) so they were grouped into same-sized sets (aka Pools) over different controllers. XSAN treats the same-sized LUNs in a Pool as disks and writes to them in parallel - emulating the effect of a RAID 0 stripe. As they are configured, we are getting about 120-150 MB/sec. If properly configured, we should be getting two to three times that performance.
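A back-of-the-envelope way to see why the striping matters; the 85% efficiency factor is my own rough assumption, since real-world throughput stops scaling linearly once the controllers or fabric saturate:

```python
def pool_throughput(per_lun_mb_s, lun_count, efficiency=0.85):
    """Estimate aggregate throughput for same-sized LUNs striped in a pool.
    Assumes roughly linear scaling up to the controller/fabric limit;
    the default efficiency factor is a guess, not a measured number."""
    return per_lun_mb_s * lun_count * efficiency
```

With three LUNs at the ~130 MB/sec we see per volume today, that sketch predicts a bit over 330 MB/sec - in line with the two-to-three-times figure above.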
I think what we'll try to do is move as much media as we can to as few volumes as possible. Then, when we get our new RAID in a few weeks, build a new single volume using the techniques outlined in the XSAN documentation. I expect this entire process will take about a week in all, once the new system is here. We ordered the same sized drives, so at least the new RAID will match what we already have. I guess we need to be sure to earmark part of each RAID for metadata. According to Apple's documentation, we need about one gig per terabyte. We are using way more than that on our current configuration. So between that change plus consolidating the drives, we stand to regain a major amount of usable space.
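Apple's rule of thumb is easy to encode; rounding up to a whole gig is just my own conservative choice:

```python
import math

def metadata_reserve_gb(volume_tb):
    """About 1 GB of metadata space per 1 TB of data
    (per Apple's documentation), rounded up to a whole GB."""
    return max(1, math.ceil(volume_tb))
```

For a consolidated volume in the low-double-digit-terabyte range, that's on the order of a dozen gigs - a far cry from what our current configuration burns on metadata.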
These changes will be a great deal of work, but will net us much better performance and about one extra terabyte of space. On top of that, I'm not about to build an entirely new database and add tons of files to a much less than optimal infrastructure. I take the perspective that you cannot build a house atop a shifting foundation.
Another flaw that I'm unsure of is that our Macs are not on an isolated LAN. I have seen folks argue, and the documentation supports, that SAN systems need to be on a LAN separate from the rest of the company. I haven't seen any specific cases showing what this recommendation is based on. And our MIS director seems to think such a construction would be excessive. I admit that I cannot argue decisively on this issue. If anyone has some documentation, other than what I mentioned here, please do pass it along.
Being that we are mixing our Final Cut Server into an existing workflow, we have a couple of infrastructure issues to go over tomorrow. I'll be looking into these in even finer detail this evening. What I learned in the class is that Final Cut Server does not play well with the so-called magic triangle. This is a commonly used method by admins to sync up Open Directory (Mac server) groups with Active Directory (Windows server) groups. The bottom line is that we have to choose one method of user authentication or the other - we cannot run a hybrid of the two. Why some admins like that hybrid approach is still a bit of a mystery to me. Not to get bogged down by that detail, we'll be debating the pros and cons of each of the two methods.
I like the idea of not duplicating work. So, I like the thought of using Active Directory since our tech support crew already maintains that system for our Windows network. Since 98% of our network consists of Windows computers, it stands to reason that the Macs assimilate to the existing system and not the other way around. Make sense? I think so. If we go with Open Directory, we'd have to first learn about it, train others and maintain a completely different authentication system. That means additional resources would have to be allocated to make sure the AD and OD systems are synchronized manually - yikes!
The downsides to using AD, as I understand them, are:
1. This is a very new feature and not yet stable. Certainly, we'll have to test early and often, which we would have done anyway.
2. Each Mac has to be configured for Kerberos authentication. From what I can tell, this doesn't seem to be a big deal at all and can probably be automated for new client systems.
3. Additional configuration of the server. Again, this looks like a fairly simple process and would only have to be done once, in theory.
4. Each Windows client would have to be modified to include a Kerberos authentication file. This can likely be added to a login script for all Windows users within particular groups.
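For item 4, something like the sketch below could generate the file a login script drops onto each client; the realm and KDC names are placeholders for whatever your AD domain uses, and the exact file location depends on your Kerberos client:

```python
# Minimal krb5-style config template. The realm and KDC values passed in
# are placeholder assumptions - substitute your own AD domain controllers.
KRB5_TEMPLATE = """[libdefaults]
    default_realm = {realm}

[realms]
    {realm} = {{
        kdc = {kdc}
        admin_server = {kdc}
    }}
"""

def render_krb5_config(realm, kdc):
    """Render the Kerberos config text a login script would write out."""
    return KRB5_TEMPLATE.format(realm=realm, kdc=kdc)
```

Generating the file from one template, rather than hand-editing each client, keeps every machine in a group pointed at the same realm and controllers.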
All in all, I think we could do worse. At least in the cases I've read about, AD does work, but it's on the buggy side and takes extra setup to work in the first place. With all of the benefits of AD authentication, it makes sense to move forward with that. Barring any major concerns or objections, I think this is the direction we'll end up taking for group and user permissions on FCSvr.
The second major topic will be our XSAN configuration. Currently, it's not quite set up for optimal use. Each of our RAIDs is a separate volume - five volumes in all. Now, I'm no XSAN admin, but I can see from our book that we need to separate the functions of XSAN first and customize the volume and affinities from there. If we configure the volumes right, we should end up with much higher performance where we need it most, our capture and edit functions. Then, we should be able to delegate the slower systems to volumes that we allocate to archival and stock media functions. And finally a volume that would be optimized for sharing FCP project files. I honestly don't know how that conversation will go, and I don't yet have enough detail to make firm decisions. Again, I'll have to get more homework done this evening.
Wish me luck!
In the beginning, there was chaos and confusion. We have so many variables that I find myself referring back to my original workflow diagram, circa 1999. The basic high-level concepts of post-production are virtually the same anywhere you go. The devil, of course, is in the details. That's where things can easily get convoluted very quickly - especially in a team environment. In all, we have five (sometimes six) editors including myself. (I'm a working supervisor.)
The trick right now is to focus on the limitations and construction of Final Cut Server, XSAN and our Windows network. Two weeks ago I attended a three-day course on Final Cut Server called "FCSvr 201" at MacSpecialist in Chicago. The first day was 25% on the end user experience and 75% on what not to do and the major bugs that were present. Some of the bugs were fixed in FCSvr 1.5 - the help menu, to start. But what I really learned about implementing FCSvr is that you can't have too many details on your own workflow needs. Luckily for me, I have already done much of the legwork required to get the proper metadata set up.
The basics of FCSvr break down to metadata, both how it is related and what kind is present. At the highest level of the food chain, we have the Metadata Set. The set consists of many groupings of fields. That grouping, the level just below a Set, is called a Group. The Group houses the final level, the granular level of the metadata - fields. In order to get this all constructed correctly, you have to first look at all of the bits of info you need to have and track. Then you have to think of how to group that info and finally in which set that group belongs. In essence, you have to look at the big picture, then cut it with a jigsaw, then piece it all back together again. I have done database work before, with Filemaker and Access, but nothing to this granular level that is afforded by Final Cut Server.
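The Set → Group → Field hierarchy can be pictured as nested data; the set, group and field names below are made up for illustration and are not FCSvr's stock names:

```python
# Hypothetical example of the Set -> Group -> Field hierarchy.
metadata_set = {
    "name": "Broadcast Asset",  # the Metadata Set, top of the food chain
    "groups": [                 # Groups sit one level below the Set
        {"name": "Production", "fields": ["Project", "Producer", "Air Date"]},
        {"name": "Rights", "fields": ["License", "Expiration"]},
    ],
}

def all_fields(md_set):
    """Flatten a Set's Groups down to the granular field level."""
    return [f for g in md_set["groups"] for f in g["fields"]]
```

Sketching the hierarchy this way first - big picture, then the jigsaw cuts - makes it easier to spot which fields you're missing before anything is built in the server.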
The one thing I can say about the beginning of implementing such a powerful and vast installation is that you can't plan too much. You can't account well for unknowable variables, so the most effective approach is to gather as much feedback as you can and uncover every contingency anyone can think of to date. One thing you can count on is that the details will change down the line. So, any advance knowledge of production changes down the road would help tremendously.
I learned in my class that Final Cut Server comes stock with over 1200 fields. Before we started designing our case-study solution in class, I thought that was way more than anyone would need. Once involved, I discovered that there were key bits of info missing for our particular needs for Pretend Co. Even with all those fields, we still needed more than the stock 1200-odd fields. Most of our needed fields were already in FCSvr, though. Caveat: only the ones labeled as custom fields could be manipulated for our use. Some fields are populated by file-embedded metadata and cannot be made to store other data types. More on custom fields later.
Final Cut Server, as a custom solution, requires custom configuration. I am compelled to offer an analogy here about custom car tuning. You can buy a stock Mini Cooper for under $20k USD and it will drive and ride just fine. If you want that car to perform really well under harsh driving conditions, you want it customized. It'll cost you another $10k, but it will perform better, look sharper and handle corners tighter. Installing the server only nets you a stock car. If you want that car to drive and look certain ways, you must customize. With that, I'm going to be rolling up my sleeves and getting my hands dirty. I'll write about it here for others to learn from my mistakes and successes.