Our XSAN volumes are set up all wrong, unfortunately. Apparently the person put in charge of our XSAN implementation had no training, and now we're faced with some pretty major roadblocks. I had previously purchased a book on XSAN, and he's studying it now. I also found some very helpful info on the Xsanity wiki page, which has diagrams of sample installs. We have a few serious flaws in our layout that will cause us quite a bit of pain. However, if we don't suffer through the process of reconfiguring XSAN, we will certainly be on a path to failure with Final Cut Server. It is a bandwidth hog when used properly, and we cannot afford many hiccups in production.
First off, each RAID is its own volume and contains and controls its own metadata. I can see from the examples that we should have built our RAIDs (aka LUNs) so they were grouped into same-sized sets (aka pools) spanning different controllers. XSAN treats the same-sized LUNs in a pool as disks and writes to them in parallel, emulating the effect of a RAID 0 stripe. As currently configured, we are getting about 120-150 MB/sec. Properly configured, we should be getting two to three times that performance.
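To make the striping math concrete, here's a back-of-the-envelope sketch of why pooling same-sized LUNs should multiply throughput. The per-LUN rate matches what we measure on a single-RAID volume today; the LUN count and efficiency factor are assumptions for illustration, not measurements from our SAN.

```python
# Rough throughput estimate for striping LUNs across controllers.
# ~130 MB/s per LUN reflects our current single-RAID volumes; the
# 3-LUN pool and 0.85 efficiency factor are hypothetical.

def striped_throughput(per_lun_mb_s, luns_in_pool, efficiency=0.85):
    """Idealized aggregate rate when writes are striped in parallel
    across the LUNs in a pool (RAID-0-like), minus some overhead."""
    return per_lun_mb_s * luns_in_pool * efficiency

single = striped_throughput(130, 1, efficiency=1.0)  # today: one LUN per volume
pooled = striped_throughput(130, 3)                  # three LUNs in one pool

print(f"single LUN: {single:.0f} MB/s, 3-LUN pool: ~{pooled:.0f} MB/s")
```

Even with overhead, a three-LUN pool lands in the 2-3x range the documentation's examples suggest.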
I think what we'll try to do is move as much media as we can to as few volumes as possible. Then, when we get our new RAID in a few weeks, we'll build a new single volume using the techniques outlined in the XSAN documentation. I expect this entire process will take about a week in all, once the new system is here. The order was placed for the same-sized drives, so at least it will match what we already have. We also need to be sure to earmark part of each RAID for metadata. According to Apple's documentation, we need about 1 GB of metadata storage per terabyte of usable space. We are using far more than that on our current configuration, so between that change and consolidating the drives, we stand to regain a major amount of usable space.
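The 1 GB-per-terabyte guideline makes the reservation easy to estimate. A quick sketch, with hypothetical volume sizes rather than our actual capacities:

```python
# Apple's rule of thumb for Xsan: reserve roughly 1 GB of metadata
# space per 1 TB of usable storage. Capacities below are made up.

GB_PER_TB = 1

def metadata_reservation_gb(usable_tb):
    return usable_tb * GB_PER_TB

for tb in (5, 10, 20):
    print(f"{tb} TB volume -> reserve ~{metadata_reservation_gb(tb)} GB for metadata")
```

Compare that to whatever each RAID currently burns on metadata, and the reclaimable space adds up fast across five volumes.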
These changes will be a great deal of work, but they will net us much better performance and about one extra terabyte of space. On top of that, I'm not about to build an entirely new database and add tons of files to a far less than optimal infrastructure. I take the perspective that you cannot build a house atop a shifting foundation.
Another possible flaw is that our Macs are not on an isolated LAN. I have seen folks argue, and the documentation supports, that SAN systems need to be on a LAN separate from the rest of the company. I haven't seen any concrete cases explaining what this recommendation is based upon, and our MIS director seems to think such a setup would be excessive. I admit that I cannot argue decisively on this issue. If anyone has documentation other than what I mentioned here, please do pass it along.
Since we are mixing our Final Cut Server into an existing workflow, we have a couple of infrastructure issues to go over tomorrow. I'll be looking into these in even finer detail this evening. What I learned in the class is that Final Cut Server does not play well with the so-called magic triangle, a method admins commonly use to sync Open Directory (Mac server) groups with Active Directory (Windows server) groups. The bottom line is that we have to choose one method of user authentication or the other; we cannot combine the two. Why some admins like the hybrid approach is still a bit of a mystery to me. Rather than get bogged down by that detail, we'll be debating the pros and cons of each of the two methods.
I like the idea of not duplicating work, so I like the thought of using Active Directory, since our tech support crew already maintains that system for our Windows network. Since 98% of our network consists of Windows computers, it stands to reason that the Macs assimilate to the existing system and not the other way around. Make sense? I think so. If we went with Open Directory, we'd have to first learn it, train others, and maintain a completely different authentication system. That means additional resources would have to be allocated to keep the AD and OD systems synchronized manually - yikes!
The downsides to using AD, as I understand them, are:
1. This is a very new feature and not yet stable. Certainly, we'll have to test early and often, which we would have done anyway.
2. Each Mac has to be configured for Kerberos authentication. From what I can tell, this doesn't seem to be a big deal and can probably be automated for new client systems.
3. Additional configuration of the server is required. Again, this looks like a fairly simple process and, in theory, would only have to be done once.
4. Each Windows client would have to be modified to include a Kerberos authentication file. This could likely be added to a login script for all Windows users within particular groups.
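For items 2 and 4, the file in question is a standard Kerberos configuration pointing clients at the AD domain controller. A minimal sketch of what that config might look like, assuming a hypothetical `EXAMPLE.COM` realm and `dc1.example.com` domain controller (our actual realm and KDC names would differ):

```ini
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = dc1.example.com
        admin_server = dc1.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM
```

Since the file is identical for every client in a realm, pushing it out via a login script or imaging workflow is the obvious automation path.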
All in all, I think we could do worse. At least in the cases I've read about, AD does work, but it's on the buggy side and takes extra setup to work in the first place. Given the benefits of AD authentication, it makes sense to move forward with it. Barring any major concerns or objections, I think this is the direction we'll take for group and user permissions on FCSvr.
The second major topic will be our XSAN configuration. Currently, it's not set up for optimal use: each of our RAIDs is a separate volume - five volumes in all. Now, I'm no XSAN admin, but I can see from our book that we need to separate the functions of XSAN first and customize the volumes and affinities from there. If we configure the volumes right, we should end up with much higher performance where we need it most: our capture and edit functions. Then we should be able to delegate the slower systems to volumes allocated to archival and stock media, and finally a volume optimized for sharing FCP project files. I honestly don't know how that conversation will go, and I'm not yet equipped to make good decisions on the details. Again, I'll have to get more homework done this evening.
Wish me luck!
In the beginning, there was chaos and confusion. We have so many variables that I find myself referring back to my original workflow diagram, circa 1999. The basic high-level concepts of post-production are virtually the same anywhere you go. The devil, of course, is in the details. That's where things can easily get convoluted very quickly - especially in a team environment. In all, we have five (sometimes six) editors including myself. (I'm a working supervisor.)
The trick right now is to focus on the limitations and construction of Final Cut Server, XSAN, and our Windows network. Two weeks ago I attended a three-day course on Final Cut Server called "FCSvr 201" at MacSpecialist in Chicago. The first day was 25% end-user experience and 75% what not to do and which major bugs were present. Some of the bugs have been fixed in FCSvr 1.5, starting with the help menu. But what I really learned about implementing FCSvr is that you can't have too much detail about your own workflow needs. Luckily for me, I have already done much of the legwork required to get the proper metadata set up.
The basics of FCSvr break down to metadata: how it is related and what kinds are present. At the highest level of the food chain is the Metadata Set. A Set consists of many groupings of fields; that grouping, the level just below a Set, is called a Group. The Group houses the final, granular level of the metadata: Fields. To get this all constructed correctly, you first have to look at all the bits of info you need to have and track. Then you have to think about how to group that info, and finally which Set each Group belongs in. In essence, you have to look at the big picture, cut it up with a jigsaw, then piece it all back together again. I have done database work before, with FileMaker and Access, but nothing at the granular level afforded by Final Cut Server.
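The Set > Group > Field hierarchy can be sketched as a simple data model. This is just a toy illustration of the containment relationship as I understand it; the set, group, and field names below are made up, not FCSvr's actual internals.

```python
# Toy model of FCSvr's metadata hierarchy: a Metadata Set contains
# Groups, and each Group contains Fields. Names are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Field:
    name: str
    data_type: str = "string"

@dataclass
class Group:
    name: str
    fields: List[Field] = field(default_factory=list)

@dataclass
class MetadataSet:
    name: str
    groups: List[Group] = field(default_factory=list)

    def all_fields(self):
        """Flatten the hierarchy to list every field in the Set."""
        return [f.name for g in self.groups for f in g.fields]

capture = MetadataSet("Capture", [
    Group("Job Info", [Field("Client"), Field("Job Number")]),
    Group("Media", [Field("Reel"), Field("Duration", "timecode")]),
])

print(capture.all_fields())  # ['Client', 'Job Number', 'Reel', 'Duration']
```

Working top-down like this - Set first, then Groups, then Fields - mirrors the "big picture, then jigsaw" process described above.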
The one thing I can say about beginning such a powerful and vast installation is that you can't plan too much. You can't account for unknowable variables, so the most effective approach is to gather as much feedback as you can and uncover every contingency anyone can think of. One thing you can count on is that the details will change down the line, so any advance knowledge of production changes would help tremendously.
I learned in my class that Final Cut Server comes stock with over 1,200 fields. Before we started designing our case-study solution in class, I thought that was far more than anyone would need. Once involved, I discovered there were key bits of info missing for the particular needs of our case-study company, Pretend Co. Even with all those fields, we still needed more than the stock 1,200-odd. Most of the fields we needed were already in FCSvr, though, with a caveat: only the ones labeled as custom fields could be repurposed for our use. Some fields are populated by file-embedded metadata and cannot be made to store other data types. More on custom fields later.
Final Cut Server, as a custom solution, requires custom configuration. I'm compelled to offer an analogy about custom car tuning. You can buy a stock Mini Cooper for under $20k USD, and it will drive and ride just fine. But if you want that car to perform really well under harsh driving conditions, you have it customized. It'll cost you another $10k, but it will perform better, look sharper, and corner tighter. Installing the server only nets you the stock car; if you want it to drive and look a certain way, you must customize. With that, I'm going to be rolling up my sleeves and getting my hands dirty. I'll write about it here so others can learn from my mistakes and successes.