Small Tree works closely with Adobe to make sure our shared editing storage and networking products work reliably and smoothly with Adobe's suite of content creation software.
Since NAB 2013, we've collaborated to improve interoperability and performance and to test new features that give our customers a better experience.
Most recently, I had the chance to test out Adobe Anywhere in our shop in Minnesota.
Adobe Anywhere is designed to let users edit content that might be stored in a high bandwidth codec, over a much slower connection link. Imagine having HD or 4K footage back at the ranch, while you’re in the field accessing the media via your LTE phone and a VPN connection.
The way it works is that there's an Adobe Anywhere server sitting on your network that you connect to with Adobe Premiere, and this server compresses and shrinks the data "on the fly" so it can be fed to your machine much like a YouTube video. Except you are scrubbing, editing, cutting, dubbing, and doing all of the other things you might need to do during an edit session.
This real-time compression/transcoding happens because the Adobe Anywhere system is taking advantage of the amazing power of GPUs. Except rather than displaying the video to a screen, the video is being pushed into a network stream that’s fed to your client.
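To put rough numbers on that shrinking, here's a back-of-the-envelope sketch. The bitrates are ballpark assumptions for illustration only, not Adobe's published figures:

```python
# Ballpark bitrates in Mbit/s -- illustrative assumptions, not Adobe's numbers.
PRORES_HQ_1080P = 220   # full-rate mezzanine codec the server reads from storage
PROXY_STREAM = 5        # compressed stream the server pushes to the client
WIFI_BUDGET = 50        # usable throughput on a decent Wi-Fi link

# The proxy fits comfortably in the Wi-Fi budget; the original would not.
assert PROXY_STREAM < WIFI_BUDGET < PRORES_HQ_1080P
print(f"server-side read is ~{PRORES_HQ_1080P // PROXY_STREAM}x the client stream")
```

The point is simply that the server does the heavy lifting at full rate while the client only ever sees the small stream.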
I tested the system with some ProRes promotional videos we've used at trade shows in the past, and did my editing over Wi-Fi.
What I found was that the system worked very well. I could see that the Adobe Anywhere system was reading the video from Small Tree’s shared storage at full rate, then pushing it to my system at a greatly reduced rate. I had no trouble playing, editing and managing the video over my Wi-Fi connection (although Adobe recommends 1Gb Ethernet as the minimum connectivity for clients today).
This type of architecture is very new, and there are caveats. For example, if you are very far from the server or running over a very slow link (like a VPN connection), latency can make certain actions take a very long time (like loading an entire project, or using Adobe's Titler app, which requires interactivity). Adobe cautions that latencies of 200 ms or more will lead to a very poor customer experience.
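Latency hurts because chatty operations pay the round-trip cost serially, no matter how fast the link's raw bandwidth is. A toy calculation (the request count is made up for illustration):

```python
def serial_time_ms(round_trips, latency_ms):
    # Each sequential request/response pays the full round-trip latency;
    # raw bandwidth doesn't help when the operations can't overlap.
    return round_trips * latency_ms

# Opening a project that issues 500 sequential metadata requests:
lan = serial_time_ms(500, 1)     # 1 ms round trips on a local network -> 500 ms
wan = serial_time_ms(500, 200)   # the 200 ms danger zone -> 100,000 ms
print(lan, wan)
```

Half a second versus more than a minute and a half, for exactly the same work.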
Additionally, even though the feed to the clients is much lower bandwidth (to accommodate slower links), the original video data still needs to be read in real time at full speed. So there are no shortcuts there. You still need high-quality, low-latency storage for people to edit from. You just have a new tool to push that data, via real-time proxies, over longer and slower links.
All in all, I found the technology to be very smooth and it worked well with Small Tree’s shared network storage. I’m excited to see the reach of Small Tree shared storage extended out to a much larger group of potential users.
For a demonstration of Adobe Anywhere over Small Tree shared storage, visit us at the
NAB Show in Las Vegas this April (Booth SL11105).
One day, when we're sitting in our rocking chairs recounting our past IT glories ("Why, when I was a young man, computers had ‘wires’”), we'll invariably start talking about our storage war stories. There will be so many. We'll talk of frisbee-tossing stuck disks or putting bad drives in the freezer. We'll recount how we saved a company’s entire financial history by recovering an alternate superblock or fixing a byte-swapping error on a tape with the "dd" command. I'm sure our children will be transfixed.
No…no, they won't be transfixed, any more than we would be listening to someone tell us how their grandpa's secret pot roast recipe starts with "Get a woodchuck...skin it." You simply have to be in an anthropological state of mind to listen to something like that. More likely, they'll have walked into the room to ask you for your wifi password. (Of course, only us old folks will have wifi. Your kids are just visiting. At home they use something far more modern and futuristic. It'll probably be called iXifi or something.)
Unfortunately for us, many of these war story issues remain serious problems today. Disks “do” get stuck and they “do” often get better and work for a while if you freeze them. It's a great way to get your data back when you've been a little lazy with backups.
Another problem is fragmentation, and that's what I want to focus on today.
Disks today are still spinning platters with rings of "blocks" on them, where each block is typically 512 bytes. Ideally, as you write files to your disk, those bytes are written around the rings so you can read and write the blocks in sequence. The head doesn't have to move. Each new block spins underneath it.
Fragmentation occurs because we don't just leave files sitting on our disk forever. We delete them. We delete emails, log files, temp files, render files, and old projects we don't care about anymore. When we do this, those files leave "holes" in our filesystems. The OS wants to use these holes. (Indeed, SGI used to have a real-time filesystem that never left holes. All data was written at the end. I had to handle a few cases where people called asking why they never got their free space back when they deleted files. The answer was "we don't ever use old holes in the filesystem. That would slow us down!")
To use these holes, most operating systems use a "best fit" algorithm. They look at what you are trying to write, and try to find a hole where that write will fit. In this way, they can use old space. When you're writing something extremely large, the OS just sticks it into the free space at the end.
The problem occurs when you let things start to fill up. Now the OS can't always find a place to put your large writes. If it can't, it may have to break that large block of data into several smaller ones. A file that may have been written in one contiguous chunk may get broken into 11 or 12 pieces. This not only slows down your write performance, it will also slow down your reads when you go to read the file back.
To make matters worse, this file will remain fragmented even if you free more space up later. The OS does not go back and clean it up. So it's a good idea not to let your filesystems drop below 20% free space. If this happens and performance suffers, you're going to need to look into a defragmentation tool.
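The allocation behavior described above is easy to sketch. Here's a toy model of best-fit allocation; the hole list and file sizes are illustrative, and real allocators track extents, bitmaps, and much more:

```python
# Toy model of "best fit" block allocation. Free space is a list of
# (start, length) holes; all numbers are made up for illustration.

def best_fit(holes, size):
    """Return the index of the smallest hole that fits `size`, or None."""
    fits = [(length, i) for i, (start, length) in enumerate(holes) if length >= size]
    return min(fits)[1] if fits else None

def write_file(holes, size):
    """Allocate `size` blocks, fragmenting across holes when no single hole fits."""
    extents = []
    while size > 0:
        i = best_fit(holes, size)
        if i is None:
            # No hole is big enough: fill the largest hole and keep going.
            i = max(range(len(holes)), key=lambda j: holes[j][1])
        start, length = holes[i]
        used = min(size, length)
        extents.append((start, used))
        holes[i] = (start + used, length - used)
        if holes[i][1] == 0:
            holes.pop(i)
        size -= used
    return extents

# Plenty of free space: a 10-block file lands in one contiguous extent.
print(write_file([(0, 100)], 10))               # -> [(0, 10)]

# A fairly full disk with only small holes: the same file gets fragmented.
print(write_file([(100, 4), (300, 6), (900, 3)], 10))   # -> two extents
```

Once the big holes are gone, every large write gets chopped up, which is exactly why keeping some free space matters.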
Soon, this issue won't matter to many of us. SSDs (Solid State Disks) fragment just like spinning disks, but it doesn't matter nearly as much. SSDs are more like Random Access Memory in that data blocks can be read in any order, equally fast. So even though your OS might have to issue a few more reads to pull in a file (and there will be a slight performance hit), it won't be nearly as bad as what a spinning disk would experience. Hence, we'll tell our fragmentation war stories one day and get blank looks from our grandkids (What do you mean "spinning disk?" The disk was “moving??”).
Personally, I long for the days when disk drives were so large they would vibrate the floor. I liked discovering that the nighttime tape drive operator was getting hand lotion on the reel-to-reel tape heads when she put the next backup tape on for the overnight runs. It was like CSI. I'm going to miss those days. Soon, everything will be like an iPhone and we'll just throw it away, get a new one, and sync it with the cloud. Man, that sucks.
Follow Steve Modica and Small Tree on Twitter @smalltreecomm. Have a question? Contact Small Tree at 1-866-782-4622.
Now that we’re several months removed from Apple’s introduction of OS X Mavericks and we've all tested the waters a little, I wanted to talk about video editing software and how the various versions play with NAS storage like we use at Small Tree.
Avid has long since released Media Composer 7, and from what I've seen, their AMA support (support for non-Avid shared storage) continues to improve. There are certainly complaints about the performance not matching native MXF workflows, but now that they've added read/write support, it's clear they are moving in a more NAS-friendly direction. With some of the confusion going on in the edit system space, we're seeing more and more interest in MC 7.
Adobe has moved to their Creative Cloud model, and I've noticed it has made it much easier to keep my systems up to date. All of my test systems are either up to date or telling me they need an update, so I can be fairly certain I'm working with the latest release. That's really important when dealing with a product as large and integrated as the Adobe suite. You certainly don't want to mix and match product revisions when trying to move data between After Effects and Premiere.
Another thing I've really grown to like about Adobe is their willingness to work with third party vendors (like Small Tree) to help correct problems that impact all of our customers. One great example is that Adobe worked around serious file size limitations present in Apple's QuickTime libraries. Basically, any time an application would attempt to generate a large QuickTime file (larger than 2GB), there was a chance the file would stop encoding at the 2GB mark. Adobe dived into the problem, understood it, and worked around it in their applications. This makes them one of the first to avoid this problem and certainly the most NAS friendly of all the video editing applications out there.
Lastly, I've seen some great things come out of FCP X in recent days. One workflow I'm very excited about involves using "Add SAN Location" (the built-in support for SAN volumes) and NFS (Network File System). It turns out that if you mount your storage via NFS and create "Final Cut Projects" and "Final Cut Events" folders within project directories inside that volume, FCP X will let you "add" them as SAN locations. This lets you use very inexpensive NAS storage in lieu of a much more expensive Fibre Channel solution. Shops that find FCP X fits their workflow will find that NFS NAS systems definitely fit their pocketbooks.
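If you want to experiment with this layout, the folder structure itself is trivial to script. A sketch, where the mount point and project name are placeholders you'd substitute with your own (the folder names are the ones FCP X looks for, as described above):

```python
import os

def make_fcpx_layout(mount_point, project):
    """Create the 'Final Cut Projects' / 'Final Cut Events' folders
    inside a project directory on an NFS-mounted volume.
    `mount_point` and `project` are placeholder paths, not fixed values."""
    base = os.path.join(mount_point, project)
    paths = [os.path.join(base, "Final Cut Projects"),
             os.path.join(base, "Final Cut Events")]
    for p in paths:
        os.makedirs(p, exist_ok=True)   # idempotent: safe to re-run
    return paths

# Example with a hypothetical mount point:
# make_fcpx_layout("/Volumes/nfs_share", "MyShow")
```

Once the folders exist, FCP X's "Add SAN Location" can pick up the project directory.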
So as you move forward with your Mac platforms into Mavericks and beyond, consider taking a second look at your NLE (Non-Linear Editor) of choice. You may find that other workflow options are opening up.
With the New Year festivities well behind us, today seems like as good a time as any to chat about where video editing storage is (or should be) headed in 2014.
First, I’m really excited about FCoE. FCoE is great technology. It's built into our (Small Tree) cards, so we get super fast offloads. It uses the Fibre Channel protocol, so it's compatible with legacy Fibre Channel. You can buy one set of switches and do everything: Fibre Channel, 10Gb and FCoE (and even iSCSI if you want).
Are there any issues to be concerned about with FCoE? One: the switches are too darn expensive! I've been waiting for someone to release an inexpensive switch, and it just hasn't happened. Without that, I'm afraid the protocol will take a long time to catch on.
Second, I'm quite sure SSDs are the way of the future. I'm also quite sure SSDs will be cheaper and easier to fabricate than complex spinning disks. So why aren’t SSDs ubiquitous yet? Where are the 2TB and 4TB SSDs that fit a 3.5" form factor? Why aren't we rapidly replacing our spinning disks with SSDs as they fail?
Unfortunately, we're constrained by the number of factories that can crank out the NAND flash chips. Even worse, many things need them: smartphones, desktop devices, SATA disks, SAS disks, and PCIE disks. With all of these clawing at the market for chips, it's no wonder they're a little hard to come by. I'm not sure things will settle down until things "settle down" (i.e., a certain form factor becomes dominant).
Looking back at 2013, there were several key improvements that will have a positive impact on shared storage in 2014. One is Thunderbolt. Small Tree spent a lot of time updating its drivers to match the new spec. Once this work was done, we had some wonderful new features. Our cards can now seamlessly hot-plug and unplug from a system. So customers can walk in, plug in, connect up and go. Similarly, when it’s time to leave, they unplug, drop their laptop in their backpack, and head home. I think this opens the door to a lot more 10Gb Ethernet use among laptop and iMac users.
Apple’s new SMB implementation in 2013 was also critical for improvements in video editing workflow. Apple’s moving away from AFP as their primary form of sharing storage between Macs, and the upshot for us has been a much better SMB experience for our customers. It’s faster and friendlier to heterogeneous environments. I look forward to seeing more customers moving to an open SMB environment from a more restrictive (and harder to performance tune) AFP environment.
So as your editing team seeks to simplify its workflow to maximize its productivity in 2014, keep these new or improved technological enhancements in mind. If you have any questions about your shared storage solution, don’t hesitate to contact me at firstname.lastname@example.org.
Back in 2003, on September 24th, I drove up to St. Paul, Minnesota and filed paperwork to make Small Tree a Minnesota LLC.
When we started, there were 6 of us and I'm not sure we knew exactly what our plan was. I knew we wanted to write kernel drivers for "high end" networking stuff (like Intel's brand new 10Gb Ethernet cards). I didn't think much beyond that. I figured if we built it "they would come" (as the saying goes). I also needed a good excuse to buy one of the new G5 Power Macs (and make it tax deductible).
Since then, Small Tree has written drivers for 8 different Intel Ethernet chips, LACP drivers (for 10.3, back before Apple had their own), and iSCSI, AoE and FCoE drivers. We've also done a lot of work on storage integration and kernel performance improvements.
I'm very proud of all the hard work all of the people at Small Tree have put in and of all the great things we've accomplished. It's been quite a roller coaster ride over the years. Looking back, I'd absolutely do it again. We've got some great people and some great customers and I still love working here, even after 10 years :)
Pope Francis is out there saying lots of interesting things. Like him or not, he’s getting a lot of attention. He’s certainly changing the tone of the Catholic Church.
That got me thinking. What’s changing the tone of computing these days? What needs to change, what’s in the middle of changing and what’s “gone forever?”
First, let’s talk about what needs to change.
The application appliance model is here to stay on phones and tablets. Being an accomplished Android hacker myself – I built some of the first Android kernels that could talk to military hardware – I understand the value of an open, flexible device. However, I don’t want to hand that to my dad. My dad wouldn’t get it. All of that value would be lost on him, and I’d be tasked with figuring out how to build him a one-of-a-kind, customized Android phone that could email his pictures, sync with whatever Windows revision he’s running today, and let him connect to his Yahoo account. As simple as that sounds, I’d much rather get him an iPhone knowing the answer to any of these issues is a Google search away. His iPhone will perpetually be up to date and probably do what it’s supposed to without too much fuss.
So now we need this model for computing. As much as I want to handcraft my laptop so I can be sure I’m running the very latest OpenCL with a few cool kernel fixes I read about last week, I don’t really “need” to do that. That falls into the category of “hobbyist.” We don’t run out and tweak our cars so we can get to work faster. If we are tweaking our cars, it’s to scratch some itch that has nothing to do with getting to work.
The second thing that needs to change is that Microsoft needs to stop “Chasing the gauges.”
In the flight simulator world, “Chasing the gauges” is the act of looking at your gauges and trying to manipulate the plane to get your speed, altitude and angle where you want them. The problem is the gauges lag a little and so you invariably overcompensate. You tilt to the right until the horizon gauge shows level, but by then you are too far to the right, so you begin this back and forth oscillation and never quite settle down to level.
I’m no Windows fan. I’ve been doing Unix, Linux and Mac OS X for some 25 years. However, in all that time, I’ve had no choice but to map all my knowledge over to Windows. I had to learn how to setup networks, trace system calls, recover, reinstall, and capture crash dump information to solve problems. I had no choice. Windows is a fixture in the computing world and if you work with computers, you have to work with Windows.
What drives me crazy is that Microsoft keeps trying to “fix” all the things people complain about, while finding ways to make my background Windows knowledge useless. They change the control panel, remove the Start button, and allow vendors to replace well-known interfaces (like the control panel) with their own utilities. It’s as if Microsoft is reacting while still in the “backlash” cycle, so their changes simply start a new backlash and on we go, oscillating up and down, hoping to get our plane level.
I think Microsoft needs to embrace a mechanism for consolidating all of their system administration “stuff” (à la Apple’s System Preferences) and settle on it quickly, so users of alternate platforms can easily adapt their knowledge to manage Windows too.
What’s in the middle of changing?
I think the move to solid-state storage is a no-brainer. It may seem like the drives are outrageously expensive, but consider that SSDs are mostly “printed” at large factories. NAND flash chips are a very well understood technology and the ability to print them is going to grow quickly over time. After all, these guys are operating at capacity. There’s little risk in building a new plant when you can’t meet existing demand.
Spinning disks may be cheaper today (and have larger capacity), but they are delicate, heavy and require lots of precision machining to build. They have supply chains requiring aluminum and precision machined parts as well as electronics. As we have seen, floods and storms in Southeast Asia can disrupt their supply. There will soon come a crossover point where SSDs are inexpensive enough and large enough that there’s simply no reason to continue trying to use spinning disks.
Another technology that is quickly becoming required rather than optional is a good GPU (graphics processing unit). In the olden days, the only people that cared about their GPU were high-end video/CAD customers and gamers. They’d get all slobbery about “polygons per second.” GPUs were largely one-way affairs. You could write to them very quickly (for rapid on-screen updates), but reading data back off of them was slow and expensive.
Today, this is no longer the case. PCIE and new programming interfaces like CUDA and OpenCL allow us to interact with GPUs as if they were mini Cray supercomputers. We can feed them complex matrix operations (whether it’s weather data, game data or Bitcoin hashes) and they spit back answers lickety-split. They are much faster than general-purpose CPUs for this kind of activity. Having a powerful GPU (or more than one) is rapidly becoming a requirement for any sort of powerful workstation.
What has changed?
The desktop computer appears to be dead. There are still computers that have a desktop form factor, but those computers are really workstations. They have a specific purpose and their users probably have a personal laptop they tote around with them when not using that workstation. For me, my work laptop and my personal laptop are largely the same thing. It’s just too inconvenient to try and manage two independent systems and keep things synched up.
Following this evolution to mobile computing, more and more of our stuff is online. Dropbox and Google Drive are seamless to use, but they aren’t quite large enough or fast enough to store all our data yet. Over time, though, our network speeds are going to go up, and the size and performance of cloud storage is going to explode. Our data requirements will probably plateau. After all, once you can video your entire day in 4K along with all your vital signs, what more data can you generate?
There will come a time when it’s just cheaper and easier to store all our stuff online. Rather than deleting it one day because our disks got full, the stuff will magically migrate off to older, slower storage back at some data center, to be recalled whenever future generations want to have a look.
Visiting Australia is a once-in-a-lifetime journey for many of us in North America. It's a 14-hour (very expensive) flight from LAX, and because of the time zone shift, if you leave on Saturday, you arrive on Monday. You can forget Sunday ever existed. (Although on the way back, you arrive in LA before you even left Australia.)
I had the great pleasure of visiting Sydney, Australia twice this year. First, I flew down to do some sales training with Adimex and Digistor in March; then, last week, I returned to help Adimex and Digistor at their big SMPTE trade show at the Sydney Convention Center. I spoke with customers, gave a presentation every day about real-time storage, and demoed the Small Tree TitaniumZ Graphical User Interface for interested customers.
First, I have to say that Adimex and Digistor are great companies with a great bunch of professionals. They know their products and treat their customers well. Second, I think the show was a wonderful opportunity – for me personally and professionally. I didn't realize that such a large show took place in Australia. We had customers from China, New Zealand, Japan, Singapore and even Vanuatu (not to mention from all over Australia). People traveled a very long way to visit this show.
Customers were very interested in Small Tree's TitaniumZ product. I think the two most important features people asked about were ZFS expansion, which allows you to add storage without rebuilding your RAIDs, and the new, integrated version of Mint from Flavoursys. Mint allows customers to do Avid project sharing and bin locking, so multiple Avid users can now seamlessly work together using Avid MC 6 or 7 on their Small Tree storage array. I definitely sensed a lot of interest in Avid now that MC 7 has been released and Adobe has announced their shift to the cloud subscription model.
While I was busy answering questions at the trade show booth, I also found time for a lot of actual running in Sydney. I made sure to see the Sydney Opera House as well as the Botanical Gardens. Even in winter, Sydney is one of the most beautiful cities you can visit anywhere in the world. If you get a chance to go, take it!
If you’d like more information on what Small Tree was showing at the SMPTE conference – its TitaniumZ line of products – visit us at www.Small-Tree.com.
1. When you have lots of media coming in from various cameras to your shared storage, make sure you are ingesting that media using appropriate software.
We have seen a few cases where people drag files in from the camera using the Finder, rather than using the camera vendor's import software.
When you do this, the media can sometimes have the "User Immutable" flag set. This flag prevents users from accidentally deleting files, even if they have appropriate permissions. You can see this flag via Right Click->get info. It's the "Locked" flag.
While this makes sense when the media is on the camera (where the vendor expects you to do all deleting through the camera software), it does not make sense on your storage. However, the flag is persistent and will also be set on any copies you make, and any copies you make of those copies. It will also prevent the original files from being deleted when you "move" a large batch of material from one volume to another!
Obviously this will waste a lot of space and be very frustrating down the line when you have thousands of media files you can't delete. You'll also find that unsetting the Locked bit via "get info" is way too cumbersome for 10,000 files.
One simple answer is the command line. Apple has a command (as does FreeBSD) called "chflags". If you can handle using the "cd" command (Change Directory) to navigate to where all your Locked files are, you can run:
chflags -R nouchg *
This will iterate through all the directories and files (starting from whatever directory you're in) and clean off all the "Locked" bits.
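If you'd rather script the cleanup, Python exposes the same BSD chflags(2) call that the command-line tool uses. A sketch, with the caveat that os.chflags only exists on BSD-derived systems like macOS and FreeBSD, so the guard makes it a harmless no-op elsewhere:

```python
import os
import stat

def unlock_tree(root):
    """Clear the 'user immutable' (Finder 'Locked') flag on everything
    under `root`. Returns the number of entries that were unlocked.
    On platforms without chflags (e.g. Linux), this is a no-op."""
    cleared = 0
    if not hasattr(os, "chflags"):
        return cleared
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path, follow_symlinks=False)
            flags = getattr(st, "st_flags", 0)
            if flags & stat.UF_IMMUTABLE:
                # Same effect as `chflags nouchg` on this one entry.
                os.chflags(path, flags & ~stat.UF_IMMUTABLE)
                cleared += 1
    return cleared
```

Calling `unlock_tree("/Volumes/MyShare/Ingest")` (a placeholder path) is equivalent to the recursive chflags command above, with the bonus of a count of how many files were actually locked.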
2. Edit project files from your local machine, rather than shared storage.
There are a number of reasons to do this, and as time goes on, I seem to find more.
First, it's just safer. Not all apps lock project files. So it's possible that if you have enough editors all sharing the same space and everyone is very busy and the environment is hectic, someone could come along and open a project you already have open. If they "save" their copy after you save yours, your changes will be lost. It would be no different if it was a Word Document or Excel Spreadsheet. When multiple people save the same file, the last guy to save wipes out the first guy. (This is not a problem for shared media like clips and audio since those files are not being written, just pulled into projects).
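This last-writer-wins race is exactly what file locking prevents. Here's a minimal sketch of advisory locking with fcntl.flock, purely to illustrate the idea; it is not what any NLE actually does internally, and fcntl is Unix-only:

```python
import fcntl

def save_with_lock(path, data):
    """Write `data` to `path` while holding an exclusive advisory lock.
    A second process calling this blocks until the first finishes, so two
    concurrent saves can't interleave mid-write (though without merging,
    the later save still replaces the earlier one)."""
    # "a+" creates the file if needed without truncating it before we own the lock.
    with open(path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the lock
        f.seek(0)
        f.truncate()                     # now it's safe to replace the contents
        f.write(data)
        f.flush()
        fcntl.flock(f, fcntl.LOCK_UN)
```

Because the lock is advisory, it only protects you if every writer cooperates, which is precisely why apps that don't lock project files leave you exposed.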
Second, apps like Avid and FCP 7 all have foibles with saving to remote storage. Avid doesn't like to save "project settings" over Samba or AFP (although NFS and "Dave" from Thursby work fine). FCP seems to mess up its extended attributes when it saves, leading to "unknown file" errors and other strange behavior. (When this happens, you can easily fix it. See Small Tree Knowledge Base solution here: http://www.small-tree.com/kb_results.asp?ID=43).
Lastly, you may have different versions of apps on different machines. I recently had a customer that was using FCP 7.0 and attempting to open files written by FCP 7.0.3. The older app was unhappy with the newer format files and it created some strange error messages. While this would have been a problem no matter how the files were accessed (locally or over the network), the network share made it more confusing since it was not clear that the files came from another system. Had the user received the projects on a stick or via email, the incompatibility would have been much more obvious from the start.
If you have any questions regarding shared storage and improving your workflow, do not hesitate to contact me at email@example.com.
As many of you know, we do a lot of military systems integration at Small Tree. When I'm not working on networking or storage performance, I get to play with little embedded things to make radio networks better/faster/smarter.
So this week I was very happy to have a large order come in that needed 20 new units shipped out to the Army.
I ordered all the parts and started assembling these delicate little routers. Then I discovered they didn't work. They "almost" worked. Data would sometimes flow in one direction, sometimes not at all. Sometimes the radios would be seen, sometimes not.
My error was in ordering the next "new" ARM CPU. It's less expensive and uses much less power (soldiers don't like to carry batteries). However, this new CPU obviously doesn't work as well as the older one.
I'm sure it's some whacko timing issue. I'll have to dig into it. But for now, I had to overnight the old CPUs so I could get this order out!
Apple has always been on the leading edge of connectivity for their systems.
Back in 2003, before we had formed Small Tree as a company, I can recall drooling over a PowerBook laptop with an integrated Gigabit port. That was a crazy thing to have on a laptop at the time. Gigabit was still a little weird, very expensive, and not common as a drop at anyone’s desk. Yet here Apple was, putting it on a laptop.
Thunderbolt is a similarly aggressive move. It puts a great deal of IO horsepower on some very small systems.
Firstly, let’s consider what Thunderbolt is. Thunderbolt is a 4X (4 lane) PCIE 2.0 bus. It’s equal in performance (and protocol) to the top two slots of a traditional tower Mac Pro. Along with that 4X pipe, there’s a graphics output pipe for a monitor. These pipes are not shared! So using a daisy-chained monitor will not impinge on any attached IO devices.
Thunderbolt is capable of moving data at 10Gbits/sec FULL DUPLEX, meaning data can move in two directions at the same time, giving the pipe a total bandwidth of 20Gbits/sec.
As I read through the forums and opinion articles on Thunderbolt, one of the themes that pops up is “It’s Apple proprietary and expensive. Just use USB 3.0.” This is a reasonable point. USB 3.0 is capable of 4.8Gbits/sec (about half of the speed of Thunderbolt). Further, there are plans to speed up USB 3.0 to 10Gbits/sec to match Thunderbolt. So given these factors (and the low cost of most USB devices), it seems like an obvious choice.
However, there are some reasons that Thunderbolt may win the day for external high-speed connectivity (and relegate USB to its traditional low-end role).
First of all, most IO chips (Ethernet, SATA, SAS) are manufactured with a native PCIE backend. The chips are natively built to sit on a PCIE bus. So not only will you save the overhead of an additional protocol, the guys writing the code to support these devices only need to write one driver (PCIE). It just works whether the device is on a card or in a Thunderbolt toaster.
Another advantage of Thunderbolt is its power budget. Often, devices are powered by the port itself (very common with USB). USB can provide 4.5 watts of power to attached devices, whereas Thunderbolt offers a full 10 watts.
Lastly (and this is probably the most interesting thing about PCIE and Thunderbolt) is that Thunderbolt is a switched/negotiated protocol that is extremely flexible. Cards that want a 16X slot can work in a 4X slot. PCIE switches can (and do) exist to allow multiple machines to talk to one PCIE based device (like a RAID). So imagine a time in the future when devices can be connected to a “switch” in a back room and multiple systems can see them. Imagine those systems can have multiple connections to boost their bandwidth.
Thunderbolt may not be everywhere yet, but it’s really the first imaginings of a new way to handle IO outside of the “tower” type machines. I think it is easily the best choice for Mac users and will likely offer some amazing benefits in the next generation.