My hopes for IBC this year and a great Labor Day offer from Small Tree

I’m heading out to IBC and there are a number of things I hope to see there. Of course, I’ve got customers asking me about SSDs, engineers working on 40Gb Ethernet, and people who want to bring it all together. Really, what’s the holdup?

My short wish list:

3.5” Server Chassis (8, 16, 24) with 12Gb SAS expanders onboard
2.5” Server Chassis (12 and 24) with 12Gb SAS expanders onboard
Balanced 40Gb switches that can legitimately aggregate 16 or 24 10Gb ports into 4 or 6 40Gb ports
4TB or larger SSDs that can handle enterprise workloads but cost less than $1000 per Terabyte
Thunderbolt 3 previews
40Gb Ethernet Adapters
8 or 10 Terabyte 7200 RPM SAS drives
New Wi-Fi technology that can run full duplex and offer backpressure and bandwidth reservation (can you imagine editing wirelessly?)

Obviously, I have a few of these technologies in hand already, but there are some major roadblocks to building a balanced server with them. SSDs are very expensive and still too small. We’ll need those 400MB/sec devices to justify putting 40Gb ports in a server.

For those of you that will be around next week after Labor Day, we are going to be running a special at Small Tree. If you purchase a TitaniumZ (8 or 16), we’re giving away two SANLink2 10Gb Ethernet Adapters. Using the two onboard 10Gb ports on the Titanium, you can immediately connect two clients and be editing over 10Gb. I think it’s a great back to school offer.

Contact Small Tree today to purchase your TitaniumZ system with two Promise SANLink2 10Gb Ethernet Adapters included - salesteam@small-tree.com or 866-782-4622. Purchase must be completed by 9/30/14.


Posted by: Steve Modica on Aug 26, 2014 at 9:04:53 am storage, networking

Data choke points and a cautionary tale

During a normal week, I help a lot of customers with performance issues. Some of the most common complaints I hear include:

“I bought a new 10Gb card so I could connect my Macs together, but when I drag files over, it doesn’t go any faster.”

“I upgraded the memory in my system because Final Cut was running slow, but it didn’t seem to help very much.”

“I bought a faster Mac so it would run my NLE more smoothly, but it actually seems worse than before.”

All of these things have something in common. Money was spent on performance, the users didn’t have a satisfying experience, and they would be much happier had the money been spent in the right place.

Of course, the first one is easy. Putting a 10Gb connection between two Macs and dragging files between them isn’t going to go any faster than the slowest disk involved. If one of those Macs is using an old SATA spinning disk, 40-60MB/sec would be a pretty normal transfer rate. A far cry from the 1000MB/sec you might expect from 10Gb Ethernet! Who wouldn’t be disappointed?

Similarly, the second case, where a user upgrades memory based on the anecdotal suggestion of a friend, is all too common. On the one hand, memory upgrades are typically a great way to go, especially when you run a lot of things simultaneously. More memory almost always means better performance. However, that assumes memory was actually the bottleneck and not some other, more serious problem.

In the case of Final Cut 7, which is a 32 bit application, more memory isn’t going to help Final Cut directly. In fact, it’s much more likely that Final Cut would run better with a faster disk and perhaps a faster CPU. Since FCP 7 didn’t use GPU offload, even moving to a better graphics card might not have delivered a huge gain.

The last one, where buying a faster Mac actually made things worse, is a classic case of mismatched performance tuning. For this customer, the faster Mac also had a lot more memory. It turns out that Mac OS X will dynamically increase the amount of data it will move across the network in a burst (the TCP Receive Window). This resulted in the network overrunning Final Cut, causing it to stutter. The solution? Dial back the receive window to make sure FCP 7 can keep up. This will be corrected by some other changes in the stack that are coming soon. One day, slower applications will be able to push back on the sender a little more directly and a little more effectively than today.
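If you want to experiment with this yourself, the receive buffer is exposed through sysctl on OS X. This is only a rough sketch of the kind of adjustment involved, not Small Tree's exact fix, and the value shown is purely illustrative (the setting does not survive a reboot):

sysctl net.inet.tcp.recvspace                  # show the current default receive buffer
sudo sysctl -w net.inet.tcp.recvspace=131072   # dial it back so a slow application can keep up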

These cases bring to mind a discussion I had with a 40Gb Ethernet vendor back at NAB in April. They wanted me to use their cards and perhaps their switches. The obvious question: Don’t your users want the speed of 40Gb Ethernet? Wouldn’t they want to run this right to their desktops?!

Of course they would. Everyone wants to go fast. The problem is that those 40Gb ports are being fed by storage. If you look closely at what RAID controllers and spinning disks can do, the best you can hope for from 16 drives and a RAID card is around 1GB/sec. A 40Gb card moves about 4GB/sec. So if I sold my customers 40Gb straight to their desktops, I would need somewhere around 64 spinning disks just to max out ONE 40Gb port. It could be done, but not economically. It would be more like a science project.

Even worse, on Macs today, those 40Gb ports would have to connect with Thunderbolt 2, which tops out around 2.5GB/sec and is yet another choke point that would lead to disappointed customers and wasted money.

I think 40Gb Ethernet has a place. In fact, we’re working on drivers today. However, that place will depend on much larger SSDs that can provide 1GB/sec per device. Once we’re moving 8 and 16GB/sec either via a RAID card or ZFS logical volumes, then it will make sense to put 40Gb everywhere. The added advantage is that waiting to deploy 40Gb will only lead to better and more stable 40Gb equipment. Anyone remember the old days of 10Gb back in 2003 when cards were expensive, super hot, and required single mode fiber?

Posted by: Steve Modica on Aug 11, 2014 at 12:47:59 pm networking, storage

Upcoming User Group Meetings

Hi All

We've been developing some new software and hardware features for the TitaniumZ line and I'd like to come out and speak to your users group about them.

If you have a Final Cut, Adobe Premiere or Avid group and are interested in hearing about storage futures, Small Tree's new products, 40Gb Ethernet, or anything else I might know something about, let me know and I'll plan to come visit. Maybe I'll even give away a couple SANLink 10Gb Ethernet devices.

Steve Modica

Posted by: Steve Modica on Aug 11, 2014 at 9:09:39 am networking, storage

10Gb and SMB3 really rock

We got RSS working on our 10Gb cards a few days ago. This is a feature that splits data coming into the card across multiple queues. Then we can let different CPUs handle pulling in that data and passing it up the stack. We found what we figured we'd find: when we set up multiple streams, we see data in multiple queues, we see more CPUs involved in the work, and things go a lot faster.
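If you want to reproduce this kind of multi-stream test yourself, a simple way to generate parallel TCP streams is iperf3 (assuming it's installed on both machines; the hostname below is hypothetical):

iperf3 -s                                    # run on the server
iperf3 -c ten-gig-server.example.com -P 4    # run on the client: four parallel streams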

What surprised us was how great this made Samba. When we tested SMB3 with Yosemite, we were able to hit line rate (10Gb/sec) between two systems! This is due to SMB Multichannel. It's amazing. Soon, we should be able to extend this across adapters as well (we actually can today, just not to the same share). This will let us do the things FC and iSCSI do today, but with a NAS. We'll be able to stripe bandwidth.
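If you're curious whether your own Mac negotiated SMB3 on a mounted share, smbutil can tell you (the exact output fields vary by OS release):

smbutil statshares -a    # lists mounted SMB shares and the protocol version negotiated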

Steve

Posted by: Steve Modica on Jul 30, 2014 at 3:04:19 pm storage, networking

5 Things You Need to Know about Shared Storage

1. Shared storage is becoming the norm. It's not a "hack" anymore that's used to skirt licenses or the need for more disks. Vendors are beginning to embrace it more and more, and the storage software and protocols are adapting. There's never been a better time to implement a shared storage solution.

2. 10Gb is being adopted very quickly. Small Tree has 10Gb ports built into its TitaniumZ systems and vendors are releasing inexpensive 10GbaseT Thunderbolt PODS now. So it's time to get up to speed with 4K codecs and start using 10Gb Ethernet.

3. Don't skimp on storage space. The storage you use for everyday editing needs to be kept below 80% full to avoid fragmentation. Over-provision your editing space and plan on having some sort of archive space as well (a quick way to keep an eye on usage is sketched after this list). Small Tree has very inexpensive TitaniumZ archive options that let you store twice as much for half the price.

4. Small Tree's new TitaniumZ operating system (ZenOS 10) uses a balanced storage allocation strategy so your performance remains constant as the disk begins to fill up. So you get performance and efficiency across the entire array, which also helps to mitigate any fragmentation issues.

5. Shared NAS storage like Small Tree's is easy to set up and manage. You don't need metadata servers, licenses, or expensive Fibre Channel networks. You just rack it up, plug it in and go!
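As promised in item 3, here's a minimal sketch of a capacity check you could run against an editing volume (the mount point is hypothetical):

df -h /Volumes/Editing                                          # human-readable usage
used=$(df /Volumes/Editing | awk 'NR==2 {gsub("%","",$5); print $5}')
[ "$used" -gt 80 ] && echo "Over 80% full - time to archive"    # warn past the 80% mark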

Posted by: Steve Modica on Mar 26, 2014 at 7:13:21 am Comments (3) storage, networking

Thunderbolt Updates

We’ve been working pretty hard on Thunderbolt products over the last few weeks and I thought I’d write up some of the interesting things we’ve implemented.

I’m sure most of you are aware that Thunderbolt is an external, hotplug/unplug version of PCIE. Thunderbolt 1 provided a 4X PCIE bus along with an equivalent bus for graphics only. Thunderbolt 2 allows you to trunk those two busses for 8X PCIE performance.

PCIE Pause

This is a new feature of Thunderbolt designed to deal with the uncertainty of what a user may plug in.

Normally, when a system boots up, all of the PCIE cards are in place. The system sorts out address space for each card and each driver is then able to map its hardware and initialize everything.

In the Thunderbolt world, we can never be sure what’s going to be there. At any time, a user could plug in not just one device, but maybe five! They could all be sitting on the user’s desk, daisy-chained, simply waiting for a single cable to install.

When this happens, the operating system needs the capability to reassign some of the address space and lanes so other devices can initialize and begin working.

This is where PCIE Pause comes into play. PCIE Pause allows the system to put Thunderbolt devices into a pseudo sleep mode (no driver activity) while bus address space is reassigned. Then devices are re-awakened and can restart operations. What’s important to note is that the hardware is not reset. So barring the odd timing issue causing a dropped frame, a PCIE Pause shouldn’t even reset a network mount on a Small Tree device.

Wake On Lan

We’ve been working hard on a Wake On Lan feature. This allows us to wake a machine from a sleep state in order to continue offering a service (like File Sharing, Remote Login via SSH, or Screen Sharing). This may be important for customers wanting to use a Mac Pro as a server via Thunderbolt RAID and network devices.

The way it works is that you send a “magic” packet via a tool like “WakeonMac” from another system. This tells the port to power up the system far enough to start responding to services like AFP.

What’s interesting about the chip Small Tree uses (Intel x540) is that it requires power in order to watch for the “magic” wake up packet. Thunderbolt wants all power cut to the bus when the machine goes to sleep. So there’s a bit of a conflict here. Does a manufacturer violate the spec by continuing to power the device, or do they not support WOL?

Not supporting WOL is most definitely the case for the early Thunderbolt/PCIE card cage devices. They were all very careful to follow the Thunderbolt specification (required for certification and branding), and this leaves them missing the “powered while sleeping” capability.

Interested in learning more about how you could be using Thunderbolt? Contact me at
smodica@small-tree.com.

Posted by: Steve Modica on Mar 20, 2014 at 7:56:23 am networking, storage

NAB MATTERS MORE THAN EVER!!

1. The non-linear editing market (FCP, Avid etc) is changing rapidly. Avid was delisted, FCP supports NFS natively, Adobe is adding tons of new features (and a subscription model). More than ever, editors need to see what's out there and how people are using it.

2. Storage is changing rapidly. SSDs are becoming more and more common (and less and less pricy) and spinning disk vendors are consolidating.

3. Thunderbolt is here (and it appears that it's here to stay) and it offers new methods for connecting high bandwidth IO and video devices. Should you go big and buy a Mac Pro with 6 Tbolt ports? Or can you go small and buy an iMac with 2 Tbolt ports and just hot plug? Are the devices too loud to be in your edit suite? Now's the time to come and see.

4. There are many new cameras and codecs, and they each have different methods of getting media onto your systems. It's good to hear from each storage and/or camera vendor how that will work.

5. New technology announcements. With all these changes coming, vendors are constantly looking for new ways to make things better, faster and cheaper. Many of these revolutionary ideas are announced at NAB. I think it's helpful to be there and see “in person” the sort of reaction different products get.

6. Who's living and who's dying? Every vendor paints a happy face on their business and their products. It's always good to see that translated into booth traffic. It should be interesting to see which edit software vendors are getting visited this year.

Posted by: Steve Modica on Mar 18, 2014 at 11:03:40 am storage, networking

Testing with Adobe Anywhere

Small Tree has been working closely with Adobe to make sure our shared editing storage and networking products work reliably and smoothly with Adobe’s suite of content creation software.
Since NAB 2013, we’ve worked closely with Adobe to improve interoperability and performance, and test new features to give our customers a better experience.

Most recently, I had the chance to test out Adobe Anywhere in our shop in Minnesota.

Adobe Anywhere is designed to let users edit content that might be stored in a high bandwidth codec, over a much slower connection link. Imagine having HD or 4K footage back at the ranch, while you’re in the field accessing the media via your LTE phone and a VPN connection.

The way it works is that there’s an Adobe Anywhere server sitting on your network that you connect to with Adobe Premiere and this server compresses and shrinks the data “on the fly” so it can be fed to your machine much like a YouTube video. Except you are scrubbing, editing, cutting, dubbing and all of the other things you might need to do during an edit session.

This real-time compression/transcoding happens because the Adobe Anywhere system is taking advantage of the amazing power of GPUs. Except rather than displaying the video to a screen, the video is being pushed into a network stream that’s fed to your client.

I tested my system out with some Pro Res promotional videos we’ve used at trade shows in the past, and did my editing over Wi-Fi.

What I found was that the system worked very well. I could see that the Adobe Anywhere system was reading the video from Small Tree’s shared storage at full rate, then pushing it to my system at a greatly reduced rate. I had no trouble playing, editing and managing the video over my Wi-Fi connection (although Adobe recommends 1Gb Ethernet as the minimum connectivity for clients today).

This type of architecture is very new and there are caveats. For example, if you are very far from the server system or running over a very slow link (like a VPN connection), latency can make certain actions take a very long time (like loading an entire project, or using Adobe’s Titler app, which requires interactivity). Adobe cautions that latencies of 200 ms or more will lead to a very poor customer experience.
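A quick way to gauge whether your link is in that danger zone before committing to a remote workflow is an ordinary ping test against the Anywhere server (the hostname here is hypothetical); round trips consistently at or above 200 ms suggest the experience will be rough:

ping -c 10 anywhere-server.example.com    # check the avg round-trip time in the summary line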

Additionally, even though the feed to the clients is much lower bandwidth (to accommodate slower links), the original video data still needs to be read in real-time at full speed. So there are no shortcuts there. You still need high quality, low latency storage to allow people to edit video from it. You just have a new tool to push that data via real-time proxies over longer and slower links.

All in all, I found the technology to be very smooth and it worked well with Small Tree’s shared network storage. I’m excited to see the reach of Small Tree shared storage extended out to a much larger group of potential users.

For a demonstration of Adobe Anywhere over Small Tree shared storage, visit us at the
NAB Show in Las Vegas this April (Booth SL11105).

Posted by: Steve Modica on Mar 6, 2014 at 8:35:22 am storage, networking

Another Couple of Reasons to Love SSDs

One day, when we're sitting in our rocking chairs recounting our past IT glories ("Why, when I was a young man, computers had ‘wires’”), we'll invariably start talking about our storage war stories. There will be so many. We'll talk of frisbee tossing stuck disks or putting bad drives in the freezer. We'll recount how we saved a company’s entire financial history by recovering an alternate superblock or fixing a byte swapping error on a tape with the "dd" command. I'm sure our children will be transfixed.

No…no, they won't be transfixed, any more than we would be listening to someone telling us about how their grandpa's secret pot roast recipe starts with "Get a woodchuck...skin it." You simply have to be in an anthropological state of mind to listen to something like that. More likely, they walked into the room to ask you your wifi password (Of course, only us old folk will have wifi. Your kids are just visiting. At home they use something far more modern and futuristic. It'll probably be called iXifi or something).

Unfortunately for us, many of these war story issues remain serious problems today. Disks “do” get stuck and they “do” often get better and work for a while if you freeze them. It's a great way to get your data back when you've been a little lazy with backups.

Another problem is fragmentation. This is what I wanted to focus on today.

Disks today are still spinning platters with rings of "blocks" on them, where each block is typically 512 bytes. Ideally, as you write files to your disk, those bytes are written around the rings so you can read and write the blocks in sequence. The head doesn't have to move. Each new block spins underneath it.

Fragmentation occurs because we don't just leave files sitting on our disk forever. We delete them. We delete emails, log files, temp files, render files, and old projects we don't care about anymore. When we do this, those files leave "holes" in our filesystems. The OS wants to use these holes. (Indeed, SGI used to have a real-time filesystem that never left holes. All data was written at the end. I had to handle a few cases where people called asking why they never got their free space back when they deleted files. The answer was "we don't ever use old holes in the filesystem. That would slow us down!")

To use these holes, most operating systems use a "best fit" algorithm. They look at what you are trying to write, and try to find a hole where that write will fit. In this way, they can use old space. When you're writing something extremely large, the OS just sticks it into the free space at the end.

The problem occurs when you let things start to fill up. Now the OS can't always find a place to put your large writes. If it can't, it may have to break that large block of data into several smaller ones. A file that may have been written in one contiguous chunk may get broken into 11 or 12 pieces. This not only slows down your write performance, it will also slow down your reads when you go to read the file back.

To make matters worse, this file will remain fragmented even if you free more space up later. The OS does not go back and clean it up. So it's a good idea not to let your filesystems drop below 20% free space. If this happens and performance suffers, you're going to need to look into a defragmentation tool.

Soon, this issue won't matter to many of us. SSDs (Solid State Disks) fragment just like spinning disks, but it doesn't matter nearly as much. SSDs are more like Random Access Memory in that data blocks can be read in any order, equally fast. So even though your OS might have to issue a few more reads to pull in a file (and there will be a slight performance hit), it won't be nearly as bad as what a spinning disk would experience. Hence, we'll tell our fragmentation war stories one day and get blank looks from our grandkids (What do you mean "spinning disk?" The disk was “moving??”).

Personally, I long for the days when disk drives were so large, they would vibrate the floor. I liked discovering that the night time tape drive operator was getting hand lotion on the reel to reel tape heads when she put the next backup tape on for the overnight runs. It was like CSI. I'm going to miss those days. Soon, everything will be like an iPhone and we'll just throw it away, get a new one, and sync it with the cloud. Man that sucks.

Follow Steve Modica and Small Tree on Twitter @smalltreecomm. Have a question? Contact Small Tree at 1-866-782-4622.

Posted by: Steve Modica on Feb 25, 2014 at 12:08:49 pm storage, networking

What’s Your NLE of Choice?

Now that we’re several months removed from Apple’s introduction of Mavericks for OSX and we've all tested the waters a little, I wanted to talk about video editing software and how the various versions play with NAS storage like we use at Small Tree.

Avid has long since released Media Composer 7, and from what I've seen, their AMA support (support for non-Avid shared storage) continues to improve. There are certainly complaints about the performance not matching native MXF workflows, but now that they've added read/write support, it's clear they are moving in a more NAS-friendly direction. With some of the confusion going on in the edit system space, we're seeing more and more interest in MC 7.

Adobe has moved to their Creative Cloud model, and I've noticed that it's made it much easier to keep my system up to date. All of my test systems are either up to date or telling me they need an update, so I can be fairly certain I'm working with the latest release. That's really important when dealing with a product line as large and integrated as the Adobe suite. You certainly don't want to mix and match product revisions when trying to move data between After Effects and Premiere.

Another thing I've really grown to like about Adobe is their willingness to work with third party vendors (like Small Tree) to help correct problems that impact all of our customers. One great example is that Adobe worked around serious file size limitations present in Apple's QuickTime libraries. Basically, any time an application would attempt to generate a large QuickTime file (larger than 2GB), there was a chance the file would stop encoding at the 2GB mark. Adobe dived into the problem, understood it, and worked around it in their applications. This makes them one of the first to avoid this problem and certainly the most NAS friendly of all the video editing applications out there.

Lastly, I've seen some great things come out of FCP X in recent days. One workflow I'm very excited about involves using "Add SAN Location" (the built-in support for SAN volumes) and NFS (Network File System). It turns out, if you mount your storage as NFS and create "Final Cut Projects" and "Final Cut Events" folders within project directories inside that volume, FCP X will let you "add" them as SAN locations. This lets you use very inexpensive NAS storage in lieu of a much more expensive Fibre Channel solution. Shops that find FCP X fits their workflow will find that NFS NAS systems definitely fit their pocketbooks.
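To give a concrete (and hedged) picture of that workflow, here's roughly what the client-side setup looks like; the server name, export path and project name are made up, and your NAS vendor's instructions take precedence:

sudo mkdir -p /Volumes/Editing
sudo mount -t nfs -o resvport nas.example.com:/export/editing /Volumes/Editing   # resvport is commonly needed by the OS X NFS client
mkdir -p "/Volumes/Editing/MyProject/Final Cut Projects" "/Volumes/Editing/MyProject/Final Cut Events"
# FCP X's "Add SAN Location" can then be pointed at /Volumes/Editing/MyProject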

So as you move forward with your Mac platforms into Mavericks and beyond, consider taking a second look at your NLE (Non-Linear Editor) of choice. You may find that other workflow options are opening up.

Posted by: Steve Modica on Feb 2, 2014 at 7:20:35 am networking, storage

What you need to know about video editing storage in 2014

With the New Year festivities well behind us, today seems like as good a time as any to chat about where video editing storage is (or should be) headed in 2014.

First, I’m really excited about FCoE. FCoE is great technology. It's built into our (Small Tree) cards, so we get super fast offloads. It uses the Fibre Channel protocol, so it's compatible with legacy Fibre Channel. You can buy one set of switches and do everything: Fibre Channel, 10Gb and FCoE (and even iSCSI if you want).

Are there any issues to be concerned about with FCoE? One problem is that the switches are too darn expensive! I've been waiting for someone to release an inexpensive switch and it just hasn't happened. Without that, I'm afraid the protocol will take a long time to come to market.

Second, I'm quite sure SSDs are the way of the future. I'm also quite sure SSDs will be cheaper and easier to fabricate than complex spinning disks. So why aren’t SSDs ubiquitous yet? Where are the 2 and 4 TB SSD drives that fit a 3.5" form factor? Why aren't we rapidly replacing our spinning disks with SSDs as they fail?

Unfortunately, we're constrained by the number of factories that can crank out the NAND flash chips. Even worse, there are so many things that need them, including smartphones, desktop devices, SATA disks, SAS disks, PCIE disks. With all of these things clawing at the market for chips, it's no wonder they are a little hard to come by. I'm not sure things will settle down until things "settle down" (i.e., a certain form factor becomes dominant).

Looking back at 2013, there were several key improvements that will have a positive impact on shared storage in 2014. One is Thunderbolt. Small Tree spent a lot of time updating its drivers to match the new spec. Once this work was done, we had some wonderful new features. Our cards can now seamlessly hotplug and unplug from a system. So customers can walk in, plug in, connect up and go. Similarly, when it’s time to go home, they unplug, drop their laptop in their backpack, and go home. I think this opens the door to allowing a lot more 10Gb Ethernet use among laptop and iMac users.

Apple’s new SMB implementation in 2013 was also critical for improvements in video editing workflow. Apple’s moving away from AFP as their primary form of sharing storage between Macs, and the upshot for us has been a much better SMB experience for our customers. It’s faster and friendlier to heterogeneous environments. I look forward to seeing more customers moving to an open SMB environment from a more restrictive (and harder to performance tune) AFP environment.

So as your editing team seeks to simplify its workflow to maximize its productivity in 2014, keep these new or improved technological enhancements in mind. If you have any questions about your shared storage solution, don’t hesitate to contact me at smodica@small-tree.com.

Posted by: Steve Modica on Jan 17, 2014 at 10:05:58 am storage, networking

Small Tree is now 10 years old!

Back in 2003, on September 24th, I drove up to St Paul, Minnesota and filed paperwork to make Small Tree a Minnesota LLC.

When we started, there were 6 of us and I'm not sure we knew exactly what our plan was. I knew we wanted to write kernel drivers for "high end" networking stuff (like Intel's brand new 10Gb Ethernet cards). I didn't think much beyond that. I figured if we built it "they would come" (as the saying goes). I also needed a good excuse to buy one of the new G5 Power Macs (and make it tax deductible).

Since then, Small Tree has written drivers for 8 different Intel Ethernet chips, LACP drivers (for 10.3, back before Apple had their own), and iSCSI, AoE and FCoE drivers. We've also done a lot of work on storage integration and kernel performance improvements.

I'm very proud of all the hard work all of the people at Small Tree have put in and of all the great things we've accomplished. It's been quite a roller coaster ride over the years. Looking back, I'd absolutely do it again. We've got some great people and some great customers and I still love working here, even after 10 years :)

Posted by: Steve Modica on Sep 24, 2013 at 11:08:41 am networking, storage

That New Pope is Something huh?

Pope Francis is out there saying lots of interesting things. Like him or not, he’s getting a lot of attention. He’s certainly changing the tone of the Catholic Church.

That got me thinking. What’s changing the tone of computing these days? What needs to change, what’s in the middle of changing and what’s “gone forever?”

First, let’s talk about what needs to change.

The Application appliance model is here to stay on phones and tablets. Being an accomplished Android hack myself – I built some of the first Android kernels that could talk to military hardware – I understand the value of an open, flexible device. However, I don’t want to hand that to my dad. My dad wouldn’t get it. All of that value would be lost on him, and I’d be tasked with figuring out how to build him a one of a kind, customized Android phone that could email his pictures, sync with whatever Windows revision he’s running today, and let him connect to his Yahoo account. As simple as that sounds, I’d much rather get him an iPhone knowing the answer to any of these issues is a Google search away. His iPhone will perpetually be up to date and probably do what it’s supposed to without too much fuss.

So now we need this model for computing. As much as I want to handcraft my laptop so I can be sure I’m running the very latest OpenCL with a few cool kernel fixes I read about last week, I don’t really “need” to do that. That falls into the category of “hobbyist.” We don’t run out and tweak our cars so we can get to work faster. If we are tweaking our cars, it’s to scratch some itch that has nothing to do with getting to work.

The second thing that needs to change is that Microsoft needs to stop “Chasing the gauges.”

In the flight simulator world, “Chasing the gauges” is the act of looking at your gauges and trying to manipulate the plane to get your speed, altitude and angle where you want them. The problem is the gauges lag a little and so you invariably overcompensate. You tilt to the right until the horizon gauge shows level, but by then you are too far to the right, so you begin this back and forth oscillation and never quite settle down to level.

I’m no Windows fan. I’ve been doing Unix, Linux and Mac OS X for some 25 years. However, in all that time, I’ve had no choice but to map all my knowledge over to Windows. I had to learn how to setup networks, trace system calls, recover, reinstall, and capture crash dump information to solve problems. I had no choice. Windows is a fixture in the computing world and if you work with computers, you have to work with Windows.

What drives me crazy is that Microsoft keeps trying to “fix” all the things people complain about, while finding ways to make my background Windows knowledge useless. They change the control panel, remove the Start button, and allow vendors to replace well-known interfaces (like the control panel) with their own utilities. It’s as if Microsoft is reacting while still in the “backlash” cycle, so their changes simply start a new backlash and on we go, oscillating up and down, hoping to get our plane level.

I think Microsoft needs to embrace a mechanism for consolidating all of their system administration “stuff” (ala Apple’s Preference Panel) and settle on it quickly so users of alternate platforms can easily adapt their knowledge to manage Windows too.

What’s in the middle of changing?

I think the move to solid-state storage is a no-brainer. It may seem like the drives are outrageously expensive, but consider that SSDs are mostly “printed” at large factories. NAND flash chips are a very well understood technology and the ability to print them is going to grow quickly over time. After all, these guys are operating at capacity. There’s little risk in building a new plant when you can’t meet existing demand.

Spinning disks may be cheaper today (and have larger capacity), but they are delicate, heavy and require lots of precision machining to build. They have supply chains requiring aluminum and precision machined parts as well as electronics. As we have seen, floods and storms in Southeast Asia can disrupt their supply. There will soon come a crossover point where SSDs are inexpensive enough and large enough that there’s simply no reason to continue trying to use spinning disks.

Another technology that is quickly becoming required rather than optional is a good GPU (graphics processing unit). In the olden days, the only people that cared about their GPU were high-end video/CAD customers and gamers. They’d get all slobbery about “polygons per second.” GPUs were largely one-way affairs. You could write to them very quickly (for rapid on-screen updates), but reading data back off of them was slow and expensive.

Today, this is no longer the case. PCIE and new programming interfaces like CUDA and OpenCL allow us to interact with GPUs as if they were mini Cray supercomputers. We can feed them complex matrix operations (whether it’s weather data, game data or Bitcoin hashes) and they spit back answers lickety-split. They are much faster than general purpose CPUs for this kind of activity. Having a powerful GPU (or more than one) is rapidly becoming a requirement for any sort of powerful workstation.

What has changed?

The desktop computer appears to be dead. There are still computers that have a desktop form factor, but those computers are really workstations. They have a specific purpose and their users probably have a personal laptop they tote around with them when not using that workstation. For me, my work laptop and my personal laptop are largely the same thing. It’s just too inconvenient to try and manage two independent systems and keep things synched up.

Following this evolution to mobile computing, more and more of our stuff is online. Dropbox and Google Drive are seamless to use, but they aren’t quite large enough or fast enough to store all our data yet. Over time, though, our network speeds are going to go up and the size and performance of cloud storage is going to explode. Our data requirements will probably plateau. After all, once you can video your entire day in 4K along with all your vital signs, what more data can you generate?

There will come a time when it’s just cheaper and easier to store all our stuff online. Rather than deleting it one day because our disks got full, the stuff will magically migrate off to older, slower storage back at some data center, to be recalled whenever future generations want to have a look.

Posted by: Steve Modica on Aug 7, 2013 at 9:14:30 am networking, computing

SMPTE Australia 2013 visit

Visiting Australia is a once in a lifetime journey for many of us in North America. It's a 14 hour (very expensive) flight from LAX and because of the timezone shift, if you leave on Saturday, you arrive on Monday. You can forget Sunday ever existed. (Although on the way back, you arrive in LA before you even left Australia).

I had the great pleasure of visiting Sydney Australia twice this year. First, I was able to fly down and do some sales training with Adimex and Digistor in March, and then last week to help Adimex and Digistor at their big SMPTE trade show at the Sydney Convention Center. I spoke with customers, gave a presentation every day about realtime storage, and demoed the Small Tree TitaniumZ Graphical User Interface for interested customers.

First, I have to say that Adimex and Digistor are great companies with a great bunch of professionals. They know their products and treat their customers well. Second, I think the show was a wonderful opportunity – for me personally and professionally. I didn't realize that such a large show took place in Australia. We had customers from China, New Zealand, Japan, Singapore and even Vanuatu (not to mention from all over Australia). People traveled a very long way to visit this show.

Customers were very interested in Small Tree's TitaniumZ product. I think the two most important features people asked about were ZFS expansion, which allows you to add storage without rebuilding your RAIDs, and the new, integrated version of Mint from Flavoursys. Mint allows customers to do Avid project sharing and bin locking, so multiple Avid users can now seamlessly work together using Avid MC 6 or 7 on their Small Tree storage array. I definitely sensed a lot of interest in Avid now that MC 7 has been released and Adobe has announced their shift to the cloud subscription model.

When I wasn’t at the trade show booth answering questions, I had the opportunity to do a lot of running in Sydney. I made sure to see the Sydney Opera House as well as the Botanical Gardens. Even in winter, Sydney is one of the most beautiful cities you can visit anywhere in the world. If you get a chance to visit, make sure you take it!

If you’d like more information on what Small Tree was showing at the SMPTE conference – its TITANIUMZ line of products – visit us at www.Small-Tree.com.




Posted by: Steve Modica on Jul 29, 2013 at 12:43:19 pm networking, storage

Tips for Happy Shared Storage Workflows

1. When you have lots of media coming in from various cameras to your shared storage, make sure you are ingesting that media using appropriate software.

We have seen a few cases where people are dragging files in from the camera using the Finder, rather than the camera vendor's import software. When you do this, the media can sometimes have the "User Immutable" flag set. This flag prevents users from accidentally deleting files, even if they have appropriate permissions. You can see this flag via Right Click->Get Info. It's the "Locked" flag.

While this makes sense when the media is on the camera (where they expect you to do all deleting through the camera software interface), it does not make sense on your storage. However, the flag is persistent and will also be set on any copies you make, and any copies you make of those copies. It will also prevent the original files from being deleted when you "move" a large batch of material from one volume to another!

Obviously this will waste a lot of space and be very frustrating down the line when you have thousands of media files you can't delete. You'll also find that unsetting the Locked bit via Get Info is way too cumbersome for 10,000 files.

One simple answer is the command line. Apple has a command (as does FreeBSD) called "chflags". If you can handle using the "cd" command (Change Directory) to navigate to where all your Locked files are, you can run:

chflags -R nouchg *

This will iterate through all the directories and files (starting from whatever directory you're in) and clean off all the "Locked" bits.
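If you'd like to confirm the flag is actually set (or gone) on a particular file, a long listing can show the file flags; the clip name below is just an example. On OS X use a capital O, on FreeBSD a lowercase o:

ls -lO some_clip.mov    # "uchg" in the flags column means the file is still locked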

2. Edit project files from your local machine, rather than shared storage.

There are a number of reasons to do this, and as time goes on, I seem to find more.

First, it's just safer. Not all apps lock project files. So it's possible that if you have enough editors all sharing the same space and everyone is very busy and the environment is hectic, someone could come along and open a project you already have open. If they "save" their copy after you save yours, your changes will be lost. It would be no different if it was a Word Document or Excel Spreadsheet. When multiple people save the same file, the last guy to save wipes out the first guy. (This is not a problem for shared media like clips and audio since those files are not being written, just pulled into projects).

Second, apps like Avid and FCP 7 all have foibles with saving to remote storage. Avid doesn't like to save "project settings" over Samba or AFP (although NFS and "Dave" from Thursby work fine). FCP seems to mess up its extended attributes when it saves, leading to "unknown file" errors and other strange behavior. (When this happens, you can easily fix it. See Small Tree Knowledge Base solution here: http://www.small-tree.com/kb_results.asp?ID=43).

Lastly, you may have different versions of apps on different machines. I recently had a customer that was using FCP 7.0 and attempting to open files written by FCP 7.0.3. The older app was unhappy with the newer format files and it created some strange error messages. While this would have been a problem no matter how the files were accessed (locally or over the network), the network share made it more confusing since it was not clear that the files came from another system. Had the user received the projects on a stick or via email, the incompatibility would have been much more obvious from the start.

If you have any questions regarding shared storage and improving your workflow, do not hesitate to contact me at modica@small-tree.com.

Posted by: Steve Modica on Jun 24, 2013 at 12:45:15 pm storage, networking

It's the Little Things

As many of you know, we do a lot of military systems integration at Small Tree. When I'm not working on networking or storage performance, I get to play with little embedded things to make radio networks better/faster/smarter.

So this week I was very happy to have a large order come in that needed 20 new units shipped out to the Army.

I ordered all the parts and started assembling these delicate little routers. Then I discovered they didn't work. They "almost" worked. Data would sometimes flow in one direction, sometimes not at all. Sometimes the radios would be seen, sometimes not.

My error was in ordering the next "new" ARM CPU. It's less expensive and uses much less power (soldiers don't like to carry batteries). However, this new CPU obviously doesn't work as well as the older one.

I'm sure it's some whacko timing issue. I'll have to dig into it. But for now, I had to overnight the old CPUs so I could get this order out!

Posted by: Steve Modica on Jun 21, 2013 at 2:59:36 pm networking

It's been 40 years for Ethernet and we owe it a huge thanks

Excerpted from Bob Metcalfe's Reddit post today:

On May 22, 1973 with David R. Boggs, I (Bob Metcalfe) used my IBM Selectric with its Orator ball to type up a memo to my bosses at the Xerox Palo Alto Research Center (PARC), outlining our idea for this little invention called “Ethernet”, which we later patented.
(end of excerpt)

I've made my living via Ethernet since the mid 80s when I crawled around my office installing coaxial cabling and Ethernet terminators for my Novell Netware Network. (NE1000 cards anyone?)

Things have improved a lot since then. We all owe Bob and Dave a huge debt.

Steve

Posted by: Steve Modica on May 21, 2013 at 2:52:53 pm networking

NAB Sneak Preview from Small Tree

I’m getting ready to head out to Las Vegas this weekend to get the Small Tree booth (SL6005) all set up and I’m really excited.

First off, we have a brand new version of our Titanium platform coming out called “Titanium Z”. The Z platform is AWESOME and the folks here at Small Tree (including The Duffy) are very excited to start telling people about it.

First of all, in keeping with our history of bringing really high-tech functionality (like real time video editing) down into the commodity price space, we are now bringing down Storage Virtualization.

To offer Virtualization, we had to migrate Titanium to a new OS based on FreeBSD. In doing this, we were able to pull in ZFS technology. This gives us the ability to stripe RAID sets together, migrate data around, and add new RAID sets to existing volumes without rebuilding.
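To make the "add new RAID sets without rebuilding" point concrete, here's a bare-bones ZFS sketch using made-up pool and disk names (the TitaniumZ interface does this for you behind the scenes):

zpool create tank raidz2 da0 da1 da2 da3 da4 da5     # build the initial RAID set
zpool add tank raidz2 da6 da7 da8 da9 da10 da11      # grow the pool with a second set, no rebuild
zpool status tank                                    # new writes now stripe across both sets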

We’ve also updated all the hardware, increased performance 25% and kept our same great low price model. You get more for your money.

The Titanium 4 has also been extensively improved based on customer feedback. ZFS performance is so good, we ditched the need for a RAID controller in the new T5. At the same time, we added a 5th drive (more storage, more performance) and allowed for the addition of a dual port 10GbaseT card. So now, not only is the device mobile, fast and inexpensive, it also supports direct attaching with 10Gb Ethernet! You can bring along one of our ThunderNET boxes on your shoot and have your laptop editing over 10Gb Ethernet right out in the field.

Lastly, I’ve had tons of people bugging me about SSDs and 10Gb. I demoed a super fast box at the Atlanta Cutters called “Titanium Extreme” and we showed off real time video playing to my laptop (over Dual 10Gb ports) going 1.2GBytes/sec. (not a benchmark. Real video). We’ll have this guy along as well.

So if you want to stop by and visit us and see all this cool stuff, swing down the South Lower (6005). You can’t miss us. We will have a giant round screen hanging above us with all sorts of amazing stuff flying by put together by Walter Biscardi. We’d love to see you.

Posted by: Steve Modica on Apr 5, 2013 at 10:33:47 am Comments (4) networking, storage

Gigabytes per second or Giga-buts per second?

Every year as NAB approaches, the marketing once again begins. Oh the marketing....

As NAB approaches, I'd like to take a moment to remind people in the market for storage that Gigabytes/second is not what makes video play smoothly.
Vendors with no Computer Engineers on staff will pull together monstrous conglomerations of SSDs and RAID cards, run a few benchmarks (probably four or five different ones until they find one they like) and then claim they've hit some huge number of Gigabytes per second.

Small Tree has been supporting server-based video editing longer than anyone in the market. We were supporting Avid when they used SGIs 10 years ago (and they were SGI's largest customer). We know how things work. We helped develop them.

Playing video requires a RAID configuration that can handle multiple, clocked streams. Benchmarks on the other hand, tend to use a single stream, reading sequentially as fast as they can.

What's the difference you ask? Well, in the sequential case, the RAID controller gets to use lots of tricks to avoid the hard work of seeking around disks and reordering commands. The next block to be read is probably the next block, so things like "read ahead" work wonderfully. Don't just read the next 128k, read the next 1MB! It'll all be read next anyhow. It makes it very easy for sequential benchmarks to look good. In the Supercomputing world, meaningless TeraFLOP marketing numbers were referred to as "MachoFLOPS". We knew they meant nothing when vendors could spin assembly instructions in a tight loop and claim 1.5PetaFLOPS.

Small Tree's testing and development involves looking carefully at how the Video Editing Programs themselves read so we can carefully mimic that traffic during testing. This lets us be sure our equipment doesn't rely on sequential tricks to deliver real, multi-stream performance.

So when you walk up to a vendor at NAB and they start telling you about their MachoGigabytes per second, make sure you ask them about their sustained latency numbers. Small Tree knows all about latency and we back it up, every day with our products.

Posted by: Steve Modica on Mar 24, 2013 at 1:57:50 pm networking, storage

Biscardi Creative Upgrade!

Very recently, Small Tree had the opportunity to go down to Atlanta and visit Walter Biscardi and upgrade his data center and edit suites. In conjunction with this trip, we also did a presentation on the upgrade for the Atlanta Cutters and showed off a new SSD based Titanium shared storage system we put together. This new Titanium SSD was able to move 1.2GB/sec of *realtime* video to Adobe Premiere with no dropped frames. This is faster than you can go with 8Gb Fibre Channel and the fastest realtime video I've ever seen displayed live without a net!

The upgrade involved pulling out Walter's existing SFP+ 10Gb switch, which had a mix of Gigabit SFP modules for his suites and 10Gb SFP+ modules for his server, and replacing it with a 10GbaseT switch from Small Tree that had 4 SFP+ ports (for the server) and 24 10GbaseT ports for the new Titanium and some of his edit suites.

Before we dived right into putting in the new switch and adding the Titanium 8, we spent a lot of time talking about power. Walter didn't want to spend $1000 for an expensive UPS, but he wanted a good UPS that could handle the new load and not break the bank. We settled on an Ultra Xfinity that offered 1200W of load. This allowed for plenty of overhead for the 660W titanium and kept the loading on the UPS to well under the recommended 80%.

After installing the new switch, we moved all the cables over. One of the wonderful aspects of 10GbaseT is that we didn't have to do anything special when replacing ports that used to be Gigabit. 10GbaseT clocks down to Gigabit and even 100Mbit. So there was no trouble with legacy equipment or special adapters.

Once the switch was in, we turned to the Titanium 8. We installed it and plugged it into its new UPS and cabled it into the switch. We bonded the two 10GbaseT ports coming from the Titanium so it would load balance all the incoming clients.
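For the curious, bonding two ports on a FreeBSD-based box like the Titanium boils down to something like the following (interface names and address are hypothetical; the Titanium's GUI handles this for you, and the switch ports must be configured for LACP as well):

ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1 192.168.1.10/24 up   # LACP across both 10GbaseT ports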

Once that was done, it was time to upgrade some of the more important edit suites to 10GbaseT. What good is having all that 10Gb goodness in the lab when you can't feel the power all the way to the desktop? We upgraded both of Walter's iMac systems to 10Gb (via ThunderNET boxes) and added another 10Gb card to his fastest Mac Pro in Suite 1.

The result was a cool 300MB/sec writing from his iMac and 600MB/sec reading using the AJA System Test. As I tell people, this isn't the best way to measure NAS bandwidth because applications like Final Cut and the Adobe apps use different APIs to read their media files.

With the NAB Show approaching, I hope many of you that are planning to attend will be able to swing by Small Tree’s booth (SL6005) to learn more about this recent install directly from Walter, as he’ll be on-hand. While you’re there, feel free to ask about the SSD based Titanium shared storage solution we’re “going plaid” with.

If you’d rather not wait until NAB to learn more, contact me at modica at small-tree.com

Posted by: Steve Modica on Mar 4, 2013 at 4:19:18 pm networking, storage

The Power of Ethernet

Storage is a tough market and customers are always willing to pay a little less to get a little less. My takeaway is this: in the war between Ethernet and EtherNOT based storage, such as Fibre Channel, the one that delivers the best value for the lowest price is going to win. As Warren Buffett likes to say, "In the short term, the market is a popularity contest. In the long term, it's a weighing machine." People need to buy based on value over time.

Fibre Channel has been hamstrung for a long time by its need for custom ASICs (chips used to implement the protocol in hardware). Fibre Channel wanted to overcome all of the limitations of Ethernet, so they invented a protocol that did just that. The problem, of course, is that those custom ASICs are not on motherboards. You don't get FC chips built into your Dell server (unless you order a special card or riser). You don't see Apple putting FC chips on Mac Pros (even though they sold Xsan and XRAID for so long).

What's the result? Expensive chips. It's expensive to fab them and expensive to fix them. FC gear is expensive. Vendors may find ways to lower the entry point, but somewhere or other, whether via support, licensing or upgrades, the cost will show up.

Ethernet certainly has ASICs as well. There are network processors, MAC (media access control) chips and PHY chips (the chips that implement the physical layer). They can be incredibly expensive. The first 10Gb cards Small Tree sold were $4770 list price! But here's the thing: a 10Gb card today is $1000 or less. The chips are everywhere and they are rapidly going onto motherboards. Ethernet is truly ubiquitous and will continue to be for server and storage technologies.

If you'd like to discuss or debate Ethernet vs EtherNOT, send me an email at info@small-tree.com or hit me up on Twitter @svmodica.

Posted by: Steve Modica on Jan 29, 2013 at 12:18:35 pm Comments (4) storage, networking

Another step in the Commodity Hardware Revolution

Not too long ago, I was asked to write up my predictions on storage and networking technology for the coming year. One of those predictions was the rise of new, combined file system/logical volume managers like ZFS and BtrFS.

These file systems don’t rely on RAID cards to handle things like parity calculations. They also don’t “hide” the underlying drives from the operating system. The entire IO subsystem - drives and controllers - is available to the operating system and data is laid out across the devices as necessary for best performance.

As we’ve begun experimenting ourselves with these technologies, we’ve seen a lot of very promising results.

First and foremost, I think it’s important to note that Small Tree engineers mostly came from SGI and Cray. While working there, most of our time in support was spent “tuning.” People wouldn’t buy SGIs or Crays simply to run a file server. Invariably, they were doing something new and different like simulating a jet fighter or rendering huge 3D databases to a screen in real-time. There would always be some little tweak required to the OS to make it all work smoothly. Maybe they didn’t have enough disk buffers or disk buffer headers. Maybe they couldn’t create enough shared memory segments.

Small Tree (www.small-tree.com) has always brought this same skill set down to commodity hardware like SATA drives and RAID controllers, Ethernet networks and Intel CPUS. These days, all of this stuff has the capability to handle shared video editing, but quite often the systems aren’t tuned to support it.

I think ZFS is the next big step in moving very high-end distributed storage down into the commodity space.

Consider this: A typical RAID card is really an ASIC (Application Specific Integrated Circuit). Essentially, some really smart engineering guys write hardware code (Verilog, VHDL) and create a chip that someone “prints” for them. SGI had to do this with their special IO chips and HUB chips to build huge computers like the Columbia system. Doing this is incredibly expensive and risky. If the chip doesn’t work right in its first run, you have to respin and spend millions to do it again. It takes months.

A software based file system can be modified on the fly to quickly fix problems. It can evolve over time and integrate new OS features immediately, with little change to the underlying technology.

What excites me most about ZFS is we can now consider the idea of trading a very fast - and expensive - hardware ASIC for a distributed file system that uses more CPU cores, more PCIE lanes and more system memory to achieve similar results. To date, with only very basic tuning and system configuration changes, we’ve been able to achieve Titanium level performance using very similar hardware, but no RAID controller.

So does this mean we’re ready to roll out tomorrow without a RAID controller?

No. There’s still a lot of work to do. How does it handle fragmentation? How does it handle mixed loads (read and write)? How does it handle different codecs that might require hundreds of streams (like H.264) or huge codecs that require very fast streams (like 4K uncompressed)? We still have a lot of work to do to make sure ZFS is production ready, but our current experience is exciting and bodes well for the technology.

If you’d like to chat further about combined file system/logical volume managers, other storage/networking trends, or have questions regarding your workflow, contact info@small-tree.com.

Posted by: Steve Modica on Jan 21, 2013 at 12:48:34 pm Comments (6) storage, networking

America WAS online, until about 30 seconds ago…

Back when I was at SGI slaying dragons I had the good fortune to visit America Online. At the time, AOL was moving about 20% of the USA’s email traffic through SGI Challenge XL servers.

This was around the time they crossed 10 million (with an M) users. That was a lot back then – there were t-shirts printed. Facebook is approaching 1 billion (with a B) users today.

As you can imagine, AOL introduced some serious issues of scale to our products. We’d never really had anyone use a Challenge XL server to handle 100,000 mail users (much less five gymnasiums full of Challenge XL servers to handle 10 million). Having so many systems together created some interesting challenges.

First off, when a customer has two systems and you have a bug that occurs once a year, the two-system customer may never see it. If they do see it, they might chalk it up to a power glitch. Your engineers may never get enough reports to fix the problem since it’s simply not reproducible in a way you can recreate.

Not so with 200 in a room. You might see that once a year glitch every day. That’s a very different prospect. Manufacturing will never see that in quality control running systems one at a time. It can only be seen with hundreds of systems together.

In AOL’s case, we had the “Twilight Hang” (and no, there were no vampires that sparkled). A machine would simply “stop.” There was no core dump. It could not be forced down and there would be no error messages. The machine was simply frozen in a twilight state. This is the worst possible situation because engineers and support personnel cannot gather data or evidence to fix the problem. There’s no way to get a fingerprint to link the problem to other known issues.

SGI mustered a very strong team of people (including me) to go onsite with a special bus analyzer to watch one of the machines that seemed to hit the problem more than the others. I was there for three weeks. In fact, my fiancée actually flew out during the last of the three weeks because it was her birthday and I was not scheduled to be gone that long.

One highlight from this trip was sitting in a room with some of the onsite SGI and AOL people on a conference call with SGI engineering and SGI field office people. During this call, the engineering manager was explaining the theory that the /dev/poll device might be getting "stuck" because of a bug with the poll-lock. Evidently, the poll-lock might get locked and never "unlocked," which would cause the machine to hang. I had to ask, "Carl, how many poll locks does it take to hang a system?" There was dead silence. I came to find out that the other SGI field people on the phone had hit mute and were rolling on the floor laughing. (Thanks, guys.) The corporate SGI people were not amused.

Anyhow, the ultimate cause of the problem was secondary cache corruption. Irix 5.3 was not detecting cache errors correctly, and when it did it would corrupt the result every other time. Ultimately, they completely disabled secondary cache correction and to this day, you Irix users will notice a tuning variable called “r4k_corruption.” You have to turn that on to allow the machine to attempt to correct those errors (even at the risk of corrupting memory). The ultimate solution for AOL was to upgrade to R10k processors that “correctly” corrected secondary cache errors every time.

Posted by: Steve Modica on Jan 8, 2013 at 11:09:21 am Comments (1) servers, networking

Testing Strawberry by Flavoursys

I did some testing with Strawberry from Flavoursys last week. Flavoursys provides software for project sharing and project management for video post-production. I’m happy to report that this is one of the most exciting new products I’ve had a chance to work with in a long time.

What is it?

Strawberry gives you the ability to share Avid projects and media from shared storage without the usual indexing issues or potential corruption problems. It provides for user and group access, metadata based search and safe read only access to projects and media that others are working on.

How does it do it?

It’s pretty ingenious. Strawberry is a client system that sits on your network. You permanently mount your shared storage to the Strawberry system so it has full administrative access. It will work all its magic on your server via this mount.

It will create project and media sub directories for your workstations. In my very simple setup, they were edit_1 for media and edit_1p for projects. There were also directories for edit_2, edit_3 and so on.

Users will access the Strawberry server via a web browser or a built-in app. They log in with a name the administrator provides and are then able to create and open projects. When a project is created and opened, the appropriate project files and resources are created on the "main" storage, and links are then generated in the user's workstation project and media directories. Those links remain as long as the project remains open in Strawberry. When the project is closed, those links are removed.

[Screenshot: the blank Strawberry startup window.]

[Screenshot: creating a new project; Strawberry lets you enter a rudimentary amount of metadata that it uses for project naming.]

What’s great about this setup is that this same user can open other users’ projects and “add” them to his own (read only) so that he can share timelines and media that might be important. All of this is managed via these dynamic links in the edit_1p and edit_1 subdirectories.
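Just to illustrate the idea (this is a toy sketch of the linking concept, not Flavoursys code, and the paths are made up), here's roughly what "open a project, get a link; close it, the link goes away" looks like:

# Toy illustration of dynamic project links: opening a project drops a link
# into the workstation's project directory; closing it removes the link again.
import os

SHARED = "/mnt/shared"   # hypothetical mount point for the main storage

def open_project(workstation, project):
    src = os.path.join(SHARED, "projects", project)
    dst = os.path.join(SHARED, workstation + "p", project)   # e.g. edit_1p/MyDoc
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    if not os.path.islink(dst):
        os.symlink(src, dst)

def close_project(workstation, project):
    dst = os.path.join(SHARED, workstation + "p", project)
    if os.path.islink(dst):
        os.remove(dst)

open_project("edit_1", "MyDocumentary")
close_project("edit_1", "MyDocumentary")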

Is it simple to use?

Absolutely. As a user, I would simply login, click “create,” fill in some basic information - like my name - and select “open.”

Once this was done, I simply opened Avid Media Composer on my client and selected the External Project type and pointed to the edit_1p directory. I could see my new (empty) project and begin to work. My media would be stored in my edit_1 media directory, safely sequestered from other users so there would be no re-indexing issues. I was able to create a number of projects and open them all read-only as add-ons to my new project.

Is it easy to setup?

This was probably the most difficult part of using Strawberry. There were several setup issues, but I know Flavoursys is currently addressing those and will have them fixed shortly. I think the product is very useful and worth the money now. However, I'll caution potential buyers that they may need to bring someone in to help with configuration early on.

To use Strawberry, you have to install a 1U Supermicro chassis onto your network. It boots into Red Hat, but there’s really no other instruction. I had to email Flavoursys to figure out what to do next.

The documentation is very new and pretty raw. So even when following the instructions step by step, I hit a number of puzzle points where I was unsure what to do next. This didn’t stop me from getting it set up, but it could be frustrating for editors who are expecting a plug-and-play solution.

While the system is very elegant in how it works, they brought it to market quickly by using some Windows compatibility tricks. This means there are extra layers that could potentially confuse and complicate the setup.

For example, the actual software runs in a VirtualBox virtual machine under Red Hat Linux. So the server is really a Windows 2008 server running under Linux. Your users won't be connecting to one of the Linux addresses you set up when you connected the machine; they will be connecting to the Windows virtual machine's address.

Additionally, the client software runs under Microsoft's Silverlight, so all of your clients will need the Silverlight plugin to get to the Strawberry user interface. I didn't find this to be a problem - I already had it installed on my laptop - but I had some difficulty getting Firefox to see the plugin on my test iMac (Safari worked fine).

As I understand it, Flavoursys is working on a native version of all of this to improve the performance and simplify the product.

My recommendation:

This is a great and innovative product. It allows existing Avid stations to work in broader, heterogeneous environments and significantly reduces the amount of money small shops have to pay for Avid-compatible shared storage. Small Tree is now offering Strawberry with its products for this very reason.

Strawberry also offers a great upgrade path for those that are not using Avid today, but believe they will need to in the future. These shops can go out and purchase NAS based storage now, knowing that down the road they can add Avid sharing capabilities without doing a forklift upgrade.

About the only caveat I would put on a Strawberry purchase is that you'll need someone with strong sysadmin skills for the setup. There are a number of technical steps and concepts (like virtual machines and NFS vs. Samba mounts) that you'll want configured by a pro. Once that work is completed, it should be smooth sailing.

For more information on workflow solutions, visit www.small-tree.com or contact info@small-tree.com.


Posted by: Steve Modica on Jan 3, 2013 at 2:38:12 pm Comments (4) Networking, Avid

Your dishwasher is broken…

I started in the computer industry around 1988 as a computer engineering co-op student with a small company called Herstal Automation. Herstal built memory boards for very old HP 1000 A600 and A900 computers. These computers were some of the very first real-time computers ever made. Auto companies and the medical industry used these to monitor real-time processes like engine performance or patient vital signs.

To give you some idea how old these were, the machine had toggle switches on the front so you could hand enter machine code instructions one at a time. This was a good way to enter tiny little test programs or force the machine to boot when you were stuck.

Typically, I would build up memory boards and boot the machine to test. Sometimes, a board would fail. When that happened, I would use the front panel toggle switches to put in a small assembly instruction program that would write all 1’s into memory. Then I would dump the memory and see if there were any single zeros (bad data line) or entire words that were 0 (bad address line). I could almost zero in on the exact pin or chip that was failing and replace it. I could unsolder a bad chip and replace it so cleanly that you couldn’t tell it wasn’t machine soldered.
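For the curious, here's a rough Python sketch of that diagnostic logic (hypothetical, and nothing like the original front-panel assembly): write all 1s, read the words back, and classify what's missing.

# Hypothetical sketch of the board-test logic: after writing all 1s to memory,
# a word that comes back with a single 0 bit suggests a bad data line, while a
# word that comes back entirely 0 suggests a bad address line.
WORD_BITS = 16  # the HP 1000 A-series were 16-bit machines

def classify_faults(dump):
    """dump maps address -> word read back after writing all 1s."""
    faults = []
    for addr, word in dump.items():
        if word == (1 << WORD_BITS) - 1:
            continue                      # word read back clean
        zero_bits = [bit for bit in range(WORD_BITS) if not word & (1 << bit)]
        if len(zero_bits) == WORD_BITS:
            faults.append((addr, "all bits 0 -- suspect address line"))
        elif len(zero_bits) == 1:
            faults.append((addr, f"bit {zero_bits[0]} stuck at 0 -- suspect data line"))
        else:
            faults.append((addr, f"bits {zero_bits} bad -- suspect chip"))
    return faults

# Example: address 0x0102 lost data line 3; address 0x0200 lost its address line.
print(classify_faults({0x0100: 0xFFFF, 0x0102: 0xFFF7, 0x0200: 0x0000}))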

One day, I recall a board that was behaving very strangely. I couldn’t seem to get it to power on. There simply were no lights.

I pulled the board out and did a visual inspection. This is tricky because if you look at a pattern (like pins on a board) the eye will see a clean pattern. It’s very easy to miss a bent pin. Our brains fool us into “seeing” the pin even when it’s not there. One has to intentionally look at each and every pin.

Even with all that inspection, I could not see a problem. Finally, I pulled out my voltmeter and started looking for a bad trace. Maybe one of the caps or resistors on the board was simply bad and was not passing current through.

I touched the power pin and the power lines on the downstream chips and could not read a connection. Hmmm… I started working backwards, closer to the pin, but still no connection. Eventually, I had both leads on the pin. Still no connection! The gold pin on the edge of the board was not conducting electricity.

I have to admit, I was not a good physics student. I hated physics. However, I did very well in electronics, and I know that gold conducts electricity. In fact, gold is great at conducting electricity. So I did the scratch test. I took a tiny little screwdriver and gently slid it against the pin (scratching gold pins on these boards was a definite no-no). A thin film of plastic bubbled up.

Then I knew what was wrong.

In those days, electronic boards were assembled by hand by installing parts and bending the pins down to hold them in place. The pins were then nipped off and the boards were sent to a “wave solder” facility. These places would pass the board over a flowing “wave” of molten solder and all of the pins and pads would be soldered at once in a very uniform and reliable manner.

There was one problem with wave soldering: if you had gold connector pins on your card (like a PCI card), the solder would stick to the gold and ruin it.

To avoid this, wave solder companies would use a special water-soluble tape on these edge connectors. After the wave solder was complete, the boards were put into an industrial dishwasher and the tape would dissolve. (This would also clean the boards and make them look nice and new).

So the reason I had this layer of plastic was that our wave solder company's dishwasher was broken. It was not heating the water enough to completely dissolve the protective tape layer. I confirmed this with a quick phone call and used a toothbrush and some hot water to fix the rest of the batch.

Obviously, in this day and age, this is a pretty rare problem. Boards you buy for your edit stations are built in large quantities and Quality Control Tested on pin grids and test rigs to quickly rule out any obvious problems.

That being said, there are things you need to consider when dealing with large PCIE boards that you might have to plug into an older machine:

1. Make sure you aren’t carrying around any excess static that might zap the board. If the vendor provides one, put on an appropriate grounding strap when installing a board.

2. Be very careful when installing not to knock loose any surface mount components. In the old days, things were soldered right through the board, but today, they are only surface soldered. If you bump one of those little chips too hard, you will knock it off. Depending on what it is, your board may not work at all, or will be intermittent.

3. Watch carefully for interference. Often, large graphics boards have huge heat sinks or cases and fans on them. Make sure none of these is touching a neighboring board. This could lead to electrical shorts or overheating.

4. Make sure all wires and cables are strapped or tied down! If you leave unused connectors hanging, one day they are going to end up hitting a fan or a hot component and melting.

5. The 5-volt and 12-volt supply lines inside most computers aren't going to kill you; however, imagine what might happen if you shorted a ring or watch against a 12V line. It would get extremely hot very quickly and might even melt (while touching your skin!), so don't take these relatively low-voltage supplies lightly.

When working with complex electronic assemblies, following these steps can save you a lot of time and frustration down the road.


Posted by: Steve Modica on Dec 26, 2012 at 2:16:10 pm networking, PCIE

Hurry up and wait

Maybe you’ve heard that expression before. “Hurry up and wait.” Military guys love to quote that. It’s a reference to the military giving soldiers very important things to do, but then having them sit around idly because the people they are supposed to be doing them with aren’t ready.

Small Tree’s been a Prime Military Contractor for about six years now, and as the lead investigator on most of our projects, I’m no stranger to this. I’ve waited for hours at bases (and at the Pentagon) for someone to come escort me to wherever it was I was supposed to be. You just aren’t allowed to walk into these places.

In one particular instance, I can remember the sheer terror of being on the other side of this equation. Imagine what it’s like to be the person that all the soldiers are waiting for.

In our case, Jeff Perrault and I were at a huge military base helping to test some of our new routers.

The project was simple for us. We built a little router that could connect to a couple of radios. We enabled some basic forwarding and routing, got all the security and IP addresses setup, and poof, there’s data routing between radio networks. Our device was called “Chloe” and the original is sitting on my desk right now showing her battle scars.

The previous week of testing had gone wonderfully. Everything worked and it worked all day. We were thinking about going home. Today’s plan was to line up 200 people, put them in vehicles and run the full test. They were using the same radios, the same routers and the same vehicles, just adding more people to the network. What could go wrong?

Cut to an earlier meeting. In this meeting, we were told of a previous day’s “Vehicle Summit” meeting where it was decided to rewire the vehicles. They no longer wanted to use USB. USB was unreliable. They wanted to use Ethernet. The solution? Put Ethernet dongles on our router and rewire the trucks.

My cell phone rang and it was one of the guys in the lead truck. He was sitting at the end of the road. He was at the front of a column of 200 people. “The router,” he told me, “is not routing.” Jeff and I ran out of the building with our stuff and moved as quickly as we could to the front of the column. We had laptops and wires hanging out as we typed and looked carefully to figure out what happened. The column was sitting in the sun waiting….and watching us.

In this case, changing to Ethernet dongles messed up our configuration script, which really wanted all the ports to come up the same way each time. We changed the MAC addresses so they matched up and everything started working. People in the column started seeing the little dots show up on their displays.

We learned something that day. If we were going to create a router for the military that soldiers would deploy, in vehicles that could be rewired overnight, with no one around who knew how to use a laptop and serial port, it had to be zero configuration. Needless to say, our LEXII router that came out the following year did not require any user input. If you connect it, we’re going to route it whether you like it or not!

Posted by: Steve Modica on Dec 18, 2012 at 11:43:31 am networking

19 Hours of Panic

For you old-timers, you may remember the story of the day the USA's largest Internet service provider went down for 19 hours. For you younger folks, you can read about it here:

http://articles.philly.com/1996-08-08/business/25644646_1_outage-america-on...

I would hardly know about this story myself except that I received a panicked phone call from the SGI office that served AOL that same day. In that phone call I was asked, “Could installing an SGI server on AOL’s network bring down the entire network?”

Hmmmm… normally, I would have said “no.” Networks should be resilient. People make mistakes on networks all the time. Sometimes they put systems on that have the same IP address. Sometimes they set their subnet or broadcast addresses incorrectly. These simple errors don’t take out buildings.

Even the most egregious error I could think of - somehow looping or routing a switch back to itself - shouldn’t take out the entire network. It might hang a “dumb” switch, but AOL used expensive switches with Spanning Tree Protocol that would prevent such loops. So even if the onsite people had made the very improbable mistake of making the SGI a router and somehow sticking two of its ports onto the same switch, I could not see AOL - as in, the entire company - going offline.

I got off the phone and something started to nag at me. I remembered a case with Chrysler the year before where they had deployed some SGI workstations on their CAD network. When they turned the SGI systems on, the IBM systems would drop off the network. The upshot was that SGI systems were a lot more aggressive when sending packets and we could easily keep the IBM systems from “getting a word in edgewise.”

Could this be it? Did AOL set up a system somewhere that was handling all their DNS or something and we were forcing it off the network?

This is where politics comes in. If we shut off the SGI system and the network “magically” came back, then what? At best, AOL would have been extremely leery about letting SGI add any more servers. At worst, the headline the following day would have read, “SGI takes out entire AOL network!” Dumb luck and coincidence might have put SGI on the front page in a very unfavorable light.

Ultimately, before we could gather any traces, AOL figured it out. The complete solution is explained here: http://news.cnet.com/AOL-mystery-explained/2100-1023_3-220635.html.

As it was explained to me, a redundant router was put in place alongside the existing router to handle AOL's network traffic. That "new" router had an empty routing table, and it pushed that empty table down to all the sub-routers on AOL's network, essentially erasing their entire distributed routing table. As I recall, admins had to log into routers and manually enter routes so that different floors of the building could reattach and reach other routers to fix them, until they finally had enough connectivity to get back to the main routing tables, recover everything and push it all back down.

That was a very bad day for those guys, but it was no picnic for me either. ☺

Posted by: Steve Modica on Dec 12, 2012 at 3:05:03 pm networking

Busy busy busy

When I used to work at SGI, I would often wonder what "C" level officers did. I once got to ask Ed McCracken what he spent most of his time doing day-to-day. At the time, he was CEO of SGI.

His answer was that he was currently spending a lot of time talking to Congressmen trying to convince them to stop propping up Cray as a national asset. In hindsight, perhaps buying Cray was not the best idea.

As the Chief Technical Officer of Small Tree, which is a much smaller company, I have to wear a lot more hats. I thought I might include a list of the things I've been up to over the last month.

Deer hunting (actually, just watching this year)
Grocery shopping
Barbecuing
Evaluating Titanium follow on chassis designs
Helping select next generation Software Defined Radio development platforms for the Army
Working on Adobe performance issues
Evaluating a new Avid sharing product (that works great!) called Strawberry
Evaluating a new Digital Asset Manager (that also works great) called Axle
Discussing our new high performance iSCSI products with partners
Fixing the phone system
Testing Thursby's Dave software with Avid
Helping customers with Small Tree products
Running barefoot (I run barefoot and in Vibrams.... a lot)
Working on a new voice router design for the US Rangers
Helping my kids with math homework
Processing firewood for the winter
Breaking up the recyclable cardboard boxes
Writing up an NAB presentation proposal
Prepping for a visit from the Soldier Warrior team of the US Army
Small Tree Board of Directors meeting
Christmas shopping

There’s never a dull moment.

Posted by: Steve Modica on Dec 3, 2012 at 9:48:38 am Comments (1) storage, networking

Clone Detectors!

Not the Star Wars kind tho...

Back when cell phones were new, a number of vendors had "clone" problems. People were cloning phone serial numbers so they could get free cell service.

To combat this problem, the cellular companies built up "Clone Detector" systems. These were massive database servers that had to be extremely fast. They would monitor all in process calls looking for two that had the same serial number. If they found a match, that phone was cloned and both were taken out of service.

SGI's systems were uniquely qualified to handle this work. The company had some stellar Oracle and Sybase numbers and offered these vendors a 10X speed up in clone detection.

The phone call came in from Florida during the test phase of the new system. The sysadmin called me up and told me that when she dumped a 25% load on the system, it slowed down very quickly. If she put a full load on the system, it stopped.

This was puzzling. I'm not a database expert, so I spent time looking at the normal performance metrics. How busy are the CPUs? Not very. How busy are the (massive) RAID arrays? Not very. How much memory is in use? Not much. Nothing was adding up.

I started watching the machine’s disk activity during the 25% load. I noticed one disk was very busy, but it was not the RAID and shouldn't have been slowing the machine down. I asked the sysadmin about it. She said it was the disk with her home directory on it and it shouldn't be interfering with the machine’s database performance. That answer nagged at me, but she was right. If the database wasn't touching the disk, why should it matter? But how come it was so busy? There was a queue of pending IOs for Pete's sake! Was she downloading files or something?

I asked her if I could take a look at the index files. Index files are used by a database to keep track of where stuff is. Imagine a large address book. I wanted to see if the index files were corrupted or "strange" in any way. I thought maybe I could audit accesses to the index files and spot some rogue process or a corrupt file.

What I found were soft links instead of "real" files. For you Windows people, they are like Short Cuts. On a Mac you might call them Aliases. She had the index files "elsewhere" and had these aliases in place to point to them. She told me "Yeah. I do this on purpose so I can keep a close eye on the index files. I keep them.... in... my... home... oh!"

So her ginormous SGI system with hundreds of CPUs and monstrous RAIDs was twiddling its thumbs waiting for her poor home directory disk to respond to the millions and millions of index lookups it could generate a second. It fought heroically, but alas, could not keep up.

Some quick copies to put the index files where they belonged and we had one smoking clone detector system.
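If you ever want to catch this kind of misplaced-file problem yourself, a quick check (a hypothetical sketch, not anything I ran on that SGI) is to walk the index directory and flag entries that are really links resolving onto a different device:

# Hypothetical sketch: flag index files that are really symlinks pointing at a
# different disk than the directory they appear to live in.
import os

def find_misplaced_files(index_dir):
    home_dev = os.stat(index_dir).st_dev          # device the directory lives on
    for name in os.listdir(index_dir):
        path = os.path.join(index_dir, name)
        if os.path.islink(path):
            target = os.path.realpath(path)
            if os.path.exists(target) and os.stat(target).st_dev != home_dev:
                print(f"{name} -> {target} (on a different device!)")

find_misplaced_files("/database/indexes")   # hypothetical path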

Posted by: Steve Modica on Nov 26, 2012 at 9:52:33 am storage, networking

Bugs....

Many years ago when I was a "smoke jumper" support guy for SGI, I got to see some of the strangest problems on the planet. Mind you, these were not "normal" problems that you and I might have at home. These were systems that were already bleeding edge and being pushed to the max doing odd things in odd places. Further, before I ever saw the problem, lots of guys had already had a shot. So reinstalling, rebooting, looking at the logs, etc., had all been tried.

One of my favorite cases was a large Challenge XL at a printing plant. It was a large fileserver and was used for storing tons and tons of print files. These files were printed out, boxed and shipped out on their raised dock.

Each night, the machine would panic. The panics would happen in the evening. The machine was not heavily loaded, but the second shift was getting pissed. They were losing work and losing time. The panics were all over the place - Memory, CPU, IO boards. By this time, SGI had replaced everything but the backplane and nothing had even touched the problem. The panics continued.

Finally, in desperation, we sent a guy onsite. He would sit there with the machine until the witching hour to see what was going on. Maybe a floor cleaner was hitting the machine or there were brown outs going on. We felt that if we had eyes and ears nearby, it would become obvious.

Around 8pm that night, after the first shift was gone and things were quiet, our SSE got tired of sitting in the computer room and walked over to the dock. He was a smoker and he wanted to get one more in before the long night ahead. The sun was going down, making for a nice sunset as he stood out there under the glow of the bug zapper (this happened in the south where the bugs can be nasty).

As he watched, a fairly large moth came flitting along and orbited the bug zapper a few times before *BZZZZZZT* he ceased to exist in a dazzling light display. It was at that moment when the Sys Admin (who was keeping an eye on the machine during our SSE's smoke break) yelled over to him "HEY! The machine just went down again".

Yes folks, the bug zapper was sharing a circuit with the SGI machine. One large insect was enough to sag the circuit long enough to take the machine right down. Go figure.

Posted by: Steve Modica on Nov 19, 2012 at 11:10:31 am Comments (3) storage, networking

Avid shared projects

Ever since Apple blew up the happy world of FCP 7, we've been running into more and more people moving to Adobe and Avid.

Adobe's been pretty good. I like them a lot and their support guys (I'm talking to you Bruce) have been awesome.

Avid, on the other hand, is tough. Shared spaces cause reindexing, and external projects won't save natively to shared spaces. Our customers have mostly worked around this by storing projects locally, using multiple external volumes for media, or using AMA volumes.

I finally had a chance to explore this External project save issue in great detail today.

It turns out there's nothing specific Avid's doing that would prevent them from saving a project externally. They stat the file a few times and give up. It appears as if they are simply not allowing saves to shared protocol volumes (samba and afp).

My solution was simple: I created a sparse disk image on the shared storage, then mounted it locally.

This worked great. I could point my external project to it and it would save correctly. I could link in my AMA files and use them, and I could trust that OS X wasn't going to let anyone else mount that volume while I was using it! When I'm done, I exit and unmount the volume. Now anyone else on the network can mount it and use my project (you can even have multiple users by doing read-only mounts with hdiutil).

Further, this method can be adapted to just about any granularity you'd like.
For example, if your users hate the idea of creating disk images for every project, just create one very large (up to 2TB) disk image for their entire project library. You can automount that and just let them use it all the time. You could also have per project, per user or per customer images as well.
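If you'd rather script the housekeeping, here's a rough Python wrapper around the hdiutil steps involved (the image path, size and volume name are placeholders; double-check the flags against man hdiutil on your OS X version):

# Rough sketch of the sparse-image trick, driven from Python via subprocess.
# Paths, sizes and volume names are placeholders, not a recommendation.
import subprocess

IMAGE = "/Volumes/SharedStorage/Projects/MyProject.sparseimage"  # lives on the server
VOLNAME = "MyProject"

def create_image():
    # One-time: make a sparse image on the shared storage.
    subprocess.run(["hdiutil", "create", "-type", "SPARSE", "-fs", "HFS+",
                    "-size", "200g", "-volname", VOLNAME, IMAGE], check=True)

def mount(read_only=False):
    # Mount it locally; Avid then sees an ordinary local volume.
    cmd = ["hdiutil", "attach", IMAGE]
    if read_only:
        cmd.insert(2, "-readonly")   # multiple editors can attach read-only
    subprocess.run(cmd, check=True)

def unmount():
    # Detach by mount point (you can also pass the /dev/diskN device).
    subprocess.run(["hdiutil", "detach", f"/Volumes/{VOLNAME}"], check=True)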

Let's face it. Storage is expensive and we all know what disks and motherboards cost. No one wants to pay three or four times what this stuff costs for a vendor specific feature. Hopefully, this trick makes it easier for you to integrate Avid workflows into your shop if the need arises.

Steve

Posted by: Steve Modica on Oct 30, 2012 at 12:30:01 pm storage, networking

Scaling IT

Just about everyone can have a free web page. You get them free when you open cloud accounts or purchase internet service. This has led to a proliferation of cat pictures on the Internet.

Back in the 90s, when it cost a little more to get on the Internet, the idea of personal web pages was just beginning. One very large ISP (Internet Service Provider) that used SGI systems wanted to sell personal websites. They felt SGI's Challenge S system was the perfect solution. They would line up hundreds of these systems, and each system could handle several sites. SGI did indeed set several access records handling the website for "Showgirls," which, as you can imagine, was racy.

Fast forward a few months and there are 200 systems lined up in racks handling personal web pages. Then I start getting phone calls.

"Hey Steve. These guys are filing cases about two or three times a week to get memory replaced. We're getting parity errors that cause panics about two or three times a week."

I fly out and start looking carefully at the machines. The customer had decided to purchase third-party memory (to save money) so they could max out the memory in each system. Each machine had 256MB of RAM, which was a lot at the time. This was parity memory, which means each 8 bits of data carries a ninth parity bit used as a cheap "double check" that the stored value is correct. The parity bit is set to 1 or 0 so that the nine bits together always hold an even number of 1s. If the system sees an odd number of 1s, it knows there's a memory error.
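As a quick illustration (my sketch, not the actual memory controller logic), even parity for a byte works like this:

# Even parity: the parity bit is chosen so the 8 data bits plus the parity bit
# contain an even number of 1s. On read, an odd total means something flipped.
def parity_bit(byte):
    return bin(byte).count("1") % 2        # 1 if the data has an odd number of 1s

def check(byte, stored_parity):
    return (bin(byte).count("1") + stored_parity) % 2 == 0

b = 0b10110010                   # four 1s, so the parity bit is 0
p = parity_bit(b)                # 0
assert check(b, p)               # reads back clean
assert not check(b ^ 0b1000, p)  # a single flipped bit is detected (not corrected)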

I looked at each slot. I looked at ambient temperature. I made sure the machines were ventilated properly (including making the customer cover all the floppy disk holes since they did not have floppies installed, but had neglected to install the dummy bezel). No change. Parity errors continued and clearly there was an issue.

Going back to the memory vendor and the specs on the chips, we started doing the math.
The vendor claimed that due to environmental issues (space radiation, etc.), one should expect a single-bit parity error about once every 2,000 hours of uptime for each 32MB of memory. Half of these errors should be "recoverable" (i.e., the data is being read and can simply be read again), but the other half will lead to a panic. They do not mean the memory is broken, but the errors should be rare.

So let's do the math: 256MB per machine (that's 8 x 32MB).
Hours of uptime? (These machines are always up): 8,760 hours a year.
How many total parity errors? About 35 per system, per year, with half of them being "fatal." So that's roughly 17 panics per system per year. They had 200 systems. That's around 3,400 panics a year in that group of systems, or nearly 10 every day?!
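Here's the same arithmetic as a little script you can sanity-check (the 2,000-hours-per-32MB figure is the vendor's; everything else just follows from it):

# Back-of-the-envelope panic rate for the web farm, using the vendor's figure
# of one single-bit parity error per 2,000 hours of uptime per 32MB of RAM.
hours_per_error_per_32mb = 2000
mem_per_machine_mb = 256
machines = 200
hours_per_year = 24 * 365                       # always up, about 8,760 hours

errors_per_machine = (mem_per_machine_mb / 32) * hours_per_year / hours_per_error_per_32mb
panics_per_machine = errors_per_machine / 2     # half the errors are fatal
total_panics = panics_per_machine * machines

print(errors_per_machine)      # ~35 parity errors per system, per year
print(panics_per_machine)      # ~17-18 panics per system, per year
print(total_panics)            # ~3,400-3,500 panics a year across the room
print(total_panics / 365)      # ...which is nearly 10 every day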

Consider this when you start to scale up your IT systems. How many machines do you have to put in a room together before "once a year" activity becomes "once a day?”

Posted by: Steve Modica on Oct 10, 2012 at 3:56:04 am Comments (4) storage, networking

10Gb is Coming

You've all read that 10GBase-T is on the way. It's true. Very soon, you will be able to plug standard RJ45 connectors (just like on your MacBook Pro) into your 10Gb Ethernet cards and switches. You'll be able to run CAT6A cable 100m (assuming clean runs) and have tons and tons of bandwidth between servers and clients. Who needs Fibre Channel anymore?!

But with the widespread migration to 10Gb, you may have a plumbing problem my friend.

Many years ago, I had the privilege of supporting three of the large animation studios in LA that were trying to use their new RAID5 arrays and run OC-3 and OC-12 right to their desktops. These two ATM standards topped out at 155Mbit/sec and 622Mbit/sec, respectively (this was before the days of Gigabit Ethernet). Everyone expected nirvana.

They didn't get nirvana. In fact, they found out right away that three clients ingesting media could very quickly "hang" their server. Within about 30 minutes it would slow to a crawl and sit there. They could not shut it down. Shutdown would hang. What was really happening? The machine had used all of its RAM collecting data and was unable to flush it quickly enough to their RAID. The machine was out of IO buffers and almost completely out of kernel memory. The "hang" was simply the machine doing everything it could to finish flushing all this unwritten data. We had to wait (and wait and wait).

Further, we discovered that with only three clients we could quickly start generating dropped packets. ATM had no flow control and so too many packets at once would result in dropped packets. Since the clients were very fast relative to the server, it didn't take more than a few to overwhelm it.

Similarly, as we all start to salivate over 10Gb to our MacBooks, iMacs and refrigerators, we should consider how we're going to deal with this massive plumbing problem.

First, you *will* need some form of back pressure. The server must be able to pause clients (and vice versa) or these new 300MB/sec flows are going to overwhelm all sorts of resources on the destination system.

Second, just because the network got faster doesn't mean the disks did. In fact, now your users will have ample opportunity to do simple things like drag-and-drop copies that use up a great deal of the resources on the server. A simple file copy over 10Gb at 300MB/sec bidirectional could overwhelm the real-time capabilities of a normal RAID. The solution lies in faster RAIDs, SSDs and perhaps even 40Gb FCoE RAIDs for the servers. (That's right, 40Gb FCoE RAIDs.)
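To see how fast a copy eats a RAID's headroom, here's a toy budget (the numbers are illustrative, not benchmarks of any particular array):

# Toy budget: a spinning-disk RAID that can sustain ~800 MB/sec total, serving
# editing streams while someone drags a big folder across a 10Gb link.
raid_throughput = 800          # MB/sec the array can sustain (illustrative)
stream_rate = 30               # MB/sec per stream (roughly ProRes HQ 1080)
copy_rate = 300 * 2            # MB/sec: a 10Gb drag-and-drop copy reads AND writes

streams_before_copy = raid_throughput // stream_rate
streams_during_copy = max(0, raid_throughput - copy_rate) // stream_rate
print(streams_before_copy)     # ~26 streams with the array to itself
print(streams_during_copy)     # ~6 streams once the copy starts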

So as you consider your 10Gb infrastructure upgrades, make sure you're working with an experienced vendor that knows about the pitfalls of "plumbing problems" and gets you setup with something that will work reliably and efficiently.

Posted by: Steve Modica on Oct 4, 2012 at 11:04:51 am Comments (6) networking, storage


