Hi, I’m George Crump, lead analyst at Storage Switzerland. As we talk to end users, they’re looking at cloud backup, especially as we move up out of the consumer use case and start thinking about how to use this in the data center, and the data centers that are considering it are getting bigger and bigger. But one of the things we almost always assume is that you have to have an appliance, a hybrid cloud backup. Well, that might not always be the case. Joining me in this conversation is Chris Schin, VP of Products at Zetta. Chris, I know you guys have a different approach to this. Why don’t you talk us through it?

Sure. One of the core product principles of the Zetta DataProtect solution is that we set out to make sure that none of the solutions we put in place require an on-site appliance at the customer premises. No physical appliance, no virtual appliance. We think it’s a bit of an awkward or clumsy way to do backup, to require that in every case. First, it obviously introduces an additional cost factor: you have to install them all, manage them all, run them all. Second, it increases the complexity of the deployment. Picture one of our MSP customers who has about 140 end-customers, each of which has one server. The last thing you want to do is double the server footprint in 140 different offices just to do backup. You’ve got complexity, cost, and maintenance to worry about, all kinds of issues. Next, upgrades: if you reach the capacity of your appliance, you either need to get a second appliance and tie them together somehow, or upgrade the appliance. And then there’s restore rigidity. What do I mean by that? It’s one thing to move the data from primary storage to the appliance and from the appliance to the cloud on backup, but you have to go back through the appliance on restore as well.
If the appliance is what went out, or if the whole office went out, you have to wait for a new appliance to be installed before you can do the restore. So the whole no-appliance concept is great, but you can only do it if you have invested the time and energy to WAN-optimize your solution.

So WAN optimization, I think, would be a real key component. You’re right, because that’s what the appliance guys will say: hey, that’s why we did it, so we can optimize the WAN. Right?

Exactly. Exactly. WAN optimization is core to what we do here at Zetta, and there’s a reason for that. When I first entered the cloud backup for enterprises market, the online backup for enterprises market, back in the 2004-2006 time frame, the dirty little secret was that all the existing enterprise online backup players were really LAN-based products that had the Internet bolted on. Couple that with the fact that the typical customer they were selling to had a bonded T1 line at best, and it just wasn’t working, right? When I came to Zetta, we took a different approach. We spent maybe the first 18 months of our development cycle WAN-optimizing our solutions. So before we even got around to backing up structured data sets and things like that, we had put a ton of intellectual property and patents into making a solution that really turns the Internet into a data conduit that can protect large datasets. If you think about server images, for example, they get large enough, multiple terabytes, that without WAN optimization you really can’t back them up unless you have an appliance.

So you need this, to have this.

Right.

Well, without giving away the secret sauce, give me some high-level things you did to optimize that connection.

Sure. There are tons of familiar concepts that we deployed in a unique new way.

OK.
They go to things like massive amounts of parallelism over the Internet; localized, aggressive change detection so that we minimize re-sync; dynamic TCP optimizations, with TCP/IP window-size tuning depending on the type of dataset we’re seeing; and so on and so forth. And then, for every solution we’ve built on top of that, we’ve had to be innovative about the way we approach that particular problem. Whether it’s a SQL database or, in this case, in our most recent release, Windows Server image backup, where you might have a two-terabyte drive of which only 50 gigs is used. You don’t want to send 2 terabytes over the Internet, so you have to be clever however you’re attacking the problem.

OK, that makes sense. So let’s talk a little bit about the server image capability and what that brings.

Sure. Well, it’s important to start by noting that we did everything using standard Windows technology. The server image backup that we’re releasing right now is a Windows-centric solution. It’s not tied to VMware, not tied to Hyper-V, not tied to Xen or a physical box. It’s a Windows solution: regardless of the platform you’re deploying on, we use Windows technologies to snapshot the server, we use Windows technologies to back up the server, and then, once it’s stored with us, we store it in standard Windows VHD and VHDX formats. The reason for that is it makes the restore very simple.

Right, I’m sure.

With the VHD you can mount it and read it, you can boot it into Hyper-V, you can convert it to VMDK and put it in a VMware farm, or you can use standard recovery technologies and burn it back onto a physical box and be off and running. This is sort of the key to the solution that makes everything work.

Well, and that kind of increases… you were talking about rigidity before; this is obviously very flexible.

Yes, very flexible to restore.
You can go from virtual back to virtual, from virtual to physical, from physical to virtual; all these scenarios can be accommodated by using the standard technologies.

And my sense would be that you would also be more immune to, you know, incremental upgrades that Microsoft might put out to Windows.

Right. Exactly.

Because you’re kind of following the rules.

We are following the rules, right. This is very different from the old-school BMR approach, which was proprietary and process-intensive, and yet it never worked, because something was wrong with the hardware, etc. If you just stay within the world of Windows and follow their path, the world gets a lot simpler.

And so with this capability, walk me through how that process works if I’m a guy who wants to do Windows Server image backups. How do I make that happen?

Let me take a step back and describe the full capabilities of what we’ve got today. Our primary footprint on the customer side is called ZettaMirror. It works on both physical and virtual servers. It works on Linux, Mac, and Windows. Now, I’ve talked a lot about Windows today because it turns out that about ninety percent of our customer systems are Windows. We started out backing up files, then we went to structured datasets like databases and system state files, then we did NAS boxes on the network, and now we’re doing full server images. The other thing to remember is that it backs up to the cloud, but you can also store backups locally any place you have excess disk.

So, to clarify, because that’s important: not necessarily an appliance, just extra disk that you have.

Yeah, sure. Some people just plug a USB drive in, or they have a centralized NAS share that has extra space. Whatever they want to use, we’ll put the backup there and they can restore it from there.

Okay.

Even a tape, if you want to put it on a tape.

Okay.
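As a concrete illustration of the change-detection idea Chris describes, sending only blocks that have changed since the last backup and skipping unallocated space (the two-terabyte drive with only 50 gigs used), here is a minimal sketch. The 4 MiB block size and SHA-256 hashing are illustrative assumptions, not Zetta’s actual implementation.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks; an illustrative choice, not Zetta's

def block_digests(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Hash each fixed-size block of an image so change detection stays local."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(data: bytes, previous: list,
                   block_size: int = BLOCK_SIZE) -> list:
    """Return indices of blocks that must cross the WAN: blocks whose hash
    differs from the previous backup's manifest, skipping all-zero blocks."""
    num_blocks = (len(data) + block_size - 1) // block_size
    to_send = []
    for i in range(num_blocks):
        block = data[i * block_size:(i + 1) * block_size]
        if not any(block):
            continue  # unallocated (all-zero) space: never sent over the wire
        digest = hashlib.sha256(block).hexdigest()
        if i >= len(previous) or digest != previous[i]:
            to_send.append(i)  # new or modified block
    return to_send
```

On a first backup every non-empty block counts as changed; on later runs only modified blocks are sent, which is what makes multi-terabyte images practical over ordinary Internet links. Parallel transfer of the selected blocks could then be layered on top.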
So it’s really kind of a full solution for a mid-market customer who has these various technologies deployed, regardless of what vertical they happen to be in. What are some other structured databases you protect?

The standard ones you’d think of: we do SQL databases, we do Exchange databases, and obviously things like system state files on Windows machines. We’re also starting to get asked for other things, like MySQL, that we’re working into today’s solution as well.

So, as I recall, you guys move pretty quickly through getting those supported.

Yeah. Again, for all of these, if you just use what’s already built into the Windows operating system, you can back it up and restore pretty quickly. The secret sauce for Zetta is really in the WAN optimization: taking these technologies, detecting what’s changed, and sending those changes off to the cloud as efficiently as possible, so that we always have the most recent state of your data as well as all the versions.

And so then, finally, just wrap me up, because I’m really interested in the server image thing. Once I have that there, if I’m a guy using your whole set and I have a server down, what does that look like for me?

Sure. Well, first of all, people that have been using us (we’ve had this on the market since about March in beta form) are making decisions about how to best back up their data so that it’s most easily restored. If you’re backing up a file share, you probably wouldn’t take a full server image. You could extract the file out of the server image, but you’d probably just point to the file. Similarly, if you’ve got a shared database server, etc. But there are certain servers where it makes sense to get a consistent state of the whole box, like a domain controller, for example.
What they would do is either restore it back into their environment, again using our WAN-optimized technologies, and spin it back up, or use any of a number of public clouds: port it over there and spin it up as a virtual image on a rented box somewhere. Because it’s a Windows machine, it doesn’t care what it’s running on.

Right. It’s pretty easy to deploy in a bunch of different scenarios.

Yes. From a flexibility standpoint, again, if you have a disaster, being able to put it out on somebody else’s public cloud and bring it up is a huge capability.

Exactly. Chris, anything else?

The only other thing I’d like to mention is outside the scope of the image stuff or anything else I’ve talked about. It has to do with another layer of security that we’ve added in 4.5, and that is two-factor security. Our data center infrastructure has always been very highly locked down: multiple tests, all kinds of audits, everything you’d expect. But our web portal, where customers access their data, their backups, their monitoring and management information, all that stuff, was always a username-and-password website. Now we’ve added another layer of authentication using Google Authenticator. You put in your username and password, you’re prompted for a six-digit code, you go to your iPhone or Android phone, click on the authenticator app (every 60 seconds there’s a new code), type it in, and then you get access to the environment. So with all we’ve been reading about this rash of compromised credentials, we decided to add two-factor with 4.5.

That’s a good idea, because a big issue that gets people worried about the cloud is ‘what does my security look like?’ Hey Chris, thanks for joining us.

Thank you, George.

I’m George Crump, lead analyst, Storage Switzerland. Thank you for joining us.
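For reference, the rotating six-digit code Chris describes is the standard TOTP scheme (RFC 6238), which the Google Authenticator app implements. A minimal sketch of how such a code is derived follows; the 30-second time step is the RFC default (not necessarily the interval Zetta configured), and the secret shown in the usage note is the RFC test key, not a real credential.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = timestamp // step                    # which time window we are in
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, this yields the published test-vector code "287082". The server verifies the submitted code against the same shared secret, so a stolen password alone is no longer enough to reach the portal.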