Jul 31 2013
 

For those who may have missed it, VMware snuck out a point release of Mirage labeled 4.2.2, which has replaced the 4.2.0 binaries (even though many of the labels on the download site still state 4.2.0). Over the last month, I have had a few customers ask how to do this upgrade. If you take a look at the Administrator’s Guide or the release notes for 4.2.2, you will find information on installing Mirage on a net-new system, but there are no steps describing how to upgrade an existing installation to the latest release. Until now. This little nugget of information can be found in the recently released Mirage Reviewer’s Guide as well as a new KB article.

The procedure looks like this (copied from the Reviewer’s Guide as of the time of this writing, Page 29):

The upgrade procedure to Horizon Mirage 4.0 involves uninstalling the prior Horizon Mirage components and then reinstalling the 4.0 versions.

Uninstall the Horizon Mirage datacenter components in the following order:

  • All Mirage Servers
  • Mirage Management Console*
  • Mirage Management Server

* To uninstall in Windows 7, use the Windows Control Panel > Programs and Features (Add/Remove Programs for Windows XP)

Note: Uninstalling the Mirage Servers does not remove any data from the storage volumes that were connected to the Horizon Mirage System.

Install the Horizon Mirage components with the new MSIs in the following order:

  • Mirage Management Server
  • Mirage Management Console
  • Mirage Servers

The SSL and port configurations are not preserved; you need to reconfigure these after you install the new versions of the Horizon Mirage components.

After the upgrade of the datacenter components is complete, when a Mirage-enabled endpoint connects to the network, Horizon Mirage automatically upgrades the Mirage Client and prompts for a reboot.
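For those who would rather script the datacenter-side swap than click through the installers, here is a minimal sketch of the ordering. The product codes and MSI file names are placeholders (the real values come from your Mirage 4.2.2 download), and it assumes the Mirage MSIs support silent installs via msiexec.

```python
# Minimal sketch of the uninstall-then-reinstall ordering described above.
# The product codes and MSI file names are placeholders, not real Mirage
# identifiers; substitute the values from your own 4.2.2 download media.
import subprocess

# Uninstall order: all Mirage Servers, then the Management Console,
# then the Management Server (placeholder product codes).
UNINSTALL_ORDER = [
    "{MIRAGE-SERVER-PRODUCT-CODE}",
    "{MIRAGE-MGMT-CONSOLE-PRODUCT-CODE}",
    "{MIRAGE-MGMT-SERVER-PRODUCT-CODE}",
]

# Reinstall order is the reverse: Management Server, Management Console,
# then the Mirage Servers (placeholder MSI paths).
INSTALL_ORDER = [
    r"C:\installers\mirage.management.server.x64.msi",
    r"C:\installers\mirage.management.console.x64.msi",
    r"C:\installers\mirage.server.x64.msi",
]

def run_msiexec(args):
    """Run msiexec silently and stop on the first failure."""
    subprocess.run(["msiexec", *args, "/qn", "/norestart"], check=True)

for product_code in UNINSTALL_ORDER:
    run_msiexec(["/x", product_code])

for msi_path in INSTALL_ORDER:
    run_msiexec(["/i", msi_path])

# SSL and port settings are not preserved, so reapply them on the
# Mirage Servers after the reinstall completes.
```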

Not too bad of an upgrade, but one to be aware of nonetheless since the previous versions need to be uninstalled. Make sure to reference the KB article, which has more detail and caveats to avoid.

 

Jul 22 2013
 

Horizon Mirage is a part of the Horizon Suite from VMware and it is generating a lot of buzz. I’m not going to go into the benefits here; you can read the link I’ve provided for that. However, one of the most amazing things about Mirage is that it uses a technology called source-based deduplication to back up all of the desktop endpoints. Let’s talk about that technology, how it works, and when it works best.

Source-based deduplication works by having a server in the datacenter with a lot of capacity attached to it. We’ll refer to this server as the “repository.” Now for the endpoints (which, in the case of Mirage, are Windows-based desktops and laptops). The client begins by taking backups of the endpoints (Mirage calls them snapshots) and copying them to the repository. It’s this process and how it works that is so amazing. You would immediately think that when I take a backup of an endpoint that is 10GB on disk, the system will send 10GB over the network. For the FIRST machine that you back up, it typically does; it sends practically the whole image of the endpoint to the server. It’s when you go to back up the second endpoint that the magic starts to happen. Once the first endpoint has been “ingested,” the backups of any additional endpoints are composed from data the repository has already seen. I know this can be somewhat confusing; you can look at this article for some comparisons of different deduplication technologies. For our example, let’s go a little deeper into exactly what happens during this process.
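To make that concrete before we go deeper, here is a rough sketch of the file-level comparison. This is an illustration of the concept only, not Mirage’s actual code, protocol, or hash algorithm (SHA-1 and the helper names are my own stand-ins): the client hashes its local files and checks them against the hash table the repository already holds, so only never-before-seen files need further processing.

```python
# Illustrative sketch of the file-level comparison (not Mirage's actual
# code or protocol): hash every local file and split the list into files
# the repository has already seen and files it has not.
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Hash a file's contents so identical files produce identical keys."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def split_known_and_unique(endpoint_root: Path, repo_hash_table: set):
    """Return (known, unique): files the repository already has vs. files it lacks."""
    known, unique = [], []
    for path in endpoint_root.rglob("*"):
        if path.is_file():
            bucket = known if file_hash(path) in repo_hash_table else unique
            bucket.append(path)
    return known, unique
```

Files in the “known” list cost almost nothing to protect; the repository simply records that they belong to this endpoint’s backup.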

We will begin with the first Windows desktop, which is 10GB on disk total, and back it up. The repository “ingests” the files from the endpoint. When it does this, it runs a hashing algorithm against each file to give it a hash code. Once that is done for every file, the client also breaks each file into “blocks” or “chunks” and runs a hashing algorithm against those chunks. After all this, it stores the backup down on disk in the repository.

Now, for the next (and every subsequent) client we want to back up or capture: the client asks the server for its hash table of files. This is a small amount of data sent from the server to the client, because the hash table is a list of all of the hash codes for all of the files in the repository, not the actual data in the files. The client takes this data and analyzes each file on the second endpoint’s file system. It develops a list of files the repository has never seen before (and tells the repository which files on this endpoint the repository has seen before). Typically we see about 90-95% common files between images. This is where it starts to get even more crazy efficient.

So the client has figured out which files the server already has in the repository and has told the server which of those files are on Endpoint #2. Now the client looks at the files the server has not seen before. Let’s suppose there are 100 files on that list. The client separates those files into blocks at the client (this is why it’s called source-based: the majority of the processing and checking for deduplicated data happens at the endpoint, not the server). The client runs the same hashing algorithm on those blocks, compares them to the blocks the server has in the repository, and develops a list of blocks the server has not seen before. Let’s say the client finds 10 blocks the server has never seen. It tells the server to mark down all of the blocks already in the repository that are on this endpoint as being part of this endpoint’s backup. Note: to this point in the process, the client has not sent any of the actual backup data to the server yet. The last step is to take the blocks that are unique to this endpoint, compress them, and send them to the server for storage, thus completing the backup: inventorying all of the common data and sending only the unique data.
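The block-level half of the story looks roughly like this, again as an illustrative sketch rather than Mirage’s real implementation (the 64KB chunk size, SHA-1 hashing, and zlib compression are all assumptions of mine): the files the repository has never seen are chunked, the chunks are hashed, and only chunks whose hashes are new get compressed for transfer.

```python
# Illustrative block-level dedupe for the files the repository has never
# seen (the 64KB chunk size, SHA-1 hashing, and zlib compression are
# stand-ins, not Mirage's documented internals).
import hashlib
import zlib

CHUNK_SIZE = 64 * 1024  # assumed chunk size for this sketch

def unique_compressed_blocks(new_files, repo_block_hashes):
    """Chunk each new file, keep only blocks the repository lacks, compress them."""
    payload = []
    for path in new_files:
        with open(path, "rb") as f:
            while True:
                block = f.read(CHUNK_SIZE)
                if not block:
                    break
                digest = hashlib.sha1(block).hexdigest()
                if digest not in repo_block_hashes:
                    repo_block_hashes.add(digest)          # repository will now know this block
                    payload.append(zlib.compress(block))   # only unique data crosses the wire
    return payload
```

Everything up to the compression step runs on the endpoint, which is exactly what makes the approach source-based.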

Whew!  What does all this look like in reality?  Let’s take a look at this log entry from a proof of concept we are running for a customer right now:

[Screenshot: Mirage transaction log for an endpoint’s first upload]

This is the initial upload from a client to the Mirage repository. This endpoint is running a Windows 7 base image. It is about 7,634 MB on disk (listed as the total change size). Since this is the first time this endpoint has been backed up, all of the data on the endpoint is counted in the total change size. On all subsequent backups, this number will be the size of the files that have changed since the last backup. The next statistic is the killer number: Data Transferred is 29MB! Mirage took a full backup of this system’s 7,634 MB and only sent 29MB (the unique data) over the network to the repository!

Here’s how it got there: Mirage inventoried 36,436 files on the endpoint that had changed since the last backup (all the files on the endpoint had “changed” since there was no previous backup of this endpoint). Mirage ran the hash on all of those files and found that there were 2,875 files that it had not seen before in the repository (the Unique Files number). These 2,875 files totaled 221MB (the Size after file dedupe number). Then Mirage pulled those files apart and looked for the blocks within those 2,875 files that it had not seen before. Once Mirage found those unique blocks, it whittled the 221MB of unique files down to 95MB of unique blocks (the Size after Block Dedupe number). Mirage then took the 95MB of unique blocks (which is the real uniqueness of this endpoint) and compressed it. Every step of the processing to this point has happened at the client. The last step is to send the unique data to the Mirage Server (repository): 29MB of actual data for a full backup (the Size after compression number)! This whole process took 5 minutes and 11 seconds on the client. This first backup of the endpoint takes longer because the hashing has to happen on all of the changed files (36,436 files for this backup). However, all subsequent backups from this machine will only look at the files that have changed since the last backup, because we already have a copy of the files that have not changed.
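To put those numbers side by side, the arithmetic from the log entry works out like this:

```python
# Reduction at each stage, using the numbers from the log entry above (MB).
total_change       = 7634   # total change size for this first backup
after_file_dedupe  = 221    # the 2,875 files the repository had never seen
after_block_dedupe = 95     # the unique blocks within those files
after_compression  = 29     # what actually crossed the network

print(f"after file dedupe:  {after_file_dedupe / total_change:.1%} of the data remains")   # ~2.9%
print(f"after block dedupe: {after_block_dedupe / total_change:.1%}")                      # ~1.2%
print(f"actually sent:      {after_compression / total_change:.2%}")                       # ~0.38%
```

In other words, well under half of one percent of the changed data actually crossed the network.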

Where source-based dedupe works and where it does not

Source-based dedupe works best when we have tons of endpoints with very similar OSes, apps, and data (this is why it’s perfect for desktops and laptops). Where source-based dedupe has its challenges is when the files are big and really unique. Audio and video files are like this: unless the files are copies, no two video files are alike, at all. Not all is lost if your users perform video or audio editing or just work with a lot of these files; there are ways to accommodate that as well. We would typically recommend using folder redirection or persona management to move those files to a network drive, where we would back them up with the typical methods and offload them from the endpoints. We can also exclude certain file types from being backed up at all by Mirage.

[Screenshot: Mirage upload policy showing unprotected file types and rule exceptions]

As shown above, Mirage includes an upload policy which allows you to set rules on the file types you do not want to protect from the endpoints. Some standard ones already included are media files (although, as you can see in the rule exceptions, media files in the c:\windows directory will still be backed up).
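To illustrate how a rule-plus-exception policy like that behaves, here is a small sketch. It is not Mirage’s policy engine or syntax; the extension list and the c:\windows exception simply mirror the example in the screenshot above.

```python
# Sketch of an "exclude these file types, except under these paths" upload
# rule; the extensions and the c:\windows exception mirror the example
# above, but this is not Mirage's real policy engine or syntax.
from pathlib import PureWindowsPath

EXCLUDED_EXTENSIONS = {".mp3", ".avi", ".wmv", ".mov"}   # sample media types
EXCEPTION_PREFIXES = [PureWindowsPath(r"c:\windows")]    # still protected here

def is_backed_up(path_str):
    """Return True if the file would still be uploaded to the repository."""
    path = PureWindowsPath(path_str.lower())
    if path.suffix not in EXCLUDED_EXTENSIONS:
        return True
    # Excluded file type, unless it falls under an exception path.
    return any(path.is_relative_to(prefix) for prefix in EXCEPTION_PREFIXES)

print(is_backed_up(r"C:\Users\alice\Music\song.mp3"))    # False: excluded media file
print(is_backed_up(r"C:\Windows\Media\chimes.mp3"))      # True: rule exception applies
```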

Mirage is definitely the way to go for any mobile endpoints or branch office endpoints where bandwidth limits and connectivity reliability make VDI a less-than-optimal choice for the management and recoverability of these endpoints. I don’t recommend products that don’t work as advertised. Once the light bulb kicks on and customers understand this technology, its real value shines through. Make no mistake, Mirage is not a mirage; it’s a reality, and a really good one at that.

Jul 16 2013
 

I do a lot of work with customers who want to share files between all of their users’ devices. There are a number of commercial solutions available on the market, like DropBox, Box, SkyDrive, iCloud, or Google Drive, which utilize the public cloud to provide this data storage. Unfortunately for them, the latest revelation from Edward Snowden was that Microsoft was allegedly working closely with the NSA to provide direct access to Office 365, Skype, and SkyDrive (which Microsoft has since refuted). Whether true or not, this does not create a good public relations experience for the world of public cloud storage.

Customers that I work with are always concerned with public cloud data leakage. Data leakage is the possible release of company information caused by the unavoidable loss of control over the security of the company’s data when it is stored in the public cloud. The fear is that once this data is stored in the public cloud, the customer has no control over where it is stored or who has access to it. As Edward Snowden revealed last week, it is possible that the NSA has access to files you store in the public cloud. The problem is not that the NSA has this access; the problem is that the NSA is not impervious to data leakage itself, as Mr. Snowden has shown. Even though public cloud storage companies state that your data is protected, they are still required to comply with Foreign Intelligence Surveillance Act court orders. Not exactly instilling me with a load of confidence.

So what’s a customer to do?  Enter Horizon Workspace Data and Citrix Sharefile.  Horizon Workspace Data from VMware is private cloud only and does not contain any public cloud components.  It allows customers to share files between all of their user devices (tablets, desktops, laptops, smartphones, etc.) while storing the main copy of the data on private cloud servers in your datacenter.  Citrix Sharefile can store your data in the public cloud or in on-premise storage zones.  However, even if you do use your own on-premise storage zones, Sharefile does house a directory inventory on the control plane in the public cloud.  So while the data can be stored in the private cloud, the directory listing gets shared with the public cloud.  Either way, the data itself is in your datacenter and not in the public cloud.

These two solutions (as well as a host of others) are looking more and more enticing to customers who want to provide access to their data for their users while still maintaining as much control as possible.  In the meantime, the public cloud alternatives will need to bandage their image for a while.  The bottom line is that there is no guarantee that our data is 100% private when it traverses the internet.  Maybe we should follow Russia and go back to using typewriters.  Or maybe we learn to accept the fact that this is the world we live in and that our data is never 100% secure.

Jul 11 2013
 

If you were following me on Twitter today, you saw this announced and then removed.  Well, now the bits have officially dropped.  There are some great new highly-requested features here (specifically Real-Time Audio-Video).  Here is the “What’s New” section from the release notes on the download site:

VMware Horizon View 5.2 Feature Pack 2 includes the following new features:

  • Flash URL Redirection - Customers can now use Adobe Media Server and multicast to deliver live video events in a virtual desktop infrastructure (VDI) environment. To deliver multicast live video streams within a VDI environment, the media stream should be sent directly from the media source to the endpoints, bypassing the virtual desktops. The Flash URL Redirection feature supports this capability by intercepting and redirecting the ShockWave Flash (SWF) file from the virtual desktop to the client endpoint.
  • Real-Time Audio-Video - Real-Time Audio-Video allows Horizon View users to run Skype, Webex, Google Hangouts, and other online conferencing applications on their virtual desktops. With Real-Time Audio-Video, webcam and audio devices that are connected locally to the client system are redirected to the remote desktop. This feature redirects video and audio data to the desktop with a significantly lower bandwidth than can be achieved by using USB redirection. Real-Time Audio-Video is compatible with standard conferencing applications and supports standard webcams, audio USB devices, and analog audio input.
  • Unity Touch improvements - You can now add a favorite application or file from a list of search results, and you can now use the Unity Touch sidebar to minimize a running application’s window. Requires users to connect to their desktops from VMware Horizon View Client for iOS 2.1 or later, or VMware Horizon View Client for Android 2.1 or later.
Jul 09 2013
 

Ever since it was announced at the Worldwide Developers Conference in June, Apple’s new iOS 7 has garnered a lot of attention.  One of the Apple web pages that appeared shortly after the announcement was a page listing the features of iOS 7 that will benefit businesses.  Many of the features listed were already being achieved (to some degree) by XenMobile but are now being integrated into the iOS 7 operating system.  This will inherently give a leg up to both parties, solidifying what XenMobile was attempting and accelerating the functionality Workspace can provide.  Let’s have a look at the feature categories that Apple is promoting for business users (the italicized text is referenced from Apple’s iOS 7 for Business web page; please refer to that page for full information from Apple).

Open in management: Protect corporate data by controlling which apps and accounts are used to open documents and attachments. Managed open in gives IT the ability to configure the list of apps available in the sharing panel. This keeps work documents in corporate apps and also prevents personal documents from being opened in managed apps.

This basically provides MDM controls over which applications can or cannot be used to open a given file type in iOS.  XenMobile was already doing this in the apps it controlled, and Workspace had the ability to turn this capability on or off completely for the data stored in Workspace.  Both products can benefit from this additional management and control.

Per app VPN: Apps can now be configured to automatically connect to VPN when they are launched. Per app VPN gives IT granular control over corporate network access. It ensures that data transmitted by managed apps travels through VPN — and that other data, like an employee’s personal web browsing activity, does not.

This has to be the most underrated business feature of iOS 7.  This one has the potential to be a significant game changer and possibly have more impact than most realize.