vSphere 5.5 Storage Policies

March 27, 2016

A big push from VMware in the past couple of years has been policy-based EVERYTHING. With the move towards automating every aspect of the implementation process, from storage to compute to networking, it seems that object metadata (tags) and policy associations are going to continue to play a bigger role in every tool set that VMware pushes.

In vSphere 5.5, VMware introduced “Storage Policies”. The concept is very similar to the earlier Storage Profiles available in the fat client; however, the new implementation is based on vCenter tags, is somewhat more extensible, and is ONLY available via the web client.

In a nutshell, Storage Policies allow an administrator to granularly define capabilities that can be associated with datastores. These can either be vendor supplied capabilities via VASA, or, you guessed it – vCenter tags created by an administrator. These Storage Policies can then be consumed and assigned to VM workloads, guiding placement as well as ongoing enforcement via reporting mechanisms and vCenter alarms.
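
If you prefer to poke at this from the command line, the SPBM cmdlets in PowerCLI (5.8 and later, which we will lean on later in this post) expose the same concepts. A minimal sketch, assuming an existing Connect-VIServer session; "MyPolicy" is a placeholder name for illustration:

# List every Storage Policy currently defined in vCenter
Get-SpbmStoragePolicy

# Show which datastores currently satisfy a given policy's rules
Get-SpbmCompatibleStorage -StoragePolicy (Get-SpbmStoragePolicy -Name "MyPolicy")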

This post is going to walk through the use of Storage Policies as they apply to user-created Tags. Rather than simply being a post about Storage Policies and how they can be created, we will first spend some time describing a common DR scenario that is a driver for the use of Storage Policies.

The specific use case surrounds a real-life need to classify groups of LUNs for array-based replication sets.

We will start with a quick rundown of a common array’s (3PAR) replication technology, an existing array-based DR design, and how the need for Storage Policies comes into play. This lays down a real-life scenario upon which we can then describe Storage Policy creation in depth, and how the policies can be applied to this design.

3PAR CONSISTENCY GROUPS – DR DESIGN

At my day job, we are currently relying on array-based replication via 3PAR Remote Copy technology to keep all of our customers protected via DR. This is simply a hardware-based snapshot performed on the array – no application-level quiescence is called prior to this LUN-level snapshot, so we are literally getting what I like to refer to as the “bullet in the head” crash-consistent copy for replication.

The way the CG object works in 3PAR is pretty straightforward, but the main takeaways are:

  1. The 3PAR P8400 has a limitation of up to 300 LUNs in a consistency group (CG).
  2. All member LUNs included in a CG are snapped at the same time.
  3. There is no way to schedule the CG snaps for a specific point in time – a Storage Admin can only specify the following:
    1. CG member LUNs
    2. Time interval for snaps (i.e. every 15 min, 60, 120, etc.)
  4. All member LUNs in a CG need to complete the replication of all blocks before the entire CG set gets committed at the DR destination. So if we create a very large CG, say with 250 LUNs, the timely commit of all changed blocks for each member LUN will be dependent on the last LUN to complete.

This means that in order to retain write order fidelity of a tenant’s VMs (all get snapped at the same time), all of these machines need to be located on a set of LUNs that are members of the same backend 3PAR CG.

This is a hard requirement of this DR design – all of a customer’s machines must be placed within the same 3PAR Consistency Group (CG) to ensure write order fidelity across all application tiers that make up their specific environment.

In order to track and enforce this workload placement requirement, we have decided to create a one-to-one mapping of vSphere SDRS clusters to 3PAR CGs. We also decided to limit the size of our CGs to an initial set of 30 LUNs, in order to mitigate the potential noisy neighbor factor of a single LUN with many changed blocks holding up too many other LUNs from being committed in a timely fashion.
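
Before tagging anything, it is handy to confirm the SDRS-cluster-to-LUN layout from the vSphere side. A minimal PowerCLI sketch (assuming an existing Connect-VIServer session) that dumps each datastore cluster and its member datastores:

# Print each SDRS cluster followed by the datastores (LUNs) it contains
foreach ($sdrs in Get-DatastoreCluster) {
    Write-Host "SDRS cluster: $($sdrs.Name)"
    $sdrs | Get-Datastore | Select-Object -ExpandProperty Name
}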

This mapping can best be described via the below graphic:

COMPUTE CLUSTER / SDRS CLUSTER – MAPPING

We initially toyed with the idea of all LUNs (all SDRSs/CGs) exported to all compute clusters. The driver of this was the goal of having workload mobility across all compute clusters, without having to svmotion the VMDKs between SDRSs/CGs. Re-replicating blocks due to LUN relocation is something we want to avoid at all costs. :)

After vetting this design with VMware and a couple of VCDXs, we backed away from this idea because it did not align with the best practice of tight segregation between compute clusters and the storage presentation. (Both VCDXs urged us to keep a LUN set exported to a single compute cluster ONLY.)

We ended up compromising by moving to wider compute clusters (we went from 8 to 16) and exporting the SDRS/CG LUN sets on a per-compute-cluster basis. This design mapping is best described by the graphic below:

A pretty straightforward design; however, the question remained of how we were going to control both the initial placement and the ongoing enforcement of tenant machine affinity to a single SDRS/CG group for crash consistency.

This is where Storage Policies come into the picture!

STORAGE POLICIES

In order to enforce placement into the design mappings described above, we are creating vSphere 5.5 Storage Policies that identify which backend 3PAR CG each LUN belongs to. The high-level process for creating Storage Policies is as follows.

  1. Create a vSphere Tag Category which will represent Consistency Group membership.
  2. Create vSphere Tags for each of the 3PAR Consistency Groups.
  3. Create Storage Policies for each individual CG, associating appropriate Tags from step 2.
  4. Associate Storage Policy to appropriate LUN via Tag assignment (remember SDRS clusters are set up to map to CGs, so this means applying the Storage Policy to all LUNs within an SDRS cluster).
  5. Retroactively apply the appropriate Storage Policies to all SDRS cluster member VMs.

We will look at each of these steps below.

1. VSPHERE TAG CATEGORIES

First navigate to “Tags” in the vSphere Web Client.

We are going to start by creating a new Category of Tag, called “3PAR Consistency Group”. Select “Categories” and then the icon for creating a new category.

In the “New Category” wizard, enter a name for the Category. Also select a cardinality setting of “One tag per object”, since a LUN can only ever be a member of a single backend 3PAR CG. Finally, select the type of objects that this category can be associated with; in our case we want to apply this category of tags to both Datastore objects and Datastore Cluster objects. Hit OK to create the Category.
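
For those who would rather script it, the same Category can be created with PowerCLI. A minimal sketch (the category name matches the one used above, and a cardinality of Single corresponds to the “One tag per object” setting):

# Create the tag category: one tag per object, assignable to datastores and datastore clusters
New-TagCategory -Name "3PAR Consistency Group" -Cardinality Single `
    -EntityType Datastore, DatastoreCluster `
    -Description "Identifies backend 3PAR consistency group membership"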

2. VSPHERE TAG CREATION

Now that we have our category created, let’s move on to defining the individual Tags that will map to each of the SDRS/CG clusters.

From the “Tag” landing page in the vSphere Web Client, select “Tags” and then the icon for a new Tag.

In the “New Tag” wizard, add the Tag Name, optional Description, as well as select the Category that we created in the above step. Hit OK to create the Tag.

Simple enough! I went ahead and created the other Tags needed for the four CGs that define a POD in our current design.
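
The Tags themselves can also be created in bulk via PowerCLI. A minimal sketch; the three additional tag names below are hypothetical and simply follow the naming pattern used later in this post, so substitute your own:

# Hypothetical tag names following the SDRS-cluster/CG naming pattern
$cgTagNames = "SBO3-PA03-Clus01-CG01", "SBO3-PA03-Clus02-CG02",
              "SBO3-PA03-Clus03-CG03", "SBO3-PA03-Clus04-CG04"

foreach ($name in $cgTagNames) {
    # Each tag lands in the category created in step 1
    New-Tag -Name $name -Category "3PAR Consistency Group"
}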

3. STORAGE POLICY CREATION

Now that we have our Tags created, we need to associate them with a Storage Policy.

From the home page of the Web Client, navigate to “VM Storage Policies”.

Click the icon to create a new Storage Policy.

Enter a name for the Storage Policy.

Add a “Rule-Set” for this Storage Policy. A Rule-Set contains one or more Rules. A Rule can reference either VASA-supplied attributes or user-supplied attributes (Tags), the latter being what we are going to utilize.

Pay close attention to the verbiage surrounding Rule-Sets vs Rules, and how the combinations of these act as logical AND/OR operators for satisfying Storage Policy requirements.

This IMHO is less than clear, but essentially boils down to the following:

  1. Multiple Tags in a single Rule – a Datastore matching ANY of these Tags will satisfy the Rule
  2. Multiple Rules in a single Rule-Set – a Datastore must satisfy ALL Rules to satisfy the Rule-Set
  3. Multiple Rule-Sets in a single Storage Policy – a Datastore satisfying ANY of the Rule-Sets will meet the requirements of the Storage Policy

Clear as mud? :)

It might take some playing around with these combinations to really wrap your head around it, but by combining Tags, Rules and Rule-Sets, the Storage Policy requirements can be made very granular according to business use case.

Select the specific Tag for the 3PAR CG that we wish to link to. In the example below, we have matched the Tag name to the Storage Policy name for ease of management and association between the objects, but any naming convention will work as long as it is clear for your use case.

Our needs for this Storage Policy don’t require additional Tags, Rules, or Rule Sets, so select “next” to progress in the wizard.

We haven’t associated any Datastores with our Tag yet, so the next screen will show zero matching Datastores.

Proceed to complete the wizard.

Go ahead and add the additional Storage Policies for the other 3PAR CG Tags, following the same set of steps above.
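
If you would rather script this step as well, the SPBM cmdlets in PowerCLI 5.8 can build the same tag-based policies. A minimal sketch that creates one policy per tag in the category, naming each policy after its tag:

# One Storage Policy per CG tag: a single Rule (match the tag) inside a single Rule-Set
foreach ($tag in Get-Tag -Category "3PAR Consistency Group") {
    $ruleSet = New-SpbmRuleSet -AllOfRules (New-SpbmRule -AnyOfTags $tag)
    New-SpbmStoragePolicy -Name $tag.Name -AnyOfRuleSets $ruleSet
}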

4. TAG ASSIGNMENT TO DATASTORES

Next we need to assign the Tags to the appropriate Datastores so that they can be associated with the appropriate Storage Policy. Doing this in the UI is painful for retroactively applying to many objects, so we are going to apply these via PowerCLI.

PowerCLI 5.8 includes some nice cmdlets specific to Storage Policy Based Management (SPBM), so this is the minimum version of PowerCLI required.

Per the layout described earlier, we have for simplicity mapped our SDRS clusters back to specific CGs on the 3PAR array. The names of the SDRS clusters match the names of the Tags, which in turn match the names of the associated Storage Policies. Simple, right? :)

I am going to execute the following PowerCLI one-liner to bulk-assign the Tag to every Datastore in my first SDRS cluster, which is what ties those LUNs to the matching Storage Policy.

Get-DatastoreCluster -Name SBO3-PA03-Clus01-CG01 | Get-Datastore | New-TagAssignment -Tag SBO3-PA03-Clus01-CG01

Works great! A quick double-check in the UI confirms the Tag assignment is showing up.

I repeat the same steps for my other three SDRS clusters, assigning the appropriate tag via the same method.
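
Because the SDRS cluster names and the tag names match one-to-one, the whole assignment can also be done in a single loop. A minimal sketch; it assumes every datastore cluster returned should be tagged with a tag of exactly the same name, so filter Get-DatastoreCluster with -Name if that is not true in your environment:

# Tag every datastore in each SDRS cluster with the tag that matches the cluster name
foreach ($sdrs in Get-DatastoreCluster) {
    $sdrs | Get-Datastore | New-TagAssignment -Tag $sdrs.Name
}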

5. ASSIGN STORAGE POLICIES TO GUEST VIRTUAL MACHINES

Finally, where the rubber meets the road…

We have created a tag category and tags representing each 3PAR CG/SDRS cluster, associated those tags with their respective Storage Policies, and assigned the tags to the appropriate Datastores. It is now time to assign the Storage Policies themselves to the actual guest VMs that need to have their placement tracked and policed.

Again we are going to leverage the SPBM cmdlets in PowerCLI 5.8 for this task. All existing workloads are already placed in the correct SDRS clusters, so we don’t have to svmotion anything around to correct VMDK placement. I ran the following one-liner to assign the Storage Policy to my Cluster 1 VM guests.

Get-DatastoreCluster -Name SBO3-PA03-Clus01-CG01 | Get-VM | Set-SpbmEntityConfiguration -StoragePolicy SBO3-PA03-Clus01-CG01

A quick spot check on a few VMs shows that the Storage Policy is correctly applied, and that the VMs are compliant!
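
Instead of spot checking in the UI, the SPBM cmdlets can report compliance across all of the VMs at once. A minimal sketch; the ComplianceStatus and Entity property names are from memory of the PowerCLI 5.8 SPBM output, so verify them against your version with Get-Member:

# List any VM whose current storage placement does not satisfy its assigned policy
Get-DatastoreCluster | Get-VM | Get-SpbmEntityConfiguration |
    Where-Object { $_.ComplianceStatus -ne "compliant" } |
    Select-Object Entity, StoragePolicy, ComplianceStatus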

CONCLUSION

By combining user-created tags (as well as VASA-provided capabilities) into varying combinations of Rules and Rule-Sets, we are able to get very creative in enforcing storage placement via policy.

Need a VM to be on a specific set of LUNs that are mapped to a backend Consistency Group? No problem. Maybe in addition we have a subset of these workloads that also require placement on dedicated SSD LUNs? Simply copy the Storage Policy above to a new one, and add a second Rule (either tag- or VASA-based) which requires SSD. Only LUNs tagged with both our CG tag created above and marked via VASA (or another tag) as SSD will satisfy the new Storage Policy requirement. These VM workloads can now be assigned this second Storage Policy, and can be placed/policed accordingly.
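
As a concrete sketch of that combined requirement: the SSD tag, its category, and the new policy name below are all hypothetical placeholders, standing in for either a second tag or a VASA-provided SSD capability:

# Both Rules live in ONE Rule-Set, so a datastore must satisfy BOTH to be compliant
$cgTag  = Get-Tag -Name "SBO3-PA03-Clus01-CG01" -Category "3PAR Consistency Group"
$ssdTag = Get-Tag -Name "SSD" -Category "Disk Type"   # hypothetical SSD tag/category

$ruleSet = New-SpbmRuleSet -AllOfRules (New-SpbmRule -AnyOfTags $cgTag),
                                       (New-SpbmRule -AnyOfTags $ssdTag)

New-SpbmStoragePolicy -Name "SBO3-PA03-Clus01-CG01-SSD" -AnyOfRuleSets $ruleSet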

In a future post we will redo the Storage Policies, hopefully leveraging VASA, but I wanted to specifically address the user-created tag feature of Storage Policies in this post to highlight how extensible this tool can be.
