EMC Symmetrix VMAX 40K testing on vSphere 5.0

We are in the process of migrating from HDS to EMC storage and I have been testing our Symmetrix VMAX 40K on vSphere 5.0. This has been an interesting journey and has highlighted that, although the concepts are similar (i.e. block storage with FC connectivity), storage arrays differ and need careful implementation if you want to get the best performance from your infrastructure.

This post will cover my testing with this specific storage array and will hopefully prompt some feedback on other implementations. Perhaps it will also help identify any obvious areas that I have missed and need to address. Either way, some feedback would be awesome.

In terms of storage presentation to the hosts (HP DL380p Gen8 servers), I used 2x single-port QLE2560 HBAs, each connected at 4Gb to Brocade FC switches, with two paths to each LUN per HBA (four in total). The LUNs were configured as striped METAs, each 2TB in size.
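If you want to sanity-check the pathing on a host, something like the following should list all four paths for a given device (using the example NAA that appears later in this post);

# START #

~ # esxcli storage core path list -d naa.60000970000295700663533030383446

# END #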

For my performance tests, I ran a series of iometer access specifications and graphed the results in Excel for an easy side by side comparison.

Iometer Access Specifications;

Access Specification        Transfer Size   Read / Write   Random / Sequential   Aligned on
Max Throughput-100%Read     32K             100% / 0%      0% / 100%             32K
RealLife-60%Rand-65%Read    8K              65% / 35%      60% / 40%             8K
Max Throughput-50%Read      32K             50% / 50%      0% / 100%             32K
Random-8k-70%Read           8K              70% / 30%      100% / 0%             8K

All tests were run in 30-second intervals, increasing the number of outstanding IO requests using exponential stepping (powers of 2) up to a maximum queue depth of 512 outstanding IOs, i.e. 1, 2, 4, 8, 16, 32, 64, 128, 256 and 512.

The workload was initially placed on a single ESXi host with a single worker thread (to get a baseline), and then scaled out to multiple ESXi hosts with multiple worker threads. The guest VMs were not optimized in any way and each had a single LSI Logic SAS controller (Windows Server 2008 R2 Standard Edition).

For my first baseline, I used standard NMP with all the defaults;

# START #

~ # esxcli storage nmp device list -d naa.60000970000295700663533030383446
naa.60000970000295700663533030383446
   Device Display Name: EMC Fibre Channel Disk (naa.60000970000295700663533030383446)
   Storage Array Type: VMW_SATP_SYMM
   Storage Array Type Device Config: SATP VMW_SATP_SYMM does not support device configuration.
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=3: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba3:C0:T3:L1, vmhba3:C0:T4:L1, vmhba2:C0:T4:L1, vmhba2:C0:T3:L1

~ # esxcli storage nmp psp roundrobin deviceconfig get -d naa.60000970000295700663533030383446
   Byte Limit: 10485760
   Device: naa.60000970000295700663533030383446
   IOOperation Limit: 1000
   Limit Type: Default
   Use Active Unoptimized Paths: false

# END #

Here are the results;

NMP Results (policy=rr,iops=1000);

[Chart: EMC_SYMM_VMAX_40K_NMP_RR_IOPS_1000]

For my next test, I changed the IO operations limit from the default 1000 to 1, as recommended in the EMC document (see pages 82-83);

# START #

~ # esxcli storage nmp satp rule add -s "VMW_SATP_SYMM" -V "EMC" -M "SYMMETRIX" -P "VMW_PSP_RR" -O "iops=1"

~ # esxcli storage nmp satp rule list -s VMW_SATP_SYMM
Name           Device  Vendor  Model      Driver  Transport  Options  Rule Group  Claim Options  Default PSP  PSP Options  Description
-------------  ------  ------  ---------  ------  ---------  -------  ----------  -------------  -----------  -----------  -------------
VMW_SATP_SYMM          EMC     SYMMETRIX                              user                       VMW_PSP_RR   iops=1
VMW_SATP_SYMM          EMC     SYMMETRIX                              system                                               EMC Symmetrix

# END #
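As I understand it, the new claim rule only takes effect when devices are claimed (or reclaimed), so devices that are already claimed keep their existing settings until then. If you want to push the iops=1 setting onto an already-claimed device without waiting for a reclaim, something along these lines should do it (a sketch using the same device as above);

# START #

~ # esxcli storage nmp psp roundrobin deviceconfig set --device=naa.60000970000295700663533030383446 --type=iops --iops=1

# END #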

I rebooted my hosts at this point, and confirmed that the device had been claimed correctly;

# START #

~ # esxcli storage nmp device list 
naa.60000970000295700663533030383446
   Device Display Name: EMC Fibre Channel Disk (naa.60000970000295700663533030383446)
   Storage Array Type: VMW_SATP_SYMM
   Storage Array Type Device Config: SATP VMW_SATP_SYMM does not support device configuration.
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1,bytes=10485760,useANO=0;lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba2:C0:T3:L1, vmhba3:C0:T1:L1, vmhba2:C0:T1:L1, vmhba3:C0:T3:L1

# END #

Here are the new performance results using the {policy=rr,iops=1} device configuration.

NMP Results (policy=rr,iops=1);

[Chart: EMC_SYMM_VMAX_40K_NMP_RR_IOPS_1]

WOW, that single change made a dramatic difference with a single worker thread! Throughput increased from just over 300 MB/s to 800 MB/s, operations increased 4X from 10,000 IOPS to 40,000 IOPS, and average latency dropped from 70ms to 25ms at the maximum queue depth of 512.

The results were similar in the scale-out tests, with the same observation that guest CPU utilisation increased in line with the additional workload it was able to process.

I then implemented EMC PowerPath/VE to see how it compared to NMP in the same Iometer tests. My assumption was that PowerPath/VE would far outperform NMP and that the cost would be easily justified by the performance gains.
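As a quick sanity check after installing PowerPath/VE, I would expect the Multipath Plugin field for the device to show PowerPath rather than NMP. Something like the following should confirm it (a sketch rather than captured output);

# START #

~ # esxcli storage core device list -d naa.60000970000295700663533030383446 | grep "Multipath Plugin"

# END #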

Interestingly, the results were similar to NMP with the IO operations limit set to 1, which has made it hard to sell to management. I understand the benefits that PowerPath/VE offers over NMP, but perhaps these will only become apparent when we ramp up the workload and need the extra intelligence behind the path selection.

PowerPath/VE Results;

[Chart: EMC_SYMM_VMAX_40K_MPP_POWERPATH_VE]

These are obviously very simple tests, but it's incredible how much performance can change by simply reading the vendor recommendations and testing them in your own environment.
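Finally, for anyone wanting to roll the iops=1 setting out to every Symmetrix device that is already claimed on a host, the deviceconfig set command shown earlier can be wrapped in a simple loop. This is a sketch I have not run verbatim; the naa.60000970 prefix matches the Symmetrix devices in my environment, so adjust it to suit yours;

# START #

~ # for dev in $(esxcli storage nmp device list | grep '^naa.60000970'); do esxcli storage nmp psp roundrobin deviceconfig set --device=$dev --type=iops --iops=1; done

# END #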


Author: Jon Munday

An independent IT contractor with a strong focus on VMware virtualisation and infrastructure operations. I am inspired by technology, not afraid to question the status quo and balance my professional commitments with entertaining my three awesome kids (Ashton, Oliver and Lara).

5 thoughts on “EMC Symmetrix VMAX 40K testing on vSphere 5.0”

    1. Hi Ryan,

      These results are using 15K FC disks with no FAST policy enabled. We have 660 disks in total: 66 EFD, 520 FC, and 74 SATA. The METAs are striped as opposed to concatenated.

      I did run identical tests on these tiers as well, so I can post some results if you would like to see them. From memory, the SATA performance was awful, and the EFD results were not too dissimilar to the above FC results (throughput and IOPS). The maximum latency, however, was much lower with the EFD tier.

      Based on the similar throughput and IOPS results with FC and EFD, I assume that the bottleneck is the HBAs and 4Gb ports (2x 4Gb ports works out to roughly 2x ~400 MB/s ≈ 800 MB/s of usable bandwidth, which lines up with the throughput ceiling above), and I could probably squeeze much more out of the disks if I had 8Gb or 16Gb ports.

      Cheers,
      Jon

  1. Excellent info Jon. You’ve pretty much confirmed my thoughts and saved me all the tests I was thinking of doing. I’d love to see the tier test results though. Greatly appreciated.
    If you ran the tests in succession, would the later test numbers be skewed, as the data might be resident in cache (assuming the same data is used)? I'm not familiar with Iometer, hence the question. Thank you.

    1. Hi Ken,

      When you say “tier test results”, are you referring to the disk tiers (i.e. EFD, FC and SATA)? I do have the results for each of these tiers and can send them on to you.

      From memory, I ruled SATA out as simply not good enough, and interestingly, there was little difference in IOPS and throughput between EFD and FC. The maximum latency, however, was (as expected) lower with the EFD disks. I suspect that I was hitting the limitations of the HBAs and 4Gb FC ports rather than stressing the array's capabilities.

      The results (covering hundreds of runs over several weeks) were too consistent to be coming directly from cache on successive runs. I did, however, see this behaviour on our legacy HDS array, where the 2nd and 3rd runs significantly outperformed the initial run. Having seen this before, I made sure that my test volumes were 100GB (i.e. larger than the available cache) and tested across multiple hosts and VMs.

      Performance has changed significantly for us now that the array is fully loaded, so it’s going to be interesting to re-test this over the coming weeks. Perhaps the business case for PowerPath/VE only stacks up on a heavily utilised array where you need the extra smarts?

      Cheers,
      Jon

  2. Jon,
    Yes, I was referring to the disk tier tests you mentioned. Please send them, thank you.
    Your numbers are definitely valid then, given the way the tests were run. We're also an ex-HDS shop, hence the question, as we've also seen how powerful its caching can be.
    8Gb FC didn't improve much for us, as the bottlenecks are usually elsewhere. The only test that would benefit would be the max read throughput test. With dual 8Gb HBAs on the host, we were still only seeing 800MB/s max and similar numbers as with the 4Gb HBAs. 4x 4Gb will likely buy you more performance if needed. The more paths the better, basically.
    Our PowerPath vs. native MPIO (Win2012) tests (SQLIO) showed similar results to yours. PowerPath actually performed much worse for us, cementing our decision to use native MPIO and VMware MP instead. It might buy you better resiliency, maybe. We've been fine with the native stuff so far.
    Would love to see your new test results when they’re available. Thanks again.
    Cheers,
    Ken
