cvdiskperf.exe performance stats

Last post 03-22-2017, 5:10 PM by seabird. 2 replies.
  • cvdiskperf.exe performance stats
    Posted: 03-20-2017, 3:28 PM

    Would anyone like to share the performance of their attached JBOD storage? I have been in meetings with customers and they are hesitant about using JBOD storage. Many are using expensive NAS solutions when they could save some money by going with JBOD.

    I looked around the web for performance numbers and could not find any. Let's publish some numbers here from users who have good backup SLAs with JBOD disk. I think others would find this useful.

    Thank you.

  • Re: cvdiskperf.exe performance stats
    Posted: 03-20-2017, 3:37 PM

    I have not tried a NAS connection for disk storage, but storage on my EqualLogic iSCSI seemed fine for disk-based backups, though it was terrible when making an aux copy to tape.

    Speed will vary greatly depending on whether it is writing new blocks or data it has already identified as present in the dedupe store. I would also expect different Commvault agents to behave differently.

  • Re: cvdiskperf.exe performance stats
    Posted: 03-22-2017, 5:10 PM

    I'll throw some stats out, and if the moderators want to yank them, so be it. Basically we have an Isilon on one MA and a couple of HPE D3700 arrays on another (equally powered servers). The D3700s are loaded up with 900GB 10K drives hooked to a P441 controller. Nothing fancy: six logical RAID 5 arrays seen by the MA. To be clear, the MA with the DAS is running Windows, the other Linux:

    DAS:

    Creating Report CvDiskPerf.txt ...
     Creating folder F:\Program Files\CommVault\DiskLib01\DISK_READ_TEST
     Creating files..
     Writing files..
     Reading files..
     Deleting files..
    DiskPerf Version        : 2.1
    Path Used               : F:\Program Files\CommVault\DiskLib01
    Performance type        : Create new
    Read-Write type         : RANDOM
    Block Size              : 65536
    Block Count             : 100000
    File Count              : 2
    Thread Count            : 4
    Total Bytes Read        : 13107200000
    Total Bytes Written     : 13107200000
    Total Bytes Deleted     : 13107200000
    ----
    Time Taken to Create(S)     : 36.54
    Time Taken to Write&flush(S): 36.43
    Time Taken to Read(S)       : 220.91
    Time Taken to Delete(S)     : 0.50
    ----
    Per thread Throughput Create(GB/H)     : 300.70
    Per thread Throughput Write(GB/H)      : 301.54
    Per thread Throughput Read(GB/H)       : 49.73
    Per thread Throughput Delete(GB/H)     : 21927.18
    ----
    Throughput Create(GB/H)     : 1202.79
    Throughput Write(GB/H)      : 1206.16
    Throughput Read(GB/H)       : 198.93
    Throughput Delete(GB/H)     : 87708.71
    Stat Time(S)            : 0.10

    The NL400 has about 3x the spinning disks (no flash); here is a similar test down one path:

    DiskPerf 2.1
    Creating Report CvDiskPerf.txt ...
     Creating folder /nas/CommVaultDiskLib/mp02/DISK_READ_TEST
     Creating files..
     Writing files..
     Reading files..
     Deleting files..
    DiskPerf Version        : 2.1
    Path Used               : /nas/CommVaultDiskLib/mp02
    Performance type        : Create new
    Read-Write type         : RANDOM
    Block Size              : 65536
    Block Count             : 100000
    File Count              : 2
    Thread Count            : 4
    Total Bytes Read        : 13107200000
    Total Bytes Written     : 13107200000
    Total Bytes Deleted     : 13107200000
    ----
    Time Taken to Create(S)     : 24.93
    Time Taken to Write&flush(S): 15.92
    Time Taken to Read(S)       : 30.64
    Time Taken to Delete(S)     : 0.50
    ----
    Per thread Throughput Create(GB/H)     : 440.66
    Per thread Throughput Write(GB/H)      : 689.93
    Per thread Throughput Read(GB/H)       : 358.53
    Per thread Throughput Delete(GB/H)     : 21941.85
    ----
    Throughput Create(GB/H)     : 1762.64
    Throughput Write(GB/H)      : 2759.72
    Throughput Read(GB/H)       : 1434.13
    Throughput Delete(GB/H)     : 87767.40

     

    So, sure, the fancy NAS appears faster, but the DAS-based MA is no slouch either. It backs up 700+ VMs daily, along with some NDMP and physical boxes, and synthetic fulls are plenty fast on both. Most people can guess the cost difference between the two; each has its place. I can provide more info if folks want, but I agree: if people post their equivalent tests and configs, it would help us all out. Personally, I find configuring the HPE arrays a snap (as is ongoing maintenance), with far less complexity in my mind.
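    For anyone comparing their own runs against these, the report figures are easy to sanity-check: Total Bytes = Block Size × Block Count × File Count, and the tool's GB/H figures work out to bytes ÷ 1024³ ÷ seconds × 3600, with the per-thread number being the total divided by Thread Count. A small sketch using the DAS run's values (the exact rounding inside cvdiskperf is my assumption; the results land within a fraction of a GB/H of the report):

```python
# Sanity-check the arithmetic in the CvDiskPerf report above (DAS run).
# Assumes GB means 1024**3 bytes; small rounding differences vs. the
# tool's own output are expected.
BLOCK_SIZE = 65536
BLOCK_COUNT = 100_000
FILE_COUNT = 2
THREADS = 4

# Matches "Total Bytes Written : 13107200000" in the report.
total_bytes = BLOCK_SIZE * BLOCK_COUNT * FILE_COUNT

def gb_per_hour(nbytes: int, secs: float) -> float:
    """Convert a byte count and elapsed seconds to GB/H."""
    return nbytes / 1024**3 / secs * 3600

write_tp = gb_per_hour(total_bytes, 36.43)   # report: 1206.16 GB/H
read_tp = gb_per_hour(total_bytes, 220.91)   # report: 198.93 GB/H

print(f"total bytes : {total_bytes}")
print(f"write GB/H  : {write_tp:.2f}  (per thread {write_tp / THREADS:.2f})")
print(f"read GB/H   : {read_tp:.2f}  (per thread {read_tp / THREADS:.2f})")
```

    The same arithmetic applied to the NL400 run reproduces its figures too, so posted reports can be cross-checked before comparing hardware.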
