VMware restore speeds

Last post 10-12-2020, 12:11 PM by Liam. 9 replies.
  • VMware restore speeds
    Posted: 09-17-2020, 12:42 PM

    Hi guys.

     

    What speeds (GB/h) do you see when you do a VMware restore?

     

    I have a setup with HyperScale servers on a 10/40 GbE network, with VMware datastores on a mix of flash and spindles. I rarely see speeds over 150 GB/h, and I feel that this is slow.
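
    To put that number in perspective: at 150 GB/h a single 1 TB VM takes roughly seven hours to come back (1024 GB ÷ 150 GB/h ≈ 6.8 h), which is why it feels slow for this kind of hardware.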

     

    Regards

    -Anders

  • Re: VMware restore speeds
    Posted: 09-18-2020, 11:32 PM

    Out of curiosity, are you seeing those speeds via HotAdd or NBD?

  • Re: VMware restore speeds
    Posted: 09-19-2020, 4:51 PM

    We do restores via NBD and SAN.

  • Re: VMware restore speeds
    Posted: 09-21-2020, 3:46 AM

    Hi Anders

    Are the HyperScale Nodes also the Proxy, or do you have a separate Proxy?

    On the MediaAgent (in this case the HyperScale Node) there is a CVPerfMgrRestore.log. After running a test restore, can you please share it? That will provide more insight into where the bottleneck is coming from (i.e. the read from HyperScale, the network to the Proxy, or the write speed to the destination).

    Regards

    Winston 

  • Re: VMware restore speeds
    Posted: 09-21-2020, 3:58 AM

    Hi Winston.

     

    The HyperScale servers are also the proxy.

     

    Here is the output from CVPerfMgrRestore.log (it seems the issue is the network?):

     

      Perf-Counter                                                                             Time(seconds)              Size
    |*58*|*Perf*|7604540| ------------------------------------------------------------------------------------------------------------------------------------------------
    |*58*|*Perf*|7604540|
    |*58*|*Perf*|7604540| Media Agent
    |*58*|*Perf*|7604540|  |_Media Agent[ncop-cmvp-hyp31.prod.cmv.ncop.nchosting.dk].................................         -
    |*58*|*Perf*|7604540|  |_control channel Idle....................................................................         6                            [Samples - 7] [Avg - 0.857143]
    |*58*|*Perf*|7604540|  |_Restore Seek............................................................................        24                            [Samples - 1000] [Avg - 0.024000]
    |*58*|*Perf*|7604540|  |_Buffer allocation.......................................................................     33251             1515219242376  [1411.16 GB]  [Samples - 1000] [Avg - 33.251000] [152.78 GBPH]
    |*58*|*Perf*|7604540|  |_Restore READ............................................................................      2133              756195149093  [704.26 GB]  [Samples - 1000] [Avg - 2.133000] [1188.63 GBPH]
    |*58*|*Perf*|7604540|  |_Restore Statistics......................................................................         -
    |*58*|*Perf*|7604540|    |_[TagData [21193326]]..................................................................         -
    |*58*|*Perf*|7604540|    |_[TagDataSize [753769702842]]..........................................................         -
    |*58*|*Perf*|7604540|
    |*58*|*Perf*|7604540| Reader Pipeline Modules[MediaAgent]
    |*58*|*Perf*|7604540|  |_SDT: Receive Data.......................................................................      2257              757124222003  [705.13 GB]  [Samples - 23134608] [Avg - 0.000098] [1124.70 GBPH]
    |*58*|*Perf*|7604540|  |_SDT-Head: CRC32 update..................................................................       973              757124164371  [705.13 GB]  [Samples - 23134607] [Avg - 0.000042] [2608.90 GBPH]
    |*58*|*Perf*|7604540|  |_SDT-Head: Network transfer..............................................................     35606              757121935431  [705.12 GB]  [Samples - 23134539] [Avg - 0.001539] [71.29 GBPH]
    |*58*|*Perf*|7604540|
    |*58*|*Perf*|7604540| Writer Pipeline Modules[Client]
    |*58*|*Perf*|7604540|  |_SDT-Tail: Wait to receive data from source..............................................      1331              757124222235  [705.13 GB]  [Samples - 23134609] [Avg - 0.000058] [1907.18 GBPH]
    |*58*|*Perf*|7604540|  |_SDT-Tail: Decryption....................................................................       118              757124164603  [705.13 GB]  [Samples - 23134608] [Avg - 0.000005] [21512.34 GBPH]
    |*58*|*Perf*|7604540|  |_SDT-Tail: Uncompression.................................................................       716              757151251950  [705.15 GB]  [Samples - 23134608] [Avg - 0.000031] [3545.46 GBPH]
    |*58*|*Perf*|7604540|  |_SDT-Tail: Writer Tasks..................................................................         -
    |*58*|*Perf*|7604540|  |_CVA Read idle time......................................................................     35740
    |*58*|*Perf*|7604540|  |_CVA Read idle time......................................................................     35740
    |*58*|*Perf*|7604540|
    |*58*|*Perf*|7604540| ----------------------------------------------------------------------------------------------------

    Regards

    -Anders

  • Re: VMware restore speeds
    Posted: 09-24-2020, 4:20 AM

    Hi Anders

    After taking a look at the CVPerfMgrRestore.log, the network between the Proxy and the MediaAgent looks like the bottleneck:

    |*58*|*Perf*|7604540|  |_SDT-Head: Network transfer..............................................................     35606              757121935431  [705.12 GB]  [Samples - 23134539] [Avg - 0.001539] [71.29 GBPH]
    |*58*|*Perf*|7604540|
    |*58*|*Perf*|7604540| Writer Pipeline Modules[Client]
    |*58*|*Perf*|7604540|  |_SDT-Tail: Wait to receive data from source..............................................      1331              757124222235  [705.13 GB]  [Samples - 23134609] [Avg - 0.000058] [1907.18 GBPH]

    You can see that the slowest component is the Reader transferring data over the network to the Writer.

    It might be worthwhile to check the network between the two components.
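
    If you want to sanity-check those figures yourself, here is a rough sketch (plain Python I put together against the log excerpt above, not a Commvault utility) that pulls the seconds and byte counts out of the |_ counter lines and recomputes GB/h, so the slowest stage stands out. The Network transfer row's 757121935431 bytes over 35606 seconds works out to about 71.3 GB/h, matching the [71.29 GBPH] figure above.

    import re

    # Matches a counter line from the excerpt above: name, padding dots, elapsed seconds, byte count,
    # e.g. "|_SDT-Head: Network transfer....   35606   757121935431  [705.12 GB] ..."
    PERF_LINE = re.compile(r"\|_(?P<name>[^.]+)\.{2,}\s+(?P<secs>\d+)\s+(?P<bytes>\d+)")

    def slowest_stages(log_text, top=3):
        """Return the counters with the lowest throughput, in GB (1024^3 bytes) per hour."""
        stages = []
        for line in log_text.splitlines():
            m = PERF_LINE.search(line)
            if not m or int(m.group("secs")) == 0:
                continue
            gb = int(m.group("bytes")) / 1024 ** 3
            gb_per_hour = gb / (int(m.group("secs")) / 3600.0)
            stages.append((m.group("name").strip(), gb_per_hour))
        return sorted(stages, key=lambda s: s[1])[:top]

    # Fed the excerpt above, the lowest figure is "SDT-Head: Network transfer" at ~71.3 GB/h,
    # while the read from disk ("Restore READ") manages ~1188 GB/h.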

    Regards

    Winston 

  • Re: VMware restore speeds
    Posted: 10-02-2020, 3:47 PM

    I typically see restores around 300 GB/h. This is from an FC disk library via SAN transport mode over FC.

    With our LiveSync restores I see even higher speeds, over 700 GB/h, but those are a bit different since they are essentially incremental restores.



  • Re: VMware restore speeds
    Posted: 10-09-2020, 3:24 PM

    How did you get the storage guys to allow you the access needed to restore using the SAN transport method? My only option is NBD, and the restore speeds are really slow. The LUNs are presented to my media agents, but I do not have SAN write access to them. I think it's best practice not to have SAN write access. I'd love it if my storage guys would give me a dedicated LUN with SAN write access. We do not have a virtual lab environment either, although I'd sure love to set one up.

  • Re: VMware restore speeds
    Posted: 10-09-2020, 5:14 PM

    I am lucky to be on the Backup and Storage team. In production we have one VMware datastore that both the media agents and the ESX hosts can access. We run the restores to it. Then, if they like the VM, they can vMotion it back to a production datastore. In our DR site, 90% of the datastores are visible to the media agents, since that is what they do every day.

     Good luck!

  • Re: VMware restore speeds
    Posted: 10-12-2020, 12:11 PM

    The LUN is mapped as a volume in the operating system, and if it is set to read-only, the restore will fail. However, if the LUN is not read-only, anyone logging into the server can format the volume and/or navigate and delete data. Commvault recommends leaving the SAN LUNs read-only unless a restore is being performed, or designating a LUN with read/write access and directing any restores there. The restored machine can then be vMotioned to another location as needed.
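
    As a rough illustration of that practice on a Windows MediaAgent, here is a small sketch of my own (not a Commvault utility; the disk number is a placeholder for whatever number the designated restore LUN shows up as in Disk Management, and it needs an elevated prompt) that flips the diskpart read-only attribute off for the restore window and back on afterwards:

    import os
    import subprocess
    import tempfile

    RESTORE_LUN_DISK = 2  # placeholder: disk number of the designated restore LUN

    def _run_diskpart(*commands):
        """Write a short diskpart script (select the LUN, apply the change) and run it."""
        script = "\n".join([f"select disk {RESTORE_LUN_DISK}", *commands, "exit"]) + "\n"
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write(script)
            path = f.name
        try:
            subprocess.run(["diskpart", "/s", path], check=True)
        finally:
            os.remove(path)

    def open_restore_window():
        # Allow writes to the LUN only while a restore is actually running.
        _run_diskpart("attributes disk clear readonly")

    def close_restore_window():
        # Put the LUN back to read-only so nobody can format the volume or delete data.
        _run_diskpart("attributes disk set readonly")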
