Best practice for configuring a disk library

Last post 12-20-2018, 5:02 PM by Wwong. 7 replies.
  • Best practice for configuring a disk library
    Posted: 12-19-2018, 1:30 PM

    Hello everyone,

    I'm moving to a new CommVault server and have a question about configuring a disk library.  My old server was configured by someone else and had the disk split up into 4.5TB volumes which were mounted on 15 drive letters (F: through T:).  This left only six more letters (U: through Z:) should we need additional capacity.

    To allow for future expansion, I was thinking of setting up the new server with 18TB volumes - four times larger than the volumes on the existing server - so that fewer drive letters are used and future expansion is easier.

    The only document I could find about best practices is: http://documentation.commvault.com/commvault/v11/article?p=9420.htm which says:

    Make sure that each mount path contains at least 2 GB of free space when it is created, and that the total storage space in the disk library is sufficient to hold the maximum amount of data that will be stored in the library at any time.
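
    As a side note, a quick way to sanity-check that free-space requirement across every mount path is a small script. Here is a rough sketch in Python (the F: through T: drive letters are just my current layout, and the 2 GB threshold comes from the passage above):

        import shutil

        # Mount-path drive letters on my current server (F: through T:).
        mount_paths = [f"{chr(c)}:\\" for c in range(ord("F"), ord("T") + 1)]

        for path in mount_paths:
            usage = shutil.disk_usage(path)  # raises FileNotFoundError if the drive is absent
            free_gb = usage.free / 1024**3
            status = "OK" if free_gb >= 2 else "BELOW 2 GB MINIMUM"
            print(f"{path} free: {free_gb:,.1f} GB  {status}")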

    My question is:  Is there an optimal configuration for splitting up a disk library (HP MSA in my case)?  Is there an optimal number of drives for the disk library? 

    Any suggestions appreciated.

    Ken

  • Re: Best practice for configuring a disk library
    Posted: 12-19-2018, 2:00 PM

    Hi Ken,

    When planning how best to break a magnetic library down into some number of mount paths, you will want to review and understand the requirements for your data streams. The answer usually depends on what work needs to be done to meet the business objectives; there is not really a one-size-fits-all answer.

    See Device Streams here https://documentation.commvault.com/commvault/v11/article?p=10969.htm

  • Re: Best practice for configuring a disk library
    Posted: 12-19-2018, 4:30 PM

    OK, here's my thought process:  I have four tiers of backups:

    • Tier 1 - production critical
    • Tier 2 - Exchange & SharePoint
    • Tier 3 - Other production
    • Tier 4 - Development & test

    Each storage policy has:

    Device Streams=50

    My Media agent has:

    Max parallel Data transfer operations=100

    The mount paths on my current media agent have:

    Mount Path Allocation=Max allowable writers

    I don't know what the value is for "Max allowable writers" so I'll take a guess and say that it is 10. 

    If all four tiers of backups are running, I could be allocating 200 device streams.  If I have 15 mount points times 10 writers, then the disk writers max out at 150.  Finally, the media agent is set to 100 max parallel data transfer operations, so this is my limiting factor.

    On the new hardware, if I drop to 4 mount points, I've effectively dropped the max number of writers to 40 and introduced a bottleneck.  It seems I need to keep at least 10 mount points to keep the number of writers in line with the configuration of the Media Agent.

    This math is dependent on the maximum number of writers per mount path actually being equal to 10.  If it is only 5 then any reduction in the number of mount points will decrease my throughput.  Is there a way to determine how many writers per mount point I am actually getting?
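
    To make the arithmetic above concrete, here is the back-of-the-envelope math as a small Python sketch (the 10 writers per mount path is still just my guess):

        # Back-of-the-envelope stream math; writers_per_mount_path = 10 is a guess.
        storage_policies = 4            # my four backup tiers
        device_streams_per_policy = 50
        ma_max_parallel_transfers = 100
        writers_per_mount_path = 10     # unknown -- guessed value

        def effective_cap(mount_paths):
            demand = storage_policies * device_streams_per_policy  # 200 if all tiers run
            writer_cap = mount_paths * writers_per_mount_path
            return min(demand, writer_cap, ma_max_parallel_transfers)

        for paths in (15, 10, 4):
            print(f"{paths} mount paths -> effective cap: {effective_cap(paths)} streams")

        # 15 mount paths -> 100 (media agent is the limit)
        # 10 mount paths -> 100 (writers and media agent tie)
        #  4 mount paths -> 40  (writers become the bottleneck)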

    Also, my backups are 79% Windows File System, 19% virtual servers, and 2% Exchange.  I don't think this impacts the allocation but thought I'd mention it.

    Ken

  • Re: Best practice for configuring a disk library
    Posted: 12-19-2018, 8:09 PM

    Hi Ken 

    From a CommVault perspective, the setting "Maximum Allowed Writers" means that there is no limit on how many streams will be used when committing writes to the mount path.

    When looking into the maximum number of writers, we need to understand the number of open requests to the NTFS filesystem (assuming these mount paths are formatted as such), and confirm at the backend storage level how many requests the volume/LUN can handle.

    The bottleneck when committing writes to the mount path could be at the filesystem level or at the storage level.

    In most cases, the reason we create multiple mount paths is so we can distribute the load evenly and avoid a single hot spot on the backend storage. (In most cases, backend storage is configured with a storage pool made up of multiple RAID groups and disks, and a volume is carved out of this storage pool.)

    Note - when considering the load performance of storage, it is also advisable to use thick LUNs rather than thin ones (so we do not have the overhead of allocating data slices on demand).

    Hopefully the above provides more relevant information to assist you in configuring your environment.

    Thank you 

    Winston 

  • Re: Best practice for configuring a disk library
    Posted: 12-20-2018, 10:10 AM

    Thanks for the reply.  My plan right now is to split each of the two HP MSA disk storage devices into 5 LUNs, which will then connect to a total of 10 drive letters on the CommVault server.  This looks like a reasonable balance between spreading the IO among several operating system queues and still leaving room for future growth should we need additional space in the next 5 years.
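
    Running my earlier sketch against this layout (again assuming 10 writers per mount path, which is still unverified), 2 MSAs x 5 LUNs gives 10 mount paths and a writer cap of 100, which lines up with the media agent's 100 parallel data transfer operations:

        msa_devices = 2
        luns_per_msa = 5
        writers_per_mount_path = 10      # still my unverified guess
        ma_max_parallel_transfers = 100

        mount_paths = msa_devices * luns_per_msa             # 10 drive letters in total
        writer_cap = mount_paths * writers_per_mount_path    # 100 writers
        print(f"{mount_paths} mount paths x {writers_per_mount_path} writers = {writer_cap}")
        print("Writer cap matches MA limit:", writer_cap >= ma_max_parallel_transfers)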

    Also, I'll reach out to HP to see if they have any recommendations just to make sure I've covered all my bases.

    I appreciate your help.

    Ken

  • Re: Best practice for configuring a disk library
    Posted: 12-20-2018, 1:20 PM

    Hi Ken,

    I would be cautious about splitting drives down to something specific. If your critical tier runs out of space, can you grow that drive from somewhere else? My experience has been to add them as large disk pools. Currently I have them added as 16TB drives on the MA, or over a CIFS connection they are 100TB each; those could be bigger since they are not mounted on the MA. By leaving them larger, you also have more spindles in the group, so they are not so segregated.

    For the streams, I would leave them set to the maximum allowed. If you are running dedupe, each partition was limited to 50 connections; this limit might be higher with the newer partitions. Assuming the dedupe database is on a good SSD, let your hardware run and do the work. If not, I would still let things run and use my schedules to break the jobs apart.

  • Re: Best practice for configuring a disk library
    Posted: 12-20-2018, 3:21 PM

    Are you suggesting that the entire MSA disk appliance be presented as a single huge disk drive?

  • Re: Best practice for configuring a disk library
    Posted: 12-20-2018, 5:02 PM

    Hi Ken 

    Based on all the suggestions above, there isn't really a one-size-fits-all best practice for provisioning mount paths.

    Different environments with different hardware will yield different results.

    The single most important factor is whether the underlying hardware can handle the workload.

    Different storage vendors have different methods of creating RAID groups and storage pools from disk spindles, and depending on this, many factors can come into the picture.

    Separating the storage into 5 LUNs (of some size) and mounting each as a specific drive will work because, at the end of the day, it is all presented to CommVault as a single logical library; depending on whether the library is set to spill-and-fill or fill-and-spill, CommVault will distribute the workload across the 5 LUNs.

    Again, creating multiple 16TB LUNs will also work in the same manner.
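
    For anyone unfamiliar with the two allocation settings mentioned above, here is a toy Python sketch of the concept (this is just an illustration, not CommVault's actual implementation; the path names and capacities are made up):

        from itertools import cycle

        mount_paths = ["F:", "G:", "H:", "I:", "J:"]
        jobs = [f"job{n}" for n in range(1, 8)]

        # Spill and fill: round-robin across all mount paths to spread the load.
        print("spill and fill:", dict(zip(jobs, cycle(mount_paths))))

        # Fill and spill: write to the first mount path until it is "full",
        # then spill over to the next one (capacity here is a toy threshold).
        def fill_and_spill(jobs, paths, capacity_per_path=3):
            result, idx, used = {}, 0, 0
            for job in jobs:
                if used == capacity_per_path:
                    idx, used = idx + 1, 0
                result[job] = paths[idx]
                used += 1
            return result

        print("fill and spill:", fill_and_spill(jobs, mount_paths))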

    One downside of having a large LUN is that if the LUN goes offline (and cannot be brought back online), you lose 16 TB of deduplicated data. The failure mode is the same as losing, say, a 2 TB LUN of deduplicated data, but the impact is far greater.

    Feel free to reach out directly if you have any other questions.

    Thanks 

    Winston
