increase block level deduplication factor

Last post 04-21-2020, 2:46 AM by lukasz.borek. 4 replies.
  • increase block level deduplication factor
    Posted: 03-27-2020, 9:10 AM


    What's behind the best practice that says the block-level deduplication block size for cloud libraries should be set to 512K? Can we use the same size for local storage disk libraries?


    I have a case where backups need to be migrated from cloud libraries to a local storage array. Using the default block size will require 4 DDB partitions (currently the DDB has 600M unique blocks; staying under 700M per partition is the good practice, right?).


    I'm thinking about keeping the 512K block size and adding another DDB partition so it has room to grow for some time. I know the good practice says the deduplication extended mode backend limit is 500TB, but if performance and the deduplication level are acceptable, it's worth a try.
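A rough back-of-the-envelope check of the partition math (a hypothetical helper; the ~700M-unique-records-per-partition figure is the guideline quoted in this thread, not an official limit):

```python
# Estimate the minimum number of DDB partitions from a unique-block count
# (hypothetical sketch; 700M records/partition is the guideline quoted above).
import math

def ddb_partitions_needed(unique_blocks: int,
                          max_records_per_partition: int = 700_000_000) -> int:
    """Return the minimum partition count to stay under the guideline."""
    return max(1, math.ceil(unique_blocks / max_records_per_partition))

# 600M unique records today fits in one partition under the guideline:
print(ddb_partitions_needed(600_000_000))       # 1
# The same data at a 4x smaller block size could mean roughly 4x the records:
print(ddb_partitions_needed(600_000_000 * 4))   # 4
```

This is why dropping from 512K to the 128K default can push the same backend data from one DDB partition toward four.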


    Any thoughts?


  • Re: increase block level deduplication factor
    Posted: 03-31-2020, 9:26 PM

    Hi Lukasz

    The best-practice block size for a DDB depends on the associated library type:

    • Local Storage - 128K
    • Cloud Library - 512K

    Are you planning to move all the backups from cloud back to disk, or just a few? I would just extend the existing DDB and retain 128K for the DDB going to the local disk library.
  • Re: increase block level deduplication factor
    Posted: 04-17-2020, 1:25 PM

    The problem is the amount of data that we plan to protect with this installation: around 1PB, and it keeps growing. With the standard block size and 250TB of backend per partition, it won't fit in one DDB (even with 4 partitions).
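    The capacity concern above can be sketched with the per-partition backend figure quoted in this thread (hypothetical numbers, not an official sizing formula):

    ```python
    # Backend capacity check for a partitioned DDB (hypothetical sketch;
    # 250TB backend per partition is the figure quoted in this thread).
    def ddb_backend_capacity_tb(partitions: int, tb_per_partition: int = 250) -> int:
        """Total backend capacity covered by a DDB with N partitions."""
        return partitions * tb_per_partition

    backend_to_protect_tb = 1000  # ~1PB today, and still growing
    capacity = ddb_backend_capacity_tb(4)
    print(capacity, capacity >= backend_to_protect_tb)  # 1000 True -- fits, but no headroom
    ```

    At 4 partitions the DDB lands exactly at ~1PB, which leaves no room for growth.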


    The main question: what's behind this recommendation? What's the difference between a cloud library and a local library with regard to deduplication block size? Backup performance and deduplication level are not the main concern for us in this case.

  • Re: increase block level deduplication factor
    Posted: 04-20-2020, 1:17 AM

    Hi Lukasz

    When you refer to 1PB of data to protect, is this the front-end or the back-end size?

    Technically, the recommended guidelines in BOL are meant to give you, for a real-world scenario, the recommended sizing of the DDB and of the data stored on disk.

    If there is still available capacity on the back-end storage, I don't see any issue with expanding the deduplication extended mode DDB to a 4-way partition.

    The reason 512KB is set for a cloud library is that Commvault writes in a different format when writing to cloud storage, and using a larger block size reduces the number of PUT and GET requests during backup and restore.
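    The request-count effect can be illustrated with a simplified model (hypothetical numbers; actual request counts depend on chunk sizes, metadata traffic, and the dedupe ratio achieved):

    ```python
    # Simplified illustration: a larger dedupe block size means fewer blocks
    # for the same data, hence fewer PUT/GET requests against a cloud library.
    # (Hypothetical model -- not Commvault's actual write format.)
    def approx_requests(data_bytes: int, block_size_bytes: int) -> int:
        """Approximate block count for a given data size (ceiling division)."""
        return -(-data_bytes // block_size_bytes)

    one_tb = 1 << 40
    print(approx_requests(one_tb, 128 * 1024))  # 8388608 blocks at 128K
    print(approx_requests(one_tb, 512 * 1024))  # 2097152 blocks at 512K -- 4x fewer
    ```

    Since object stores typically bill and throttle per request, a 4x reduction in block count directly cuts the PUT/GET overhead of backup and restore.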



  • Re: increase block level deduplication factor
    Posted: 04-21-2020, 2:46 AM

    Thanks. Makes sense.

The content of the forums, threads and posts reflects the thoughts and opinions of each author, and does not represent the thoughts, opinions, plans or strategies of Commvault Systems, Inc. ("Commvault") and Commvault undertakes no obligation to update, correct or modify any statements made in this forum. Any and all third party links, statements, comments, or feedback posted to, or otherwise provided by this forum, thread or post are not affiliated with, nor endorsed by, Commvault.
Copyright © 2021 Commvault | All Rights Reserved. | Legal | Privacy Policy