Hello,
What's the reasoning behind the good practice that says the block-level deduplication block size for cloud libraries should be set to 512K? Can we use the same size for local disk storage libraries?
I have a case where backups need to be migrated from cloud libraries to a local storage array. Using the default block size would require 4 DDB partitions (the DDB currently holds 600M unique blocks, and staying under 700M per partition is good practice, right?).
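For reference, here's the rough arithmetic behind that partition count (just a sketch; I'm assuming the default block size is 128K and that the unique-block count scales roughly linearly as the block size shrinks — correct me if either assumption is off):

```python
# Rough DDB partition estimate.
# Assumptions: 128K default block size; unique-block count scales
# ~linearly when the block size shrinks (4x smaller blocks -> ~4x blocks).
import math

current_unique_blocks = 600_000_000      # 600M unique blocks at 512K
current_block_size_kb = 512
default_block_size_kb = 128              # assumed default
max_blocks_per_partition = 700_000_000   # <700M per partition good practice

scaled_blocks = current_unique_blocks * (current_block_size_kb / default_block_size_kb)
partitions = math.ceil(scaled_blocks / max_blocks_per_partition)

print(f"Estimated unique blocks at {default_block_size_kb}K: {scaled_blocks:,.0f}")
print(f"DDB partitions needed: {partitions}")  # -> 4
```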
I'm thinking about leaving the 512K block size in place and adding another DDB partition so there's room to grow for some time. I know the good practice says the extended-mode deduplication backend limit is 500TB, but if performance and the deduplication ratio stay acceptable, it seems worth a try.
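And the headroom check against that 500TB limit (again only a sketch, assuming backend size is roughly unique blocks × block size):

```python
# Rough backend-size check against the 500TB extended-mode limit.
# Assumption: backend size ~= unique blocks * block size.
unique_blocks = 600_000_000
block_size_bytes = 512 * 1024
backend_tb = unique_blocks * block_size_bytes / 1024**4

print(f"Approx. backend size: {backend_tb:.0f} TB of 500 TB limit")  # ~286 TB
```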
Any thoughts?
Thanks.