Some more thoughts from my side. The documentation seems to recommend S3 IA alone as the most cost-effective option. If we have to reseal the DDB every 6 months, wouldn't a new set of FULL/VSF blocks need to be sent up to IA/Glacier every 6 months when the new DDB is created, and hence not save us much money?
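To make my concern concrete, here's a rough back-of-envelope sketch in Python. All of the numbers are assumptions for illustration (the per-GB rates, the 100 TB deduped baseline, and the 24-month retention are made up, not our actual figures), and I'm assuming IA-only keeps a single baseline because dedup can prune in place, while the combined class accumulates overlapping full baselines from each 6-month seal until they age out:

```python
# Rough cost sketch -- every number here is an assumption for illustration only.
IA_PER_GB_MONTH = 0.0125       # assumed S3 IA storage price, $/GB-month
GLACIER_PER_GB_MONTH = 0.0036  # assumed Glacier storage price, $/GB-month
BASELINE_GB = 100 * 1024       # assumed deduped baseline size (100 TB)
SEAL_MONTHS = 6                # DDB resealed every 6 months
RETENTION_MONTHS = 24          # assumed extended retention window

# IA only: no sealing, dedup prunes in place, so roughly one baseline persists.
ia_only_monthly = BASELINE_GB * IA_PER_GB_MONTH

# IA/Glacier: each seal uploads a fresh baseline; old ones coexist until
# they fall out of retention.
concurrent_baselines = RETENTION_MONTHS // SEAL_MONTHS
combined_monthly = concurrent_baselines * BASELINE_GB * GLACIER_PER_GB_MONTH

print(f"IA only, 1 baseline:         ${ia_only_monthly:,.2f}/month")
print(f"IA/Glacier, {concurrent_baselines} baselines:    ${combined_monthly:,.2f}/month")
```

With these made-up numbers, the four overlapping Glacier baselines actually cost slightly more per month than the single IA baseline, which is exactly the scenario I'm worried about.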
My scenario: I'm already on S3 IA storage, but I'm looking to create a new library with the combined S3 IA/Glacier storage class to save on cost. Some of the aux-copied data currently lives only on the S3 IA storage, so I'll have to set up a new aux copy from that source S3 IA library to the new library with the combined S3 IA/Glacier storage class. The data will be accessed very infrequently and is kept only for extended retention purposes.
If my AWS team has already created the new bucket, do I need to have them scratch it and let Commvault create the bucket instead? Or will the Commvault API know how to use the existing bucket and send the data to Glacier?
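In case it helps frame the question: outside of Commvault, this is the kind of bucket-level lifecycle rule I'd normally use to push objects to Glacier myself (sketched with boto3; the bucket and rule names are placeholders). What I don't know is whether Commvault's combined storage class does its own tiering when writing objects, in which case a rule like this would be redundant or even conflict with it:

```python
# Sketch of a bucket-level lifecycle rule transitioning objects to Glacier.
# NOTE: this is how I'd tier data myself at the bucket level; I don't know
# whether the combined S3 IA/Glacier class handles tiering internally instead.
lifecycle_rule = {
    "Rules": [
        {
            "ID": "tier-aux-copy-to-glacier",  # placeholder rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # apply to the whole bucket
            "Transitions": [
                {"Days": 0, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

# Applying it needs AWS credentials, so it's left commented out here:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-commvault-aux-bucket",  # placeholder bucket name
#     LifecycleConfiguration=lifecycle_rule,
# )
print(lifecycle_rule["Rules"][0]["Transitions"][0]["StorageClass"])
```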
I'm just trying to see if it'll ultimately save us on cost or not.
I'm no expert in AWS storage, and I appreciate all the additional conversation.