Probably not, but who knows.
We have a customer who used a DataDomain in the past for their primary copy.
To take full advantage of the DD's inline deduplication, they disabled compression and did not use Commvault deduplication.
Now they want to phase out this DataDomain and have created auxiliary copies to move their data to a "simple" library.
One of these copies is a selective yearly copy without deduplication. Our customer noticed that the data written is the same size as the application data and concluded that no compression is taking place.
I believe this is standard behavior: if you do not use compression on your primary copy, you cannot enable it on an auxiliary copy.
The only way I can think of to make more efficient use of the storage is to use deduplication. As it is just a yearly copy, I cannot predict how efficient this will be; I think the only way to find out is to configure it.
Does anyone have another idea? Maybe an additional setting? And why can't compression be enabled on a secondary copy? What is the reason behind this?