When performing a Dash Copy with a mismatched block size (source is 128KB and destination is 512KB), we recommend using network optimized rather than disk optimized. With network optimized, we read the data and generate a signature based on the destination DDB block size.
With disk optimized, we read the signature from the chunk_metadata (generated at the source block size of 128KB) and check the destination DDB, where the block size is 512KB. We will not find the signature in the destination DDB because the block sizes are mismatched, so we then need to read the data anyway and generate a new signature at the destination DDB block size.
You can use disk optimized, but only if the block size is the same on the source and destination DDBs. When the block sizes match and we use disk read optimized, we can read the signature from the chunk_metadata and do not need to read/rehydrate the data.
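To make the lookup behavior concrete, here is a minimal Python sketch of the two disk-optimized paths described above. This is an illustrative model, not Commvault's actual implementation: the function names, the representation of chunk_metadata as a list of precomputed signatures, and the DDB as a set of signatures are all assumptions, and block sizes are in bytes only for the demo.

```python
import hashlib


def signature(block: bytes) -> str:
    # Stand-in for the deduplication signature (hash) of one block.
    return hashlib.sha256(block).hexdigest()


def split_blocks(data: bytes, block_size: int):
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]


def dash_copy_disk_optimized(data, source_bs, dest_bs, dest_ddb):
    """Model of a disk-optimized copy. Returns (new_signatures, data_reads).

    dest_ddb is a set of signatures already present in the destination DDB.
    """
    data_reads = 0
    # chunk_metadata: signatures were precomputed at backup time
    # at the SOURCE block size.
    chunk_metadata = [signature(b) for b in split_blocks(data, source_bs)]

    if source_bs == dest_bs:
        # Matched block sizes: look up the stored signatures directly;
        # no need to read/rehydrate the data.
        new = 0
        for sig in chunk_metadata:
            if sig not in dest_ddb:
                dest_ddb.add(sig)
                new += 1
        return new, data_reads

    # Mismatched block sizes: a signature hashed over a 128KB block can
    # never match a DDB entry hashed over a 512KB block, so every lookup
    # misses and we must read the data and re-hash at the destination size.
    new = 0
    for block in split_blocks(data, dest_bs):
        data_reads += 1
        sig = signature(block)
        if sig not in dest_ddb:
            dest_ddb.add(sig)
            new += 1
    return new, data_reads
```

Running the matched case yields zero data reads, while the mismatched case forces one read per destination-size block, which is the extra cost the recommendation above avoids.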
You can set up the source and destination to use the same block size and use disk optimized, but be aware of the trade-offs:
If your source library uses a 512KB block size, the larger blocks will yield less deduplication space savings on your source library.
If your destination library uses a 128KB block size, the DDB must track four times as many blocks, so pruning and read performance will suffer.