When a chunk is impacted by corruption or another storage-related issue, running a Data Verification from Commvault serves to identify all the jobs associated with that chunk and mark those jobs as bad.
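To make that concrete, here's a rough Python sketch of what "find the associated jobs and mark them bad" boils down to. This is purely illustrative - the job_index structure and the function name are made up for the example, it is not Commvault's actual implementation or API:

```python
# Conceptual illustration only - not Commvault code or its CLI.
# Assumed structure: job_index maps job_id -> set of chunk signatures
# referenced by that job's archive files.

def find_jobs_to_mark_bad(job_index: dict[int, set[str]], corrupted_chunk: str) -> set[int]:
    """Return every job that references the corrupted chunk, so it can be marked bad."""
    return {job_id for job_id, chunks in job_index.items() if corrupted_chunk in chunks}

# Example: jobs 101 and 103 both reference chunk "c7", so both would be marked bad.
job_index = {101: {"c1", "c7"}, 102: {"c2", "c5"}, 103: {"c7", "c9"}}
print(find_jobs_to_mark_bad(job_index, "c7"))   # {101, 103}
```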
Ok, so the point of marking jobs as bad is that the next backup will not try to reference the corrupted chunk and will instead write new baseline references. The purpose is to future-proof your latest backup so it doesn't reference corrupted data.
However, that does not mean the new data being written will repair older backups that still reference the old chunk. Note - each job has its own associated chunks and archive files, and a new archive file won't necessarily help resolve your existing corruption issue.
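Here's a similarly rough sketch of why the next backup writes a new baseline while older jobs stay broken. Again, this is just a toy deduplication model with made-up names (dedup_db, backup_block, bad_chunks), not Commvault internals:

```python
# Conceptual illustration only - a toy deduplication model, not Commvault internals.
from hashlib import sha256
from itertools import count

_next_chunk_id = count(1)
chunk_store: dict[int, bytes] = {}   # chunk_id -> data actually written to disk
dedup_db: dict[str, int] = {}        # block signature -> chunk_id backups currently reference
bad_chunks: set[int] = set()         # chunk ids flagged bad by Data Verification

def backup_block(data: bytes) -> int:
    """Return the chunk_id the new backup job records a reference to."""
    sig = sha256(data).hexdigest()
    existing = dedup_db.get(sig)
    if existing is not None and existing not in bad_chunks:
        return existing              # normal dedup: reuse the existing, healthy chunk
    # Signature is unknown, or its chunk was marked bad: write a fresh baseline chunk.
    new_id = next(_next_chunk_id)
    chunk_store[new_id] = data
    dedup_db[sig] = new_id           # future backups reference the new chunk...
    return new_id                    # ...while old jobs still point at the bad chunk_id
```

The key point of the sketch is the last two lines: the new baseline only changes what future jobs reference, it doesn't rewrite the references that old jobs already hold.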
For any corruption at the storage level, the recommendation is to address the root cause first to ensure the integrity of the backup data.
For example, corruption could stem from the application, OS, network or storage, so as an additional recommendation it is also good to have a secondary copy, so you have two sets of backup data rather than just one.
Hopefully that helps