Yes, I've observed this behavior, with the following caveat:
It's been several years, so my recollection may be imperfect: I don't recall whether it removed the files outright or just stripped off the reparse-point data that made each one a valid stub, leaving a corrupted file in its place (on all replica members). Either way, it was a Bad Thing, and recovery involved restoring from several different replica members, depending on which one had taken the most recent backup. Fun times!
The only way I can see your proposal working is some way of having DFSR exclude stubbed files from replication entirely, and/or telling CommVault not to stub files within DFS-replicated folders. Based on my understanding of DFSR, however, it does not currently provide this functionality. (Revisiting the DFSR documentation just now confirmed it: you can filter files by name, folder, or extension, but not by attributes or by the presence of a reparse point.)
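To illustrate why that limitation matters, here's a toy sketch (in Python, purely illustrative, not anything DFSR actually runs) of a name/extension wildcard filter like DFSR's. A filter of this kind only ever sees the file name, and a stubbed file keeps its original name; the stub marker lives in attributes/reparse data, which the filter never looks at:

```python
from fnmatch import fnmatch

# DFSR-style file filter: a list of name/extension wildcards
# (DFSR's out-of-the-box default filter is "~*, *.bak, *.tmp").
FILTER = ["~*", "*.bak", "*.tmp"]

def excluded(name: str) -> bool:
    """Return True if the file name matches any filter pattern."""
    return any(fnmatch(name, pat) for pat in FILTER)

# A name-based filter can exclude temp/backup files...
print(excluded("~lockfile"))    # True
print(excluded("report.bak"))   # True
# ...but a stub of "report.docx" has the same name as the original,
# so there is nothing for a name/extension filter to match on.
print(excluded("report.docx"))  # False
```

This is exactly the gap: there is no predicate available that could say "skip anything with reparse-point data".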
The way we have handled this is to place the folders/shares that must be replicated (software-distribution shares, for example) in a subclient with stubbing/archiving disabled. In fact, since these folders are typically replicated outward from a central point, we do this only on one primary core server; on the branch servers we simply exclude the software-distribution folders from CommVault entirely.
The corollary is that if you are replicating a folder from server A to server B (or worse, to B and C and D and...), and stubbing it everywhere, different files will get stubbed at different times on different servers, and the damage will increase exponentially with the number of replica members... it gets really bad, really fast.
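To see why it scales so badly, here's a back-of-the-envelope model (my own simplifying assumption, not anything from CommVault or Microsoft): if each of N replica members independently stubs a given file with probability p before the others replicate it, and any one stub propagates damage to every copy, then the chance a file survives untouched everywhere is (1 - p)^N, which decays exponentially in N:

```python
# Toy model: each of n replica members independently stubs a given
# file with probability p; one stub replicated outward damages every
# copy.  Probability a file stays intact everywhere: (1 - p) ** n.
def survival(p: float, n: int) -> float:
    return (1 - p) ** n

p = 0.10  # assume 10% of files stubbed on each member (illustrative)
for n in (2, 4, 8):
    print(f"{n} members: {survival(p, n):.1%} of files untouched")
# 2 members:  81.0% of files untouched
# 4 members:  65.6%
# 8 members:  43.0%
```

Even a modest per-member stubbing rate chews through most of the file set once you have more than a handful of replicas.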