If the primary copy is deduplicated, Disk Read Optimized will most likely be more beneficial. This is due to the way the data is read.
Disk Read Optimized: The latest chunk metadata is read and the signatures are sent to the destination DDB. If a signature is new, the corresponding chunk data is read from disk (similar to a network read, but only when needed).
Network Read Optimized: Chunk data is read from disk and new signatures are generated in memory (UseCacheDB 0; a value of 1 would put the cache on disk, which is normally slower as well) and sent to the destination DDB. Because it reads a potentially much larger amount of data, this option will effectively verify all the data on your source, since it reads it in full.
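To make the difference concrete, here is a minimal sketch of the two read strategies. All names and structures here are illustrative assumptions, not Commvault's actual implementation; the point is only where the disk reads happen relative to the signature lookup.

```python
import hashlib

def signature(chunk: bytes) -> str:
    # Stand-in for a dedup signature; in practice the signature for the
    # disk-read-optimized path comes from chunk metadata, not a fresh hash.
    return hashlib.sha256(chunk).hexdigest()

def disk_read_optimized(chunks, dest_ddb):
    """Send signatures first; read chunk data from disk only for
    signatures the destination DDB has not seen before."""
    bytes_read = 0
    for chunk in chunks:
        sig = signature(chunk)
        if sig not in dest_ddb:
            bytes_read += len(chunk)    # chunk data fetched only when new
            dest_ddb.add(sig)
    return bytes_read

def network_read_optimized(chunks, dest_ddb):
    """Read every chunk from disk, generate signatures in memory,
    then register only the new ones; all source data is read (and
    therefore implicitly verified)."""
    bytes_read = 0
    for chunk in chunks:
        bytes_read += len(chunk)        # whole chunk read regardless
        sig = signature(chunk)
        if sig not in dest_ddb:
            dest_ddb.add(sig)
    return bytes_read

chunks = [b"a" * 100, b"b" * 100, b"a" * 100]   # third chunk is a duplicate
print(disk_read_optimized(chunks, set()))        # 200: the duplicate is never read
print(network_read_optimized(chunks, set()))     # 300: everything is read
```

With a well-deduplicated source, most signatures already exist on the destination, which is why the disk-read path reads far less data.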
You mentioned seeding the data; there is a document on this which you can review here: http://docs.commvault.com/commvault/v10/article?p=features/deduplication/manualseedingprocess.htm
Lastly, two media agents are preferred, and I will explain why.
MA1 holds the primary DDB and the disk library paths.
MA2 holds the secondary DDB and disk library.
If the data is DASH copied and MA1 fails (for example, in a DR or hardware failure scenario), how do you restore the data?
MA2 will be available to perform the restores. If you were using just one media agent, you would be stuck until new hardware was available.
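The failover reasoning above can be sketched in a few lines. This is purely illustrative logic under my own hypothetical names (`pick_restore_source`, the `topology` dicts); it is not a Commvault API.

```python
def pick_restore_source(media_agents):
    """Return the first online media agent that holds a usable copy."""
    for ma in media_agents:
        if ma["online"]:
            return ma["name"]
    raise RuntimeError("no media agent available until hardware is replaced")

topology = [
    {"name": "MA1", "copy": "primary", "online": False},    # hardware failure / DR
    {"name": "MA2", "copy": "secondary", "online": True},   # DASH copy target
]
print(pick_restore_source(topology))  # MA2
```

With a single media agent, `topology` has one entry, and a failure leaves nothing to fail over to.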
Keep in mind that the CommServe still needs to be available, so if you are not using a standby CS and the primary site goes down, you will need to restore the CommServe database.
Here is an additional document on setting up log shipping: