Yeah, I had a couple of sites with similar latency and was seeing aux copies at about 250GB/hr as well.
I didn't get CV to confirm this, but I believe the fact that you were using all of your 100mbit pipe before was kind of a coincidence, as it happens to equate to roughly 250GB/hr. In my testing, I found that this limit is mostly caused by the 12ms latency between MAs (at least in my environments). One of my sites actually had a 6gbit/s link and auxes were not really any faster.
*************** DISCLAIMER ***************
Now, I did manage to find a workaround which I think is pretty good, but you definitely won't find it in any best practices or white papers on Commvault's website. I'll give you the steps, but you should probably contact Commvault about it (I would ask your sales rep to put you in contact with an engineer, as support obviously won't help you here). You can also do it by yourself like I did, but I make no guarantee that this will work in your environment. Also, since I did not validate any of this with Commvault, anything that reads as a statement below is really just what I think based on my own findings/testing.
*************** DISCLAIMER ***************
Now that this is out of the way, here we go!
When you do an aux, the source MA keeps asking the destination MA whether each block signature is present in its DDB. This causes a lot of small chatter on the link, which is very bad on high-latency links (in this case I consider 12ms high latency).
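To put some rough numbers on that, here's my own back-of-envelope model (not anything Commvault publishes; 128KB is the default dedup block size, and the stream count is just an assumption I picked to match what I saw):

```python
# Back-of-envelope model: if each dedup block needs a synchronous
# signature lookup round trip to the destination DDB, per-stream
# throughput is capped by the link RTT, not by bandwidth.

BLOCK_SIZE_KB = 128   # Commvault's default dedup block size
RTT_MS = 12           # MA-to-MA latency in my environment
STREAMS = 7           # assumption: concurrent aux copy streams

per_stream_mb_s = (BLOCK_SIZE_KB / 1024) / (RTT_MS / 1000)
total_gb_hr = per_stream_mb_s * STREAMS * 3600 / 1024

print(f"Per stream: {per_stream_mb_s:.1f} MB/s")   # ~10.4 MB/s
print(f"Total:      {total_gb_hr:.0f} GB/hr")      # ~256 GB/hr

# Same math with the DDB lookup local to the source MA (~0.1ms):
local_per_stream = (BLOCK_SIZE_KB / 1024) / (0.1 / 1000)
print(f"Local lookup ceiling: {local_per_stream:.0f} MB/s per stream")
# ~1250 MB/s -- at that point the disks/DDB become the bottleneck.
```

The numbers line up suspiciously well with the ~250GB/hr ceiling, and it also explains why my 6gbit link changed nothing: the wire is mostly idle, waiting on lookups.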
So, to help reduce this, I needed a way to remove the latency from the aux process. The first attempt was to copy the destination DDB back to my source MA.
This didn't help much: the default MA for the destination maglib was still the destination MA, so my guess was that the signature chatter was still occurring over the network. Unfortunately, you can't set the default MA to one that doesn't have access to the maglib, so I then shared all the destination maglib paths with my source MA. Since my storage was all block at the time, I basically used \\DestinationMA\d$\MagLibPaths with a specific user account and added all the destination paths to my source MA through the destination MA's UNC paths. Using a NAS would have been better, but that was not an option for me.
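For reference, here's roughly how I think about the path mapping (hostname and paths are made up for illustration; yours will differ):

```python
# Sketch: map the destination MA's local maglib mount paths to the UNC
# equivalents you add on the source MA. Hostname/paths are hypothetical.
import os

DEST_MA = "DestinationMA"
# Mount paths as the destination MA sees them locally:
dest_local_paths = [
    r"D:\MagLibPaths\Path1",
    r"D:\MagLibPaths\Path2",
]

def to_unc(host: str, local_path: str) -> str:
    """D:\Foo -> \\host\d$\Foo (admin share; needs an account with rights)."""
    drive, rest = local_path.split(":", 1)
    return rf"\\{host}\{drive.lower()}${rest}"

for p in dest_local_paths:
    unc = to_unc(DEST_MA, p)
    # Run this on the source MA under the account you configured on the
    # mount paths, to confirm each share is actually reachable:
    status = "OK" if os.path.isdir(unc) else "NOT REACHABLE"
    print(f"{p} -> {unc} [{status}]")
```

A CIFS NAS would avoid the admin-share hop entirely, since both MAs could then address the same paths directly.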
Anyway, once I added all the paths to the source MA, I was able to set my source MA as the default MA for that storage policy copy, and this in turn pushed my aux copies past 1TB/hr (when no data needed to be copied, which should be most of it when copying your pre-seeded fulls). Obviously, when copying actual new unique data, you're still bound by the speed of the physical link.
So, the final config looked like this:
1. Source MA hosts both source and destination DDB on SSD.
2. Source MA has all the destination maglib paths visible and is the default MA for the destination DDB / secondary copies pointing to that DDB.
3. Destination MA is now only acting as a "proxy" to the block maglib (this could be avoided by using a CIFS NAS).
4. Secondary copies are configured for disk optimized DASH copies.
5. In v9, I was using the lookAheadReader key, but in v10 I didn't find it to provide any improvement, as the DDB was already going much faster than the v9 DDB.
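Before trying any of this, it's worth confirming you're actually in the latency-bound regime. A quick way to check (my own hack, nothing Commvault-specific; the hostname and port are placeholders) is to time TCP connects from the source MA to the destination MA:

```python
# Rough RTT check from source MA to destination MA. A TCP connect costs
# roughly one round trip; average a few to smooth out jitter.
# Hostname/port are placeholders -- any port the destination answers on
# works (e.g. 445 if you're already using the admin shares).
import socket
import time

DEST_MA = "DestinationMA"   # placeholder
PORT = 445                  # placeholder: SMB
SAMPLES = 10

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((DEST_MA, PORT), timeout=5):
        pass
    timings.append((time.perf_counter() - start) * 1000)

avg = sum(timings) / len(timings)
print(f"Average connect time: {avg:.1f} ms over {SAMPLES} samples")
# In my experience, past ~10-12ms the default aux copy behaviour
# starts being lookup-bound rather than bandwidth-bound.
```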
Now, a few questions you might ask:
1. If the DDB is on the source MA and I lose the whole site, am I screwed? No, the DDB is not needed for restores. If you have a whole DR site in your destination environment, you can start backing it up using another DDB.
2. I'm already using the destination DDB for local backups at that site. What now? I personally recommend not "sharing" the aux DDB with other backups. Here's my logic: the DDB has a front end / primary block limit. For simplicity's sake, let's say that limit is 100TB. If you share your destination DDB with other backups at the destination site, you run the risk of "filling up" one DDB before the other. Say your source front end data was originally 50TB and the destination site's front end data was 30TB. Now your source has grown to 80TB and the destination to 50TB: in theory you're past the DDB limit, and performance will suffer both for your auxes and your remote backups (see the quick math after this list). I prefer to keep the local and destination DDBs dedicated to a "data set" to avoid this issue. There is a small hit on disk usage, but it should usually be very minimal.
3. I only have one SSD drive in my source MA; what do I do with the destination DDB? If it has enough space, I would usually put it on the same drive, provided I know backups won't really run at the same time as auxes (often the case) and/or my testing shows the performance hit is minimal. Remember, you're probably still getting better aux throughput because of it. Ideally, you'd eventually want to put it on its own SSD.
4. Wouldn't the UseCacheDB and UseAuxcopyReadlessPlus settings have the same impact as what you described? You'd think so, but in my case they didn't help. Remember, your CacheDB will usually be a very small subset of the actual destination DDB. Unless you allow it to grow to hundreds of GBs, there will always be a bunch of signature requests going over the link.
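Here's the quick math behind my answer to question 2 (the 100TB limit is a made-up round number for illustration; check the actual sizing guidance for your version):

```python
# Why I keep the aux DDB dedicated: two growing data sets sharing one
# DDB can blow past its front end limit even though each set alone
# would still fit. The 100TB limit is a made-up round number.

DDB_FRONT_END_LIMIT_TB = 100

source_fe, dest_fe = 50, 30   # day one: 80TB combined
print(source_fe + dest_fe <= DDB_FRONT_END_LIMIT_TB)   # True -> fine

source_fe, dest_fe = 80, 50   # later: 130TB combined
print(source_fe + dest_fe <= DDB_FRONT_END_LIMIT_TB)   # False -> over the limit
# ...even though each data set on its own dedicated DDB (80TB and 50TB)
# would still be comfortably under 100TB.
```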
Wow, I think I've written enough for now. Sorry for the very long message, but I'd rather you know what you're getting into if you decide to try this out.