NDMP Slow Performance

Last post 09-03-2019, 7:49 AM by Wwong. 16 replies.
  • NDMP Slow Performance
    Posted: 08-20-2019, 12:18 PM

    Trying to figure out how to get the most out of NDMP backup. The volume has 15TB, no qtrees, and tons of small files. The NDMP (backup copy) from a snap takes days, even weeks, to get to the primary (ESeries). Everything is 10Gb, chunk size is 16GB, and Round Robin is used for data path load balancing.

    https://documentation.commvault.com/commvault/v11_sp15/article?p=8588.htm

    http://documentation.commvault.com/commvault/v11/article?p=19811.htm

    Stats from the NASBackup log show 45.xx GBytes/Hour for this particular job.

    stat- NB [Throughput][7944828][45.21 GBytes/Hour][7330.66 GBytes][162.16 Hours]

    stat- NB [AddFiles][7944828][8 Overall files/sec][2443.43 Processing files/sec][5130988 Files][583777 Overall Secs][2099.91 Processing Secs]

    NDMP INFO[storage] 1893: DUMP: Tue Aug 20 11:15:53 2019 : We have written 8616069105 KB.

    Today the job completes 7 days of running and there is still quite a bit to go. Any recommendations? Which logs should I pull to get more accurate info?
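
    For a rough sense of scale, here is the simple arithmetic on the numbers above (assuming the 45.21 GBytes/Hour rate from the stat line holds for the rest of the job, and treating 15TB as 15 x 1024 GB):

    # back-of-the-envelope estimate from the NASBackup.log stat line above
    volume_gb = 15 * 1024            # ~15 TB volume
    rate_gb_per_hour = 45.21         # [Throughput] stat
    done_gb = 7330.66                # GBytes already moved per the same stat

    total_hours = volume_gb / rate_gb_per_hour
    remaining_hours = (volume_gb - done_gb) / rate_gb_per_hour
    print(f"whole volume: ~{total_hours:.0f} h (~{total_hours / 24:.0f} days)")
    print(f"remaining:    ~{remaining_hours:.0f} h (~{remaining_hours / 24:.0f} days)")
    # roughly 340 hours (~14 days) total, so about another week from here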

  • Re: NDMP Slow Performance
    Posted: 08-21-2019, 5:37 AM

    Hi pereirath

    From the logs, this looks like it is coming from NASBackup.log on the MediaAgent.

    When troubleshooting a performance-related issue, the most important step is to determine which component is the slowest.

    For NDMP Backup (Backup Copy in this case), you have:

    • Read from the Filer (where the Snap is mounted)
    • Network Throughput to the MediaAgent 
    • MediaAgent writes to target storage.

    From the log snippet it looks like you are getting a read throughput of [45.21 GBytes/Hour] for the duration of the job. If you look in CVD.log on the MediaAgent, it will show you the physical write throughput.
     
    Verify each of the above components; that will give you a clearer picture of what can be done next.
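
    If it helps as a first pass, here is a rough sketch (plain Python, not a Commvault utility; the log paths are examples - adjust them to your MediaAgent's Log Files directory) that pulls the relevant throughput stat lines out of NASBackup.log and CVD.log so you can compare the read and write rates side by side:

    import re

    # Example locations - point these at the Log Files directory on the MediaAgent.
    LOGS = {
        "read  (NASBackup.log)": r"C:\Program Files\Commvault\ContentStore\Log Files\NASBackup.log",
        "write (CVD.log)":       r"C:\Program Files\Commvault\ContentStore\Log Files\CVD.log",
    }

    # Lines of interest: "stat- NB [Throughput]...GBytes/Hour..." on the read side and
    # "stat- ID [SI-FS Write Speed]..." on the write side.
    PATTERNS = [re.compile(r"GBytes/Hour"), re.compile(r"SI-FS Write Speed")]

    for label, path in LOGS.items():
        print(f"=== {label} ===")
        with open(path, errors="ignore") as log:
            for line in log:
                if any(p.search(line) for p in PATTERNS):
                    print(line.rstrip())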
     
    Regards
     
    Winston 
  • Re: NDMP Slow Performance
    Posted: 08-21-2019, 9:03 AM

    Thanks for your response. The dump process happens right there on the MediaAgents; I have mount paths shared among all 4 MediaAgents, which are backed by a NetApp ESeries array.

    Do the logs shed some light?

    CVD.log

    Line 15878: 11612 2fd0  08/21 07:35:41 7504229 7944828-5902984 [    DM_BASE    ] Closed the chunk with Id 19233094. Got New Chunk Id 19233404 from ArMgr

    Line 15879: 11612 2fd0  08/21 07:35:41 7504229 7944828-5902984 [    DM_BASE    ] Creating new chunk id 19233404 VolId= 2982868

    Line 15880: 11612 2fd0  08/21 07:35:41 7504229 7944828-5902984 [    MEDIAFS    ] SingleInstancer_FS Created [C:\MountPoints\MA2_108\MA002_LUN017\CV_MAGNETIC\V_2982868\CHUNK_19233404] Chunk Directory

    Line 15881: 11612 2fd0  08/21 07:35:41 7504229 7944828-5902984 [    DSBACKUP   ] Updating the Job Manager that the chunk has been committed 

    Line 15882: 11612 2fd0  08/21 07:35:41 7504229 7944828-5902984 [    DSBACKUP   ] Updating Index with offset 0, m_LastCommittedRestartOffset= 0,  m_currentRestartContext = 0

    Line 15909: 11612 28b4  08/21 07:38:07 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [31], PriCommitData [0] ChunkSize [4294983891]

    Line 15916: 11612 5ea8  08/21 07:39:45 7504229 7944828-5902984 [    MEDIAFS    ] stat- ID [SI-FS Write Speed] Curr Avg [533.575755] MB/Sec, Bytes [96002659]; Total Avg [510.030394] MB/Sec, Bytes [109599930467], Time [204.93] Secs

    Line 15919: 11612 1688  08/21 07:40:04 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [64], PriCommitData [0] ChunkSize [8590168588]

    Line 15922: 11612 561c  08/21 07:41:33 7504229 SdtTail::logStats: Clnt [media agent], PId [5364], Cnt - BsyProcsg/Allocs [1775/77687053]; Time - Procsg/NwRecv/Total [1651/656020/657671] secs, Bytes recvd [114175676295] RCId [7944828]

    Line 15923: 11612 2e48  08/21 07:41:36 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [100], PriCommitData [0] ChunkSize [12885353526]

    Line 15942: 11612 5ea8  08/21 07:43:09 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [126], PriCommitData [0] ChunkSize [17180425704]

    Line 15952: 11612 36c0  08/21 07:44:48 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [50], PriCommitData [0] ChunkSize [21475463570]

    Line 17270: 11612 2e48  08/21 07:46:27 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [714], PriCommitData [0] ChunkSize [25770498743]

    Line 18013: 11612 1860  08/21 07:47:52 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [97], PriCommitData [0] ChunkSize [30065489547]

    Line 18022: 11612 9d0   08/21 07:49:15 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [100], PriCommitData [0] ChunkSize [34360557269]

    Line 18023: 11612 18b4  08/21 07:49:46 7504229 7944828-5902984 [    MEDIAFS    ] stat- ID [SI-FS Write Speed] Curr Avg [608.159213] MB/Sec, Bytes [262809444]; Total Avg [510.227334] MB/Sec, Bytes [109862739911], Time [205.35] Secs

    Line 18027: 11612 3b24  08/21 07:50:45 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [75], PriCommitData [0] ChunkSize [38655678061]

    Line 18029: 11612 561c  08/21 07:51:35 7504229 SdtTail::logStats: Clnt [media agent], PId [5364], Cnt - BsyProcsg/Allocs [1777/77977168]; Time - Procsg/NwRecv/Total [1655/656618/658273] secs, Bytes recvd [114457937678] RCId [7944828]

    Line 18046: 11612 43b0  08/21 07:52:11 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [964], PriCommitData [0] ChunkSize [42950744085]

    Line 18057: 11612 e98   08/21 07:53:57 7504229 7944828-5902984 [    DM_BASE    ] FlushAndUpdateDDB [6530]: Going to commit info in DDB. PriIds [0], Recs [166], PriCommitData [0] ChunkSize [47245737476]



  • Re: NDMP Slow Performance
    Posted: 08-23-2019, 10:01 PM

    Hi pereirath

    The logs only show the processing during the course of the NDMP backup. It is usually at the end of the job, or when a stream completes, that Commvault reports the physical write speed to media.

    Do you have the CVD.log from near the end of the NDMP backup job? Again, this is to isolate the issue and find out which component could be the cause.

    We already know the read from NetApp is [45.21 GBytes/Hour] for the duration of the backup. Now we just want to confirm the write (CVD), which will indicate whether the bottleneck is on the write or the read side. Note - we have not considered the network yet, but we will need to verify this as well.

    Regards

    Winston

  • Re: NDMP Slow Performance
    Posted: 08-27-2019, 12:05 PM

    The job completed a few days ago. I can't locate the CVD.log for that particular job.

  • Re: NDMP Slow Performance
    Posted: 08-27-2019, 6:00 PM

    Hi pereirath 

    I would recommend increasing the logging for CVD and NASBackup, so we have an end-to-end holistic overview of the performance.

    Regards

    Winston

  • Re: NDMP Slow Performance
    Posted: 08-27-2019, 6:03 PM

    So other than that, there is no way to find the performance issue? No other tool or troubleshooting path? And how do I increase the CVD and NASBackup logging?

  • Re: NDMP Slow Performance
    Posted: 08-29-2019, 5:10 AM

    Hi pereirath

    There are various tools within Commvault and NetApp that can help isolate a performance issue:

    • NetApp (DUMP to Null) - to test performance of the DUMP process
    • Commvault (CVDiskPerf) - to test storage performance
    • Commvault (CVNetworkTestTool) - to test the network between the client and the MA (see the rough sketch below)

    However, there is no point in just running the tools without first understanding where the bottleneck is.

    The Commvault logs are there to assist with the analysis before you proceed to test the environment with these tools.

    I would recommend engaging Commvault Support to help you narrow down the bottleneck.
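
    If running CVNetworkTestTool is not convenient, a very rough stand-in (plain Python sockets, nothing Commvault-specific) can at least sanity-check raw TCP throughput into the MediaAgent. Run the receiver on the MA and the sender from another host on the same network segment as the filer (you cannot run this on the filer itself, so it only approximates that path):

    import socket, sys, time

    PORT = 50555            # arbitrary free port - adjust as needed
    CHUNK = 1024 * 1024     # 1 MiB per send
    TOTAL_MB = 1024         # push 1 GiB per run

    def receiver():
        # Run this on the MediaAgent: accept one connection and drain it. (Python 3.8+)
        with socket.create_server(("", PORT)) as srv:
            conn, addr = srv.accept()
            with conn:
                received, start = 0, time.time()
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
                secs = time.time() - start
            print(f"received {received / 2**20:.0f} MiB from {addr[0]} in {secs:.1f}s "
                  f"= {received / 2**20 / secs:.1f} MiB/s")

    def sender(host):
        # Run this on a host near the filer: push TOTAL_MB MiB of zeros to the receiver.
        payload = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            start = time.time()
            for _ in range(TOTAL_MB):
                conn.sendall(payload)
            secs = time.time() - start
        print(f"sent {TOTAL_MB} MiB in {secs:.1f}s = {TOTAL_MB / secs:.1f} MiB/s")

    if __name__ == "__main__":
        if len(sys.argv) > 1:
            sender(sys.argv[1])   # python nettest.py <MediaAgent hostname or IP>
        else:
            receiver()

    If this comes back far below what a 10Gb link should sustain, the network path is worth investigating before the Commvault components.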

    Kind regards

    Winston 

  • Re: NDMP Slow Performance
    Posted: 08-29-2019, 9:24 AM

    I do have a ticket open, and the first thing I heard was to call NetApp and analyze it with them since this is a "storage issue".

  • Re: NDMP Slow Performance
    Posted: 08-29-2019, 4:39 PM

    Furthermore, I see from the CVD/MediaManager and NASBackup logs that the block size is set to 64KB by default (stated here too: https://documentation.commvault.com/commvault/v11/article?p=8593.htm).

    So, since we are talking about NDMP and my array is NetApp, I went back to their docs, and they say that ONTAP version 9 and above supports 7285344KB for the send and receive buffer. Source: https://kb.netapp.com/app/answers/answer_view/a_id/1036186/~/how-to-set-or-change-the-tcp-packet-window-size-for-network-data-management

    A different document specifically for Commvault and NetApp ESeries customers (https://www.netapp.com/us/media/tr-4320.pdf) states the following:  "For RAID 6 Volume Groups, select segment size in 64KB increments. Default segment size (file system typical) is 128KB"

     

    So I am trying to put the pieces of this puzzle together. Some places call it block size, others buffer size, while (correct me if I'm wrong) in the context we are talking about they are the same thing.

    So it is true that Commvault is using a 64KB block size, and NetApp can support up to 128KB (but is not currently configured at that size).

     

    1. In my architecture, the storage (Storage A) that hosts all the snapshots from remote sites initiates the NDMP session with the 4 MediaAgents, which have the mount paths to the ESeries array (Storage B). Like I said before, the log shows a 64KB size; however, the Storage A config shows tcpwinsize as 32768, which leads me to believe that Commvault isn't reporting the correct block size being received from Storage A. What do you think?
    2. According to my second link, 128KB should be the configuration for Commvault v11 and ESeries. So would increasing the Commvault block size to 128KB be the recommendation in this case?
    Do you have a document that specifically calls out the ONTAP configuration? This one https://documentation.commvault.com/commvault/v11/article?p=8593.htm doesn't talk much about ONTAP.
    Thanks again for your help.

  • Re: NDMP Slow Performance
    Posted: 08-29-2019, 4:57 PM

    45 GB/hour is pretty slow. Both the NetApp file server and Commvault should be able to go much faster than that.

    After a backup job completes, the best place to start analyzing the performance is the CVPerfMgr.log on the MediaAgent. In the example log below, the "NDMP Remote Server" information shows 2.45% of the time processing data from the file server and 97.53% of the time processing the pipeline, so in this case the next step would be to investigate why the pipeline is the bottleneck. The "Writer Pipeline Modules" section shows a breakdown of where the pipeline is spending its time - in this case the media speeds look good, but time is spent in BufferReceive due to network slowness. If the file server Data Socket shows slowness, we would need to investigate the file server side - a good first step is to confirm which interface on the file server is being used: the IP address is logged in the CVNdmpRemoteServer log, and we can confirm with the customer that this is a high-speed interface on the file server. After that, there are additional commands that can be run on the file server to determine whether there is network slowness (data queued to be sent) or an overloaded CPU.

    While a backup is running, the same "NDMP Remote Server" information can be found in the CVNdmpRemoteServer log (it is logged about every 15 minutes), and statistics similar to the Writer Pipeline Modules can be found in the CVD log. Some example CVNdmpRemoteServer logging is below. Typically I like to see 300 GB/hour or more per NDMP stream, but that can vary depending on the load on the file server.

    There are additional statistics in the NasBackup log for how fast we are processing file lists. I have not typically found any performance bottlenecks in the file list processing, so I am not going to go into detail now. Please do not increase the logging in NasBackup - your logs will roll over too quickly.

    I can help you analyze your performance - there is a ticket open - feel free to email me the ticket info and I can look at the logs. If there seems to be slowness in the pipeline, we may need to get help from the Media Management team to see whether it is network, compression, dedup, or disk speed. They will be able to tell us if there are any block size settings that would improve performance.

    Thanks, Duncan

    Example CVPerfMgr.log
    |*29755921*|*Perf*|7644613| NDMP Remote Server
    |*29755921*|*Perf*|7644613| |_NDMP Data Receiver.......................... 206439 1147725280256 [1068.90 GB] [18.64 GBPH]
    |*29755921*|*Perf*|7644613| |_Reader Data Socket[percent:2.45]............ 5051
    |*29755921*|*Perf*|7644613| |_Data Server Wait Time[percent:2.05]......... 4241
    |*29755921*|*Perf*|7644613| |_Data Server Read Time[percent:0.39]......... 810
    |*29755921*|*Perf*|7644613| |_Reader Pipeline Modules[percent:97.53]...... 201342
    |*29755921*|*Perf*|7644613| |_Pipeline write[percent:0.15]................ 301
    |*29755921*|*Perf*|7644613| |_Buffer allocation[percent:97.24]............ 200747
    |*29755921*|*Perf*|7644613| |_CVA Wait to received data from reader....... 206208
    |*29755921*|*Perf*|7644613| |_CVA Buffer allocation....................... -
    |*29755921*|*Perf*|7644613|
    |*29755921*|*Perf*|7644613| Writer Pipeline Modules
    |*29755921*|*Perf*|7644613| |_DSBackup: BufferRecieve..................... 205433
    |*29755921*|*Perf*|7644613| |_DSBackup: Update Restart Info............... 2
    |*29755921*|*Perf*|7644613| |_DSBackup: Media Write....................... 307 22151950551 [20.63 GB] [241.92 GBPH]
    |*29755921*|*Perf*|7644613| |_SIDB:CommitAndUpdateRecs[ehss-vcva-02-a].... 36
    |*29755921*|*Perf*|7644613| |_SIDB:CommitAndUpdateRecs[ehss-vcva-03-a].... 4
    |*29755921*|*Perf*|7644613| |_Writer: DM: Physical Write.................. 38 21846028498 [20.35 GB] [1927.49 GBPH]
    CVNdmpRemoteServer log - data flowing well
    => Over 90% of the time processing the file server Data Socket
    => End to end speed 90.35 mb/sec = 317.6 gb / hour
    stat- NRS ======================= Performance Statistics for last [900] seconds ========================
    stat- NRS [Interval End to End ][90.35 MBytes/Sec][85261438976 Bytes][900 Secs]
    stat- NRS [Interval Data Socket][98.49 MBytes/Sec][85261438976 Bytes][91.7% (81.6% Wait)][825 Secs (734 Wait Secs)]
    stat- NRS [Interval Pipeline ][1136.79 MBytes/Sec][85261438976 Bytes][7.9% (4.1% Write, 2.1% Buf Alloc, 1.4% Buf Copy, 0.0% Buf Free)][71 Secs (36 Write Secs, 18 Buf Alloc Secs, 12 Buf Copy Secs, 0 Buf Free Secs)]

    CVNdmpRemoteServer log - pipeline taking time
    stat- NRS ======================= Performance Statistics for last [900] seconds ========================
    stat- NRS [Interval End to End ][2.84 MBytes/Sec][2684265472 Bytes][900 Secs]
    stat- NRS [Interval Data Socket][159.06 MBytes/Sec][2684265472 Bytes][1.8% (1.6% Wait)][16 Secs (14 Wait Secs)]
    stat- NRS [Interval Pipeline ][2.90 MBytes/Sec][2684265472 Bytes][98.2% (2.6% Write, 95.4% Buf Alloc, 0.1% Buf Copy, 0.0% Buf Free)][883 Secs (22 Write Secs, 858 Buf Alloc Secs, 0 Buf Copy Secs, 0 Buf Free Secs)]

    CVNdmpRemoteServer log - receiving data from file server taking time
    => Very little time in the pipeline (4.8%)
    => End to end speed 48.99 mb/sec = 172 gb / hr.
    30660 81a8 03/22 11:59:47 7130622 stat- NRS ======================= Performance Statistics for last [900] seconds ========================
    30660 81a8 03/22 11:59:47 7130622 stat- NRS [Interval End to End ][48.99 MBytes/Sec][46231892992 Bytes][900 Secs]
    30660 81a8 03/22 11:59:47 7130622 stat- NRS [Interval Data Socket][51.54 MBytes/Sec][46231892992 Bytes][95.1% (83.9% Wait)][855 Secs (754 Wait Secs)]
    30660 81a8 03/22 11:59:47 7130622 stat- NRS [Interval Pipeline ][1020.46 MBytes/Sec][46231892992 Bytes][4.8% (1.1% Write, 2.7% Buffer allocation)][43 Secs (9 Write Secs, 24 Buffer
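
    For what it is worth, here is a rough sketch (assuming the "stat- NRS [Interval ...]" line format shown above; it is not a Commvault tool) that walks a CVNdmpRemoteServer log and, for each logged interval, prints the end-to-end GB/hour plus whether the time went to the Data Socket (file server side) or the Pipeline (MediaAgent side):

    import re, sys

    # Matches e.g. "stat- NRS [Interval End to End ][90.35 MBytes/Sec][...]"
    SPEED = re.compile(r"\[Interval (End to End|Data Socket|Pipeline)\s*\]\[([\d.]+) MBytes/Sec\]")
    BUSY = re.compile(r"\[([\d.]+)% \(")   # the "[91.7% (81.6% Wait)]" style busy percentage

    interval = {}
    with open(sys.argv[1], errors="ignore") as log:
        for line in log:
            m = SPEED.search(line)
            if not m:
                continue
            name, mb_per_sec = m.group(1), float(m.group(2))
            busy = BUSY.search(line)
            interval[name] = (mb_per_sec, float(busy.group(1)) if busy else 0.0)
            if name == "Pipeline" and "End to End" in interval and "Data Socket" in interval:
                gb_per_hour = interval["End to End"][0] * 3600 / 1024   # MB/sec -> GB/hour
                sock_pct, pipe_pct = interval["Data Socket"][1], interval["Pipeline"][1]
                side = "Data Socket (file server)" if sock_pct > pipe_pct else "Pipeline (MediaAgent)"
                print(f"end-to-end {gb_per_hour:7.1f} GB/h | socket {sock_pct:5.1f}% | "
                      f"pipeline {pipe_pct:5.1f}% | time mostly in: {side}")
                interval = {}

    Run against the examples above, it would report roughly 317 GB/h with the time in the Data Socket for the first block, roughly 10 GB/h with 98% in the Pipeline for the second, and roughly 172 GB/h with 95% in the Data Socket for the third.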
     
  • Re: NDMP Slow Performance
    Posted: 08-29-2019, 5:22 PM

    Thanks Duncan, I didn't see your email, but I have sent you the ticket info as a private message. Meanwhile I will look at the logs you mentioned on my end.

  • Re: NDMP Slow Performance
    Posted: 08-29-2019, 6:17 PM

    It doesn't look good.

    10072 5334  08/29 16:39:55 7608600 stat- NRS =================================== Total Performance Statistics ===================================

    10072 5334  08/29 16:39:55 7608600 stat- NRS [Total End to End ][15.47 MBytes/Sec][8699256340480 Bytes][536321 Secs]

    10072 5334  08/29 16:39:55 7608600 stat- NRS [Total Data Socket][96.78 MBytes/Sec][8699256340480 Bytes][16.0% (13.2% Wait)][85722 Secs (70604 Wait Secs)]

    10072 5334  08/29 16:39:55 7608600 stat- NRS [Total Pipeline   ][18.43 MBytes/Sec][8699256340480 Bytes][83.9% (1.5% Write, 81.7% Buf Alloc, 0.6% Buf Copy, 0.0% Buf Free)][450056 Secs (7996 Write Secs, 438161 Buf Alloc Secs, 3164 Buf Copy Secs, 0 Buf Free Sec)]

    10072 3600  08/29 16:43:42 7608600 TPool [SdtHeadThPool]. Ser# [0] Tot [19919], Pend [1], Comp [19918], Max Par [16], Time (Serial) [778.911360]s, Time (Parallel) [292.832602]s, Wait [3.023453]s

     

     

    |*8045861*|*Perf*|7608600| Job-ID: 7608600            [Pipe-ID: 8045861]            [App-Type: 13]            [Data-Type: 1]

    |*8045861*|*Perf*|7608600| Stream Source:   MediaAgent

    |*8045861*|*Perf*|7608600| Simpana Network medium:   SDT

    |*8045861*|*Perf*|7608600| Head duration (Local):  [24,August,19 06:40:03  ~  29,August,19 07:46:34] 121:06:31 (435991)

    |*8045861*|*Perf*|7608600| Tail duration (Local):  [24,August,19 06:40:03  ~  29,August,19 07:46:34] 121:06:31 (435991)

    |*8045861*|*Perf*|7608600| ----------------------------------------------------------------------------------------------------------------------------------------

    |*8045861*|*Perf*|7608600|     Perf-Counter                                                                     Time(seconds)              Size

    |*8045861*|*Perf*|7608600| ----------------------------------------------------------------------------------------------------------------------------------------

    |*8045861*|*Perf*|7608600| 

    |*8045861*|*Perf*|7608600| NDMP Remote Server

    |*8045861*|*Perf*|7608600|  |_NDMP Data Receiver..............................................................    435824             6973000815616  [6494.11 GB] [53.64 GBPH]

    |*8045861*|*Perf*|7608600|    |_Reader Data Socket[percent:16.21].............................................     70656                          

    |*8045861*|*Perf*|7608600|      |_Data Server Wait Time[percent:13.26]........................................     57775                          

    |*8045861*|*Perf*|7608600|      |_Data Server Read Time[percent:2.96].........................................     12881                          

    |*8045861*|*Perf*|7608600|    |_Reader Pipeline Modules[percent:83.69]........................................    364733                          

    |*8045861*|*Perf*|7608600|      |_Pipeline write[percent:1.41]................................................      6147                          

    |*8045861*|*Perf*|7608600|      |_Buffer allocation[percent:81.55]............................................    355405                          

    |*8045861*|*Perf*|7608600|      |_CVA Wait to received data from reader.......................................    429935                          

    |*8045861*|*Perf*|7608600|      |_CVA Buffer allocation.......................................................         -                          

    |*8045861*|*Perf*|7608600|      |_SDT: Receive Data...........................................................     77165             7005423621328  [6524.31 GB]  [Samples - 238097404] [Avg - 0.000324] [304.38 GBPH]

    |*8045861*|*Perf*|7608600|      |_SDT-Head: Compression.......................................................    364869             7005423563792  [6524.31 GB]  [Samples - 238097403] [Avg - 0.001532] [64.37 GBPH]

    |*8045861*|*Perf*|7608600|      |_SDT-Head: Signature module..................................................    400983             5513921471480  [5135.24 GB]  [Samples - 238097403] [Avg - 0.001684] [46.10 GBPH]

    |*8045861*|*Perf*|7608600|        |_SDT-Head: Signature Compute...............................................     37069             5477000703510  [5100.85 GB] [495.38 GBPH]

    |*8045861*|*Perf*|7608600|        |_Src-side Dedup............................................................    362878                          

    |*8045861*|*Perf*|7608600|          |_Buffer allocation.......................................................       423                          

    |*8045861*|*Perf*|7608600|          |_Passing to next module..................................................       849                          

    |*8045861*|*Perf*|7608600|          |_Sig-lookup..............................................................    357134                          

    |*8045861*|*Perf*|7608600|            |_SIDB-Lookup...........................................................    356550                            [Samples - 32353479] [Avg - 0.011020]

    |*8045861*|*Perf*|7608600|              |_SIDB:CL-QueryInsert[MediaAgent].....................................     89379                          

    |*8045861*|*Perf*|7608600|              |_SIDB:CL-QueryInsert[MediaAgent].....................................     85005                          

    |*8045861*|*Perf*|7608600|              |_SIDB:CL-QueryInsert[MediaAgent].....................................    107191                          

    |*8045861*|*Perf*|7608600|              |_SIDB:CL-QueryInsert[MediaAgent].....................................     73702                          

    |*8045861*|*Perf*|7608600|            |_Source Side Dedupe stats..............................................         -                          

    |*8045861*|*Perf*|7608600|              |_[Signature Processed]...............................................         -                  32353479  [30.85 MB]

    |*8045861*|*Perf*|7608600|              |_[New Signatures]....................................................         -                    524750  [512.45 KB]

    |*8045861*|*Perf*|7608600|              |_[Signatures Found in DDB]...........................................         -                  31828729  [30.35 MB]

    |*8045861*|*Perf*|7608600|              |_[Application Data size].............................................         -             6908925505536  [6434.44 GB]

    |*8045861*|*Perf*|7608600|              |_[Processed Data size]...............................................         -             5475279745354  [5099.25 GB]

    |*8045861*|*Perf*|7608600|              |_[New Data size].....................................................         -               86287925943  [80.36 GB]

    |*8045861*|*Perf*|7608600|              |_[Dropped Data size (percent[98.42])]................................         -             5388991819411  [5018.89 GB]

    |*8045861*|*Perf*|7608600|              |_[Non-dedupable data size]...........................................         -               34076912214  [31.74 GB]

    |*8045861*|*Perf*|7608600|      |_SDT-Head: Encryption........................................................      6281              108144712088  [100.72 GB]  [Samples - 57395523] [Avg - 0.000109] [57.73 GBPH]

    |*8045861*|*Perf*|7608600|      |_SDT-Head: CRC32 update......................................................       352              108083713843  [100.66 GB]  [Samples - 57395523] [Avg - 0.000006] [1029.49 GBPH]

    |*8045861*|*Perf*|7608600|      |_SDT-Head: Network transfer..................................................      1644              108083713843  [100.66 GB]  [Samples - 57395523] [Avg - 0.000029] [220.43 GBPH]

    |*8045861*|*Perf*|7608600| 

    |*8045861*|*Perf*|7608600| Writer Pipeline Modules

    |*8045861*|*Perf*|7608600|  |_SDT-Tail: Wait to receive data from source......................................    434608              108083771379  [100.66 GB]  [Samples - 57395524] [Avg - 0.007572] [0.83 GBPH]

    |*8045861*|*Perf*|7608600|  |_SDT-Tail: Writer Tasks..........................................................      2442              108083713843  [100.66 GB]  [Samples - 57395523] [Avg - 0.000043] [148.39 GBPH]

    |*8045861*|*Perf*|7608600|    |_DSBackup: Update Restart Info.................................................         7                          

    |*8045861*|*Perf*|7608600|    |_DSBackup: Media Write.........................................................      1728              105746321691  [98.48 GB] [205.17 GBPH]

    |*8045861*|*Perf*|7608600|      |_SIDB:CommitAndUpdateRecs[MediaAgent]........................................        29                          

    |*8045861*|*Perf*|7608600|      |_SIDB:CommitAndUpdateRecs[MediaAgent]........................................        32                          

    |*8045861*|*Perf*|7608600|      |_SIDB:CommitAndUpdateRecs[MediaAgent]........................................        49                          

    |*8045861*|*Perf*|7608600|      |_SIDB:CommitAndUpdateRecs[MediaAgent]........................................        37                          

    |*8045861*|*Perf*|7608600|      |_Writer: DM: Physical Write..................................................       233              104805125340  [97.61 GB] [1508.10 GBPH]

    |*8045861*|*Perf*|7608600| 



  • Re: NDMP Slow Performance
    Posted: 08-30-2019, 2:49 PM

    Did you have a chance to take a look at my ticket?

  • Re: NDMP Slow Performance
    Posted: 08-31-2019, 7:06 AM

    Hi Pereirath

    From the review of the logs provided, the bottleneck looks to be the network link between the filer and the MediaAgent.

    We can see the source is able to process the read at 40+ GB/hr. However, as the data transfers across the network to the MediaAgent (before the physical write), it is being received at 0.83 GB/hr.

    |*8045861*|*Perf*|7608600|      |_SDT-Head: Network transfer..................................................      1644              108083713843  [100.66 GB]  [Samples - 57395523] [Avg - 0.000029] [220.43 GBPH]
    |*8045861*|*Perf*|7608600| 
    |*8045861*|*Perf*|7608600| Writer Pipeline Modules
    |*8045861*|*Perf*|7608600|  |_SDT-Tail: Wait to receive data from source......................................    434608              108083771379  [100.66 GB]  [Samples - 57395524] [Avg - 0.007572] [0.83 GBPH]

    As you have mentioned, the data path is configured as round robin across MAs - does that mean there is more than 1 MA involved?

    It might be worthwhile to check the link speed between the filer and the MA, and also to confirm it is using a 10 Gbit link (the LIFs on the filer could be configured incorrectly and be causing the slowness).

    Regards

    Winston 

  • Re: NDMP Slow Performance
    Posted: 08-31-2019, 9:14 AM

    OK, I will take a look at that. Can you pull the ticket and assign it to yourself? I don't know if I can do that from my end. Plus, I don't have emails or full names, so I can't request that the current support person assign it to somebody else.

  • Re: NDMP Slow Performance
    Posted: 09-03-2019, 7:49 AM

    Hi pereirath

    I am currently looking into this matter for you.

    Regards

    Winston 
