A question about RMAN restore performance on ASM

Environment: RHEL 6.9, udev ASM disks, DB 11.2.0.4
1) Scenario A: MEMORY_TARGET=4GB, PGA_TARGET=1GB, with the default _backup_disk/file_bufsz/cnt values.
The full restore completed in 42 hours, with approximately 21 MB of PGA used per channel.
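The restore script itself isn't shown in this thread; purely for context, a hypothetical four-channel restore (the channel count inferred from the four "RMAN backup & recovery I/O" processes listed further down; the "MML read backup piece" waits later in the thread suggest a media manager, so DEVICE TYPE sbt is assumed here, substitute DISK for a disk restore) might look roughly like:

RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
  ALLOCATE CHANNEL c2 DEVICE TYPE sbt;
  ALLOCATE CHANNEL c3 DEVICE TYPE sbt;
  ALLOCATE CHANNEL c4 DEVICE TYPE sbt;
  RESTORE DATABASE;
  RECOVER DATABASE;
}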
COMPONENT CURRENT_SIZE MIN_SIZE USER_SPECIFIED_SIZE TYPE (sizes in MB)
---------------------------------------------------------------- ------------ ---------- ------------------- -------------
shared pool 656 528 0 GROW
large pool 416 416 0 STATIC
java pool 64 64 0 STATIC
streams pool 0 0 0 STATIC
SGA Target 3072 3072 0 STATIC
DEFAULT buffer cache 1888 1888 0 SHRINK
KEEP buffer cache 0 0 0 STATIC
RECYCLE buffer cache 0 0 0 STATIC
DEFAULT 2K buffer cache 0 0 0 STATIC
DEFAULT 4K buffer cache 0 0 0 STATIC
DEFAULT 8K buffer cache 0 0 0 STATIC
DEFAULT 16K buffer cache 0 0 0 STATIC
DEFAULT 32K buffer cache 0 0 0 STATIC
Shared IO Pool 0 0 0 STATIC
PGA Target 1024 1024 1024 STATIC
ASM Buffer Cache 0 0 0 STATIC
16 rows selected.
PARAMETER VALUE DESCRIPTION Default?
--------------------- ------------------ -------------------------------------------------- ----------
_pga_max_size 209715200 Maximum size of the PGA memory for one process TRUE
_smm_max_size 102400 maximum work area size in auto mode (serial) TRUE
_smm_px_max_size 524288 maximum work area size in auto mode (global) TRUE
pga_aggregate_target 1073741824 Target size for the aggregate PGA memory consumed by the instance FALSE
KSPPINM KSPPSTVL KSPPDESC
------------------------------ --------------- -------------------------------------------------------
_backup_disk_io_slaves 0 BACKUP Disk I/O slaves
_backup_ksfq_bufcnt_max 64 maximum number of buffers used for backup/restore
_backup_ksfq_bufsz 0 size of buffers used for backup/restore
_backup_ksfq_bufcnt 0 number of buffers used for backup/restore
_backup_disk_bufsz 0 size of buffers used for DISK channels
_backup_disk_bufcnt 0 number of buffers used for DISK channels
_backup_file_bufsz 0 size of buffers used for file access
_backup_file_bufcnt 0 number of buffers used for file access
8 rows selected.
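For reference, a listing like the one above is typically produced by joining the x$ksppi and x$ksppcv fixed tables as SYSDBA (the column names KSPPINM/KSPPSTVL/KSPPDESC match those tables); a minimal sketch, assuming SYSDBA access:

SELECT i.ksppinm, v.ksppstvl, i.ksppdesc
  FROM x$ksppi i, x$ksppcv v
 WHERE i.indx = v.indx
   AND i.ksppinm LIKE '\_backup\_%' ESCAPE '\';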
SPID PROGRAM EVENT PGA_USED_MEM PGA_MAX_MEM
------------------------ ------------------------------------ ------------------------------ ------------ -----------
133295 [email protected] (TNS V1-V3) SQL*Net message from client 106519950 113586358
133308 [email protected] (TNS V1-V3) RMAN backup & recovery I/O 21421923 24588470
133306 [email protected] (TNS V1-V3) RMAN backup & recovery I/O 21421539 24588470
133310 [email protected] (TNS V1-V3) RMAN backup & recovery I/O 21421499 24588470
133305 [email protected] (TNS V1-V3) RMAN backup & recovery I/O 20685411 23998646
133302 [email protected] (TNS V1-V3) SQL*Net message from client 980323 1913014
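The per-process listing above looks like the standard v$process/v$session join; a minimal sketch, assuming the usual s.paddr = p.addr join:

SELECT p.spid, p.program, s.event, p.pga_used_mem, p.pga_max_mem
  FROM v$process p, v$session s
 WHERE s.paddr = p.addr
 ORDER BY p.pga_used_mem DESC;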
SQL> SELECT SUM(pga_used_mem), SUM(pga_alloc_mem), SUM(pga_max_mem) FROM v$process p;
SUM(PGA_USED_MEM) SUM(PGA_ALLOC_MEM) SUM(PGA_MAX_MEM)
----------------- ------------------ ----------------
76895603 83603020 90222156
2) Scenario B: the same restore, changing only the default _backup_disk/file_bufsz/cnt values to the following custom values:
The same full restore completed in 40 hours, with approximately 500 MB of PGA used per channel.
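The exact commands used aren't shown; a sketch of how such underscore parameters are commonly set (test systems only, and normally only under Oracle Support guidance; SCOPE=SPFILE plus an instance restart is the conservative route), using the values that appear in the listing below:

ALTER SYSTEM SET "_backup_disk_bufsz"  = 4194304 SCOPE=SPFILE;
ALTER SYSTEM SET "_backup_disk_bufcnt" = 16 SCOPE=SPFILE;
ALTER SYSTEM SET "_backup_file_bufsz"  = 4194304 SCOPE=SPFILE;
ALTER SYSTEM SET "_backup_file_bufcnt" = 16 SCOPE=SPFILE;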
COMPONENT CURRENT_SIZE MIN_SIZE MAX_SIZE USER_SPECIFIED_SIZE TYPE (sizes in MB)
---------------------------------------------------------------- ------------ ---------- ---------- ------------------- -
shared pool 656 528 656 0 GROW
large pool 416 416 416 0 STATIC
java pool 64 64 64 0 STATIC
streams pool 0 0 0 0 STATIC
SGA Target 3072 3072 3072 0 STATIC
DEFAULT buffer cache 1888 1888 1920 0 SHRINK
KEEP buffer cache 0 0 0 0 STATIC
RECYCLE buffer cache 0 0 0 0 STATIC
DEFAULT 2K buffer cache 0 0 0 0 STATIC
DEFAULT 4K buffer cache 0 0 0 0 STATIC
DEFAULT 8K buffer cache 0 0 0 0 STATIC
DEFAULT 16K buffer cache 0 0 0 0 STATIC
DEFAULT 32K buffer cache 0 0 0 0 STATIC
Shared IO Pool 0 0 0 0 STATIC
PGA Target 1024 1024 1024 1024 STATIC
ASM Buffer Cache 0 0 0 0 STATIC
16 rows selected.
KSPPINM KSPPSTVL KSPPDESC
------------------------------ --------------- -------------------------------------------------------
_backup_disk_io_slaves 0 BACKUP Disk I/O slaves
_backup_ksfq_bufcnt_max 64 maximum number of buffers used for backup/restore
_backup_ksfq_bufsz 0 size of buffers used for backup/restore
_backup_ksfq_bufcnt 0 number of buffers used for backup/restore
_backup_disk_bufsz 4194304 size of buffers used for DISK channels
_backup_disk_bufcnt 16 number of buffers used for DISK channels
_backup_file_bufsz 4194304 size of buffers used for file access
_backup_file_bufcnt 16 number of buffers used for file access
8 rows selected.
SPID PROGRAM EVENT PGA_USED_MEM PGA_MAX_MEM
------------------------ ------------------------------------------------ --------------------------------- ------------
427743 [email protected] (TNS V1-V3) Backup: MML read backup piece 541512974 549973526
427750 [email protected] (TNS V1-V3) Backup: MML read backup piece 541474342 549793974
427745 [email protected] (TNS V1-V3) Backup: MML read backup piece 541397894 549793974
427747 [email protected] (TNS V1-V3) Backup: MML read backup piece 541364902 549793974
427735 [email protected] (TNS V1-V3) SQL*Net message from client 106519950 113586358
427742 [email protected] (TNS V1-V3) SQL*Net message from client 1129766 1454262
6 rows selected.
SQL> SELECT SUM(pga_used_mem), SUM(pga_alloc_mem), SUM(pga_max_mem) FROM v$process p;
SUM(PGA_USED_MEM) SUM(PGA_ALLOC_MEM) SUM(PGA_MAX_MEM)
----------------- ------------------ ----------------
2350909654 2396434428 2396434428
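As a rough check: the summed PGA_USED_MEM of 2,350,909,654 bytes is about 2.2 GB, more than double the 1 GB pga_aggregate_target. That is consistent with the target being advisory in 11g: RMAN I/O buffers count as untunable PGA, which the target does not cap.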
So my question is: why does the restore time of the two scenarios differ by only about 5%, while the PGA_USED_MEM per channel differs by a factor of at least 10?
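One plausible back-of-envelope reading (an assumption, not a confirmed diagnosis): with _backup_disk_bufsz = 4194304 and _backup_disk_bufcnt = 16, each file stream gets 4 MB x 16 = 64 MB of buffers, so a channel concurrently handling around 8 datafile streams from a multiplexed backup piece would allocate roughly 8 x 64 MB = 512 MB, which lines up with the ~500 MB per channel observed. If the restore is bottlenecked below the buffer layer (media manager, network, or disk), larger buffers mostly just hold more data in flight, which would explain why the elapsed time improved only ~5%.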
Answers
-
Hi,
What is the size of the database you are restoring?
Those parameters act as drivers for backup and restore. Please share the above information; it would be helpful for making further suggestions.
Thanks
Pavan kumar N
-
19 TB.
And it seems there is no way to manually adjust or optimize the equivalent ASM disk I/O buffer size or count.
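For context, a back-of-envelope throughput check from the figures in this thread: 19 TB in roughly 40 hours is about 19 x 1024 x 1024 MB / 144,000 s, or roughly 138 MB/s aggregate, i.e. about 35 MB/s per channel across four channels. If the media manager or network cannot sustain more than that, buffer tuning inside the database would not be expected to change the elapsed time much.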
-
These days we found that the overly slow restore problem had nothing to do with RMAN itself...
-
So would you care to fill us in on what it actually was, since you now assert that it "had nothing to do with RMAN itself"?
-
Because we tested a third scenario, Scenario C, increasing the pga_target size from 2 GB to 3 GB.
With that, the average EFFECTIVE_BYTES_PER_SECOND (EBPS) nearly doubled from 8 MB/s to 15 MB/s, and the long_waits/io_count ratio dropped below 10% across the board.
So the problem at the database-internal layer could basically be ruled out, but the total full RMAN restore still took almost the same 40 hours!
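The per-file listing below was presumably produced by a query against v$backup_async_io; a minimal sketch using the columns visible in the output (FILENAME, IO_COUNT, READY, SHORT_WAITS, LONG_WAITS, CLOSE_TIME, EFFECTIVE_BYTES_PER_SECOND):

SELECT filename, io_count, ready, short_waits, long_waits,
       ROUND(100 * long_waits / io_count, 1) AS pct_long_waits,
       TO_CHAR(close_time, 'MM/DD/YYYY HH24:MI:SS') AS closed_at,
       effective_bytes_per_second AS eps
  FROM v$backup_async_io
 WHERE io_count > 0
 ORDER BY close_time DESC;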
SQL> select avg(EFFECTIVE_BYTES_PER_SECOND) from v$backup_async_io;
AVG(EFFECTIVE_BYTES_PER_SECOND)
-------------------------------
15078774.1
IO_COUNT READY SHORT_WAITS LONG_WAITS FILENAME CLOSE_TIME EPS
---------- ---------- ----------- ---------- -------------------------------------------------------- ------------------- ----------
27 26 0 1 +DATA/cpemsdb/data_d-cpemsdb_ts-h_tzxx_s1_fno-389 11/27/2020 23:46:53 394202
27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-data_m_fno-198 11/27/2020 23:46:51 389805
7 7 0 0 +DATA/cpemsdb/data_d-cpemsdb_ts-bbxx_fno-38 11/27/2020 23:46:33 2097152
7682 7104 0 578 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_s_fno-676 11/27/2020 23:46:25 4145188
7682 6905 0 777 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_p_fno-612 11/27/2020 23:46:21 4141990
7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-515 11/27/2020 23:42:24 4266524
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_a_fno-401 11/27/2020 23:42:18 4265394
7682 7167 0 515 +DATA/cpemsdb/data_d-cpemsdb_ts-data_p_fno-316 11/27/2020 23:41:51 4276720
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-data_out_fno-260 11/27/2020 23:41:25 4288106
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-data_man_fno-203 11/27/2020 23:40:57 4301850
7682 7047 0 635 +DATA/cpemsdb/data_d-cpemsdb_ts-data_a_fno-43 11/27/2020 23:40:14 4322052
82537 76975 0 5562 11/27/2020 22:22:57 37828972
7682 7137 0 545 +DATA/cpemsdb/data_d-cpemsdb_ts-data_out_fno-781 11/27/2020 22:22:57 11422785
5122 4773 0 349 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-439 11/27/2020 22:16:12 8910721
4098 3821 0 277 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-504 11/27/2020 22:10:47 8271483
4098 3821 0 277 +DATA/cpemsdb/data_d-cpemsdb_ts-data_arc_fno-115 11/27/2020 22:10:45 8263525
7682 6859 0 823 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-780 11/27/2020 21:44:07 28658590
82537 76630 0 5907 11/27/2020 21:44:07 44843779
5122 4746 0 376 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-438 11/27/2020 21:40:36 23650701
4098 3812 0 286 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-498 11/27/2020 21:39:24 20773723
4098 3813 0 285 +DATA/cpemsdb/data_d-cpemsdb_ts-data_arc_fno-97 11/27/2020 21:39:22 20698638
27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_yxbz_fno-689 11/27/2020 21:36:17 14979657
27 26 0 1 +DATA/cpemsdb/data_d-cpemsdb_ts-data_ics_fno-193 11/27/2020 21:36:15 20971520
27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-hgqjd_fno-384 11/27/2020 21:36:15 20971520
7682 7168 0 514 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_s_fno-674 11/27/2020 21:36:11 5206442
7682 7168 0 514 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_out_fno-609 11/27/2020 21:36:10 5191338
7682 7169 0 513 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-501 11/27/2020 21:36:10 5175491
7682 7168 0 514 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_a_fno-399 11/27/2020 21:36:10 5157261
7682 7169 0 513 +DATA/cpemsdb/data_d-cpemsdb_ts-data_p_fno-314 11/27/2020 21:36:06 5139980
7682 7169 0 513 +DATA/cpemsdb/data_d-cpemsdb_ts-data_out_fno-258 11/27/2020 21:36:02 5125259
7682 7168 0 514 +DATA/cpemsdb/data_d-cpemsdb_ts-data_man_fno-201 11/27/2020 21:35:57 5113056
82542 76562 0 5980 11/27/2020 21:35:52 44881392
7682 6787 0 895 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-779 11/27/2020 21:35:52 27531842
7682 7168 0 514 +DATA/cpemsdb/data_d-cpemsdb_ts-arc_r_data_fno-37 11/27/2020 21:35:49 5096876
5122 4753 0 369 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-437 11/27/2020 21:33:46 20930640
4098 3811 0 287 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-496 11/27/2020 21:32:59 18046081
4098 3812 0 286 +DATA/cpemsdb/data_d-cpemsdb_ts-data_arc_fno-94 11/27/2020 21:32:57 17877075
27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_y_fno-688 11/27/2020 21:25:50 8738133
27 24 0 3 +DATA/cpemsdb/data_d-cpemsdb_ts-goldengate_fno-383 11/27/2020 21:25:49 9532509
27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-data_ccom_fno-176 11/27/2020 21:25:47 10485760
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_out_fno-608 11/27/2020 21:25:38 5028451
7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_s_fno-673 11/27/2020 21:25:38 5048151
7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-500 11/27/2020 21:25:37 5003457
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_a_fno-398 11/27/2020 21:25:37 4974866
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-data_p_fno-313 11/27/2020 21:25:32 4954207
7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-data_out_fno-257 11/27/2020 21:25:28 4933720
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-data_man_fno-200 11/27/2020 21:25:23 4915650
7682 7167 0 515 +DATA/cpemsdb/data_d-cpemsdb_ts-arc_r_data_fno-36 11/27/2020 21:25:10 4895479
52 46 0 6 +DATA/cpemsdb/data_d-cpemsdb_ts-veritas_apm_fno-728 11/27/2020 21:17:45 5991863
27 26 0 1 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_tzxx_s3_fno-686 11/27/2020 21:17:34 4766255
7 7 0 0 +DATA/cpemsdb/data_d-cpemsdb_ts-h_tzxx_s3_fno-394 11/27/2020 21:17:21 3495253
7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_s_fno-672 11/27/2020 21:17:16 4987189
7682 7164 0 518 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_out_fno-607 11/27/2020 21:17:13 4975634
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-499 11/27/2020 21:17:11 4964132
7682 7164 0 518 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_a_fno-397 11/27/2020 21:17:08 4949640
7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-data_p_fno-312 11/27/2020 21:16:57 4943563
7682 7166 0 516 +DATA/cpemsdb/data_d-cpemsdb_ts-data_out_fno-256 11/27/2020 21:16:41 4941288
7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-data_man_fno-199 11/27/2020 21:16:24 4943563
7682 7165 0 517 +DATA/cpemsdb/data_d-cpemsdb_ts-arc_r_data_fno-35 11/27/2020 21:15:53 4946599
82542 76436 0 6106 11/27/2020 19:51:17 32569209
7682 6928 0 754 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_out_fno-778 11/27/2020 19:51:17 18449172
7682 6966 0 716 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_out_fno-777 11/27/2020 19:49:28 19311903
82542 76530 0 6012 11/27/2020 19:49:28 32480564
5122 4761 0 361 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-436 11/27/2020 19:47:09 14569089
5122 4760 0 362 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-432 11/27/2020 19:45:39 15101854
4098 3814 0 284 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_d_fno-488 11/27/2020 19:45:28 12678870
4098 3815 0 283 +DATA/cpemsdb/data_d-cpemsdb_ts-data_a_fno-88 11/27/2020 19:45:26 12576771
4098 3821 0 277 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-472 11/27/2020 19:43:57 13144506
4098 3818 0 280 +DATA/cpemsdb/data_d-cpemsdb_ts-data_a_fno-87 11/27/2020 19:43:56 13114404
82542 76571 0 5971 11/27/2020 19:34:44 34954087
7682 6911 0 771 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_man_fno-775 11/27/2020 19:34:44 18826566
5122 4755 0 367 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-431 11/27/2020 19:31:15 14648592
4098 3816 0 282 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_arc_fno-471 11/27/2020 19:29:36 12956161
4098 3818 0 280 +DATA/cpemsdb/data_d-cpemsdb_ts-data_a_fno-84 11/27/2020 19:29:36 12707004
82589 76631 0 5958 11/27/2020 19:26:49 35341656
7554 7014 0 540 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_out_fno-611 11/27/2020 19:26:49 10795973
5618 5230 0 388 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_e_fno-516 11/27/2020 19:24:24 8581133
52 48 0 4 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_pay_fno-661 11/27/2020 19:23:48 3883615
27 25 0 2 +DATA/cpemsdb/data_d-cpemsdb_ts-idx_sjics_fno-679 11/27/2020 19:23:35 6990507
7 7 0 0 +DATA/cpemsdb/data_d-cpemsdb_ts-h_tzxx_s3_fno-393 11/27/2020 19:23:25 5242880