RMAN restore extremely slow reading from NetApp mountpoint
We've been running an RMAN restore (Oracle 11.2) that reads backup sets from a filesystem on an NFS-mounted NetApp appliance.
All statistics (e.g. the column V$BACKUP_ASYNC_IO.EFFECTIVE_BYTES_PER_SECOND) show that RMAN has been reading at an average of 3 to 4 MB/s for more than 48 hours now, opening backup set pieces of around 32 GB each and reading them one after another: about 640 GB so far.
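For what it's worth, those figures are internally consistent. A quick sanity check (assuming the 640 GB is GiB and the window is exactly 48 hours) lands right in the 3-4 MB/s range the view reports:

```python
# Sanity check: does ~640 GB in ~48 hours match the 3-4 MB/s RMAN reports?
bytes_read = 640 * 1024**3        # ~640 GB read so far (taken as GiB)
elapsed_s = 48 * 3600             # 48 hours of elapsed restore time
mb_per_s = bytes_read / elapsed_s / 1024**2
print(round(mb_per_s, 2))         # -> 3.79
```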
The backup set files being read are then copied to another filesystem with the same characteristics as the disk the backups are read from.
Funnily enough, when I rsync from that same source filesystem to the same NetApp destination filesystem, I get 97.38 MB/s of copy speed, whereas iotop on the working RMAN process ID shows only 4.67 MB/s!
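To put that gap in perspective, dividing the two measured rates shows rsync moving data roughly twenty times faster than the RMAN channel on the same storage:

```python
# Ratio of rsync throughput to RMAN throughput on the same NFS mounts
rsync_mb_s = 97.38   # rsync between the same source and destination filesystems
rman_mb_s = 4.67     # per iotop on the RMAN server process
print(round(rsync_mb_s / rman_mb_s, 1))  # -> 20.9
```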
We have no idea where this slowness comes from. Where would you start your investigation? (So far we haven't received much help from other teams apart from the SAN team, but the problem could also be in the network, some card configuration somewhere... no idea.)
Thanks in advance.