RMAN DUPLICATE from DC to Azure: network performance

When running an RMAN DUPLICATE FOR STANDBY FROM ACTIVE DATABASE of an 800GB 19c database that sits in our datacenter to a new 19c database in an Azure VM, it takes too long (my SSH session timed out after half a day). The same operation between two machines sitting in our datacenter takes less than 2 hours (DOP 21).
I'm also using a DOP of 21, but the issue lies not with CPU power but with the network. The main wait event on the source during the RMAN DUPLICATE FOR STANDBY FROM ACTIVE DATABASE is "SQL*Net more data to client".
On the target database (which is not in OEM yet), it's "remote db file read".
So, 3 questions:
- to mitigate the "remote db file read", would it help to use FROM ACTIVE DATABASE USING COMPRESSED BACKUPSET?
- to mitigate the "more data to client", how can I configure RMAN to send bigger chunks of data?
- how do Azure users set up Dataguard (primary in datacenter, standbys in Azure)?
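For reference, a minimal sketch of the duplicate being discussed (the connect strings and channel count are placeholders, not this environment's actual names):

```
# hypothetical service names; DOP 21 mirrors the parallelism mentioned above
$ rman target sys@prim_dc auxiliary sys@stby_azure

RMAN> DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  USING COMPRESSED BACKUPSET;
```

USING COMPRESSED BACKUPSET makes the active duplication ship backup sets (compressed on the source) instead of raw image copies, which is what the compression suggestions below are aiming at.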
Answers
-
The same thing between 2 machines sitting in our datacenter takes less than 2 hours
2hrs local vs 6+ hours remote? Sounds about right.
I think your bottleneck is the internet itself
- What is the bandwidth of your internet connection that the DB uses?
- is it dedicated for this purpose? Or is it shared?
- How much bandwidth are you buying with your cloud solution?
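As a rough sanity check on whether the WAN link alone explains the timing, the wire time for 800GB at a few candidate bandwidths can be worked out (the bandwidth figures are illustrative assumptions, not measurements from this environment):

```python
# Rough wire-time estimate: 800 GB over links of various speeds.
# Bandwidths below are illustrative; real throughput will be lower
# due to protocol overhead and link sharing.

GB = 10**9                      # decimal GB, as disk/DB sizes are usually quoted
size_bytes = 800 * GB

def transfer_hours(bandwidth_mbps: float) -> float:
    """Hours to move size_bytes at bandwidth_mbps megabits per second."""
    bytes_per_sec = bandwidth_mbps * 10**6 / 8
    return size_bytes / bytes_per_sec / 3600

for mbps in (100, 500, 1000, 10000):
    print(f"{mbps:>6} Mbit/s -> {transfer_hours(mbps):5.1f} h")
```

On a shared 100 Mbit/s link the copy alone needs the better part of a day, while a dedicated 1 Gbit/s path brings it close to the ~2h local figure, so the answer to "is the internet the bottleneck" falls straight out of these numbers.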
-
I always use compressed backup sets. If you have the Advanced Compression licence, choose the highest algorithm there is. However, even basic compression will reduce the size, and therefore the transfer time, significantly. At the moment, you are copying whole image copies of datafiles across.
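The algorithm can be set persistently in RMAN; a sketch of the options (BASIC needs no extra licence, while MEDIUM and HIGH require the Advanced Compression option):

```
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';   # included with the database
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';  # Advanced Compression option
RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';    # best ratio, highest CPU cost
```

On a network-bound duplicate, HIGH usually wins despite the CPU cost, since the bottleneck is bytes on the wire rather than cycles on the source.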
-
Hello Frank!
Two questions:
- Do you have Express Route set up between your DC and your Azure environment?
- What is the type of storage that you're using to receive the backups on the Azure side?
Thanks,
Kellyn Gorman
Oracle SME on Azure, Microsoft
-
Thank you all for your help. I was able to get the answer to only one of your questions, but will get the rest soon. To Kellyn's question "What is the type of storage that you're using to receive the backups on the Azure side" => I found out that the disks behind our ASM diskgroups on our Azure VMs are of the "Premium SSD" type: https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssds
of the P30, P40 and P50 models, mostly P50 ("IOPS per disk: 7500", "Throughput per disk: 250MB/sec") and P30 ("IOPS per disk: 5000", "Throughput per disk: 200MB/sec").
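Taking the per-disk throughput caps quoted above at face value, a quick lower bound shows that even a single disk could absorb the 800GB in roughly an hour, so the Premium SSDs are unlikely to be the bottleneck here:

```python
# Lower bound on write time if a single disk's throughput cap were the
# limit. Per-disk MB/s caps are the Azure figures quoted above.

size_gb = 800
caps_mb_s = {"P30": 200, "P50": 250}

for disk, mb_s in caps_mb_s.items():
    hours = size_gb * 1000 / mb_s / 3600   # GB -> MB, then seconds -> hours
    print(f"{disk}: {hours:.2f} h to write {size_gb} GB at {mb_s} MB/s")
```

With several such disks striped behind an ASM diskgroup, aggregate throughput is higher still, which again points at the network rather than storage.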
We're a big Azure customer so I expect we have Express Route, but I'll get back to you with details.
-
I didn't manage to get answers to all your questions. Here's the progress I've made:
I got confirmation that we have Express Route
I used FROM ACTIVE DATABASE USING COMPRESSED BACKUPSET for my RMAN DUPLICATE and saw significant improvement: 2-3 hours (record time: 1h44min) to lift & shift about 800GB.
The main wait event is still "SQL*Net more data to client" so I'm still wondering if there's a way to have RMAN ship bigger chunks of data at a time from on-premise to Azure.
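One knob that directly targets "send bigger chunks of data" is the SQL*Net session data unit (SDU), which caps how much data Oracle Net buffers per send; it defaults to 8KB and can go up to 2MB in recent releases. A sketch of the relevant sqlnet.ora settings, with illustrative values rather than recommendations tuned for this environment (the SDU is negotiated, so it needs raising on both the on-premise and Azure sides, or in the individual connect descriptors):

```
# sqlnet.ora on BOTH source and target (values are illustrative)
DEFAULT_SDU_SIZE = 2097152      # 2MB, the maximum SDU from 12c onward

# Larger TCP socket buffers help on high-latency WAN links;
# size them toward the bandwidth-delay product of the Express Route path
RECV_BUF_SIZE = 4194304
SEND_BUF_SIZE = 4194304
```

A larger SDU means fewer, bigger Oracle Net packets per round trip, which is exactly the symptom "SQL*Net more data to client" points at on a long-haul link.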