Where do you see CPU utilization of 3 to 5%?
On the database server?
On the server on which ODI Studio is installed?
On the server on which the ODI Agent is installed?
For a mapping, ODI Agent will generate the SQL scripts, but the execution is done in the database.
It is on the database server.
From ODI you can't configure CPU utilization for the database server.
Hence, this question is better raised with the Oracle DB community.
If you have I/O bandwidth to spare, go parallel at the database level. (Your DBA will be able to tell you if you're unsure.)
The idea is to raise the degree of parallelism in steps until you just about use up all the bandwidth you have available.
Assuming an Oracle database: start small, setting a degree of three or four using hints in your code.
In theory AutoDOP should have the database manage parallelism for you, but I've never gotten that to work well. It tends to over-parallelize, using up lots of CPU without actually increasing throughput.
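The hint approach above can be sketched like this (table and column names are hypothetical; the degree is the small starting value suggested):

```sql
-- Parallel DML must be enabled at the session level before a parallel INSERT.
ALTER SESSION ENABLE PARALLEL DML;

-- PARALLEL(alias, n) requests n parallel execution servers for that table.
INSERT /*+ APPEND PARALLEL(tgt, 4) */ INTO sales_summary tgt
SELECT /*+ PARALLEL(src, 4) */
       product_id, SUM(amount)
FROM   sales_fact src
GROUP  BY product_id;
```

Raise the degree in steps (4, then 8, then 16) and stop as soon as the extra degree no longer improves throughput.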
As my application is OLAP and not OLTP, DOP can offer benefits (though a proof of concept will be required).
I understand your problem. If you used the Auto DOP setting, it might be great for the first large statement you send in, as that one will get a lot of resources. However, the next statement (or the one after that) that needs to be executed while the first one is still running will have far fewer resources available. This can all be configured, of course, but that is not an easy task. The very first thing we need to know is our application: what statements will be executed, and when? Will they be executed at the same time or sequentially, etc.?
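For reference, AutoDOP is controlled at the instance level by parameters like these (values are illustrative; whether AUTO actually helps is exactly the trade-off described above):

```sql
-- Let the optimizer compute the degree of parallelism per statement.
ALTER SYSTEM SET parallel_degree_policy = AUTO;

-- Only statements estimated to run longer than this many seconds go parallel.
ALTER SYSTEM SET parallel_min_time_threshold = 10;
```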
Well, to use more DB CPU you need to consider two things:
If your job runs only on the DB side and does not use the Agent:
Then you definitely want ODI to send parallel jobs and also use parallel hints in your jobs. You also need to calculate how many jobs will be running in parallel, and set the parallel hint so that you don't oversubscribe the server. If you do, it can be slower than not using hints at all.
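As a rough sketch of that sizing (numbers are illustrative): if the database host has 32 CPU cores and ODI will run 4 of these jobs concurrently, a per-job degree of about 32 / 4 = 8 keeps the total requested parallel servers near the core count:

```sql
-- 32 cores / 4 concurrent ODI jobs ~= degree 8 per statement.
-- Each concurrent job's statement then requests at most 8 parallel servers.
SELECT /*+ PARALLEL(f, 8) */ customer_id, COUNT(*)
FROM   orders_fact f
GROUP  BY customer_id;
```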
If the job you are running uses the agent to insert data into another DB:
Then you can either use a DBLink KM and continue with the previous considerations or, if a DBLink is not possible, you need to tune the ODI topology: in the data server, set the array fetch size and the batch update size to make sure you are fetching and sending the optimal amount of data between servers through the agent.
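With a DBLink KM, the generated statement effectively moves the data between the databases directly, something like this (link and table names are hypothetical):

```sql
-- Pull rows straight from the remote database over a database link,
-- bypassing the agent's row-by-row fetch/insert path.
INSERT /*+ APPEND */ INTO stg_customers
SELECT customer_id, name, region
FROM   customers@src_db_link;
```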
One tip here: bigger is not always better, so you need to try different values. I suggest increasing a lot, then decreasing a lot, and then homing in on the middle ground.
Just see which execution is faster.
This way you'll maximize the CPU usage in both cases.