I have a task, SDE_ORA_GLCOGSFact_Full, that has been running for 23 hours and has affected only 8 million records so far. The throughput is very low, around 100 rows/sec.
We are on 126.96.36.199 running on a Linux platform. This is the first out-of-the-box load, and we have moved the initial extract date up to 01/01/2010 to reduce the load.
In the workflow session properties, the commit interval is 10000, the DTM buffer size is 32000000, the default buffer block size is 1280000, and the line sequential buffer length is 1024. The only thing I could think of is increasing the commit interval. Is there anything else that could be causing this task to be this slow?
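As a quick sanity check on the numbers reported above, the arithmetic below confirms that 8 million rows over 23 hours works out to roughly the ~100 rows/sec figure quoted:

```python
# Sanity-check the reported throughput: 8 million rows in 23 hours.
rows = 8_000_000
hours = 23
rows_per_sec = rows / (hours * 3600)
print(f"{rows_per_sec:.0f} rows/sec")  # ~97 rows/sec, consistent with the ~100 rows/sec reported
```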
There are a number of things aside from the commit interval. Did you check the Informatica session logs to see where the time is being consumed (read, write, or transformation)? If it's the READ, you may have to add custom indexes on the source system side. Since this is a FULL load, it will truncate and do a bulk load, which is usually fast with the OOTB settings. I would start with the session logs and then diagnose further.
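One way to act on this advice is to scan the session log for the per-thread busy percentages and see whether the reader, transformation, or writer thread dominates. The sketch below assumes log lines of the rough form `Thread [READER_1_1_1] ... Busy Percentage = [99.5]`; the exact wording varies by PowerCenter version, so treat the regexes as illustrative rather than authoritative:

```python
import re

# Regexes for the thread name and its busy percentage; the bracketed
# log format is an assumption and may differ between PowerCenter versions.
THREAD_RE = re.compile(r"Thread \[(\w+)\]")
BUSY_RE = re.compile(r"Busy Percentage = \[([\d.]+)\]")

def busiest_threads(log_text):
    """Return (thread, busy %) pairs sorted highest first.

    The busiest stage (READER / TRANSF / WRITER) is where tuning
    effort should go: a pegged reader points at source-side SQL or
    missing indexes; a pegged writer points at the target load.
    """
    results = []
    current = None
    for line in log_text.splitlines():
        m = THREAD_RE.search(line)
        if m:
            current = m.group(1)
        b = BUSY_RE.search(line)
        if b and current:
            results.append((current, float(b.group(1))))
            current = None
    return sorted(results, key=lambda pair: -pair[1])
```

For example, if the reader thread reports 99.5% busy while the writer sits at 12%, the bottleneck is almost certainly the extract query on the source system.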
Ahsan makes some good points. In addition, do you have access to Enterprise Manager? You can monitor the SQLs being run against the source and target (assuming both Oracle DBs) and debug from there. For example, you can take a look at the explain plans to identify if any optimizer hints will give you better performance.
1. Is this the longest-running task? Using the DAC, are you able to sort the tasks by duration? Do you notice consistently slow performance across all the tasks that ran before this COGS task?