Best practice for processing burden rate changes for a large volume of data?
Our burden schedules are in effect for one year, so when we update rates mid-year, expenditure items dating back to the beginning of the year must be reprocessed. A change to a frequently used rate, such as labor fringe or material handling, can mean that millions of rows are reprocessed. What do you suggest as best practice for handling rate changes? The steps include:

1. Updating the rates
2. Compiling the burden schedule
3. Distributing labor costs
4. Distributing material costs
5. Generating burden
6. Running revenue generation
7. Interfacing everything to GL
8. Running Update Project Summary Amounts