Performance improvement when there are a large number of mappings in Data Management
Summary: According to business needs, PBCS needs to extract trial balance (TB) data from Fusion GL. Mappings between the cost center codes in Fusion GL and the store codes in PBCS need to be created in Data Management. About 30,000+ code mappings are required, including both one-to-one and many-to-one mappings, and none of the source codes are the same as their target codes (e.g., one-to-one: map 1010001 to 19830001; many-to-one: map 1010002, 1010003, and 1010004 to 19830002). However, such a large number of mappings in Data Management will affect the performance of the data extraction. Is there any way to improve the performance?
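As a minimal illustration of the mapping shapes described above (not Data Management's actual implementation), both the one-to-one and many-to-one cases can be expressed as a single explicit lookup where every source cost center resolves to exactly one target store code. The dictionary contents and function name below are hypothetical, using the example codes from the summary.

```python
# Sketch only: illustrates the one-to-one and many-to-one mapping shapes
# described in the summary; codes are the examples given above.
cost_center_to_store = {
    "1010001": "19830001",  # one-to-one: single source -> single target
    "1010002": "19830002",  # many-to-one: three sources share
    "1010003": "19830002",  # the same target store code
    "1010004": "19830002",
}

def resolve_store(cost_center: str) -> str:
    """Return the PBCS store code for a Fusion GL cost center, or raise if unmapped."""
    try:
        return cost_center_to_store[cost_center]
    except KeyError:
        raise ValueError(f"No mapping defined for cost center {cost_center}")

if __name__ == "__main__":
    print(resolve_store("1010003"))  # -> 19830002
```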