Oracle Transactional Business Intelligence

Any advice on version control for OTBI reports?

Is there a good way to deploy some sort of version control within OTBI?

We work with a third party and also make updates internally, and sometimes we step on each other's toes. We can, of course, control access, but there are multiple people who should be able to modify reports. We can also control changes via our service desk, but we would prefer a solution closer to the actual report.

  • Nathan CCC

    Hi. From our development environment we extract folders and files as .catalog files from the catalog, then store these in a version control repository (like Git, but other brands are available!). Then we unarchive to the target test/production environments from source control. So you get multiple versions over time, with some control over what was released when, by whom, and for what reason.

    But because the files are binary, this is not great for branching and merging hot fixes etc. For that to work, you might consider going the extra step of storing the XML for each of your analyses in source control.

    For this to work well, the different people and teams need a good understanding of the folder structures you have created, so that everyone knows who is responsible for releasing which files from which folders.

    This does not solve the problem of people making changes directly in the target test/production environment, which are then lost the next time you release from development. To solve that, we have made it impossible (unless you are an admin user): we changed the permissions in test/production to stop the BIAuthor role from having write privileges on the folders we release from version control. This means those folders can only be updated by doing a release. We have some other folders that are not locked down, where end users can create content in production that we never release to.
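A small sketch of one way to keep a traceable record of "what was released when, by whom, for what reason" for binary .catalog exports, since git can't diff them meaningfully. This is not part of any Oracle tooling; the paths, field names, and manifest layout are all my own assumptions.

```python
# Hypothetical helper: hash each exported .catalog file and build a release
# manifest that can be committed alongside the binaries. Field names and the
# export directory layout are assumptions for illustration only.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def build_release_manifest(export_dir, author, reason):
    """Record a sha256 per .catalog export so releases stay auditable."""
    entries = []
    for path in sorted(Path(export_dir).glob("*.catalog")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        entries.append({"file": path.name, "sha256": digest})
    return {
        "released_at": datetime.now(timezone.utc).isoformat(),
        "author": author,
        "reason": reason,
        "files": entries,
    }

# Example usage (paths are made up):
# manifest = build_release_manifest("exports/", "nathan", "hotfix for AP report")
# Path("exports/manifest.json").write_text(json.dumps(manifest, indent=2))
```

Committing the manifest next to the .catalog files gives reviewers something readable to look at in the repository even when the payload itself is opaque.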

    Regards, Nathan

  • Rob F

    We don't have source control right now, so I feel your struggles. I've been looking at it, and the best I've come up with is extracting the XML for analyses and/or archive files, as suggested by Nathan. I'm not sure if there's a better way to use source control on other objects (filters, dashboard prompts, BI Publisher layouts, etc.).

    As with anything, it relies on the vigilance of the report developers, as nothing stops you from making the change directly in the application.

    I'd love to hear if anyone else has suggestions that I can learn from too.

  • Bhaskar Konar

    As SaaS customers we don't have much control over customization, releases, etc. Nathan has already described the process, and that's probably what most of us are following.

    I wanted to add a few more bits, like configuring the Audit / Usage Tracking reports, which provide another way of looking at components: if there are any changes, who made them and when.

    Scheduling a backup process and a report deployment process internally will help prevent any disaster in the future, as will strictly monitoring access to and usage of the BI Administrator role.

    Taking the XML individually is a big, time-consuming task; I would rather take an archive of the catalog so that all the components come together. It can be done component-wise or as a whole, folder-wise.

    If someone is lucky enough to have DWH / OBIA / on-premise applications, they can do much more compared to a SaaS customer, like automated backups from the BI servers, automated version control, and automated deployment of the RPD, web catalog, etc.

    Hope this helps.


  • @Nathan CCC @Bhaskar Konar-83539 @Rob F

    When you talk about storing the XML of each data model in git for version control, do you mean the XML inside the folder we get when we download using the XMLPSERVER ExternalReportService API?

  • Rob F

    Hi @sjain26

    I've made some strides since March, when we were first looking at this. I store the XML files from OTBI analyses; it's handy to have a flat XML file to put into source control. Data models from BI Publisher become a mix of flat SQL files, one for each dataset or query-based list of values, plus markdown files for documentation.

    Markdown files are the best I've come up with for storing changes to objects that don't have easy code files to pull out. It still requires vigilance from me to document everything, and it's far from perfect, but it's what I've done.

    I haven't tried pulling files down using APIs. I'm always open to finding better ways for source control.

    Kind Regards


  • @Rob F

    OK, got it, so you are storing the XML from the Advanced tab behind an OTBI dashboard analysis in git. Are you copy-pasting the XML from that window, or using some API to download it?

    Are you then using some tool to help deploy that dashboard from dev to a test or prod Fusion instance? Or is it mostly used for versioning, with deployment to a new instance done by copy-pasting the XML back into the new instance?

    And the markdown files for documenting BIP changes: are you entering info on what was added or changed in a query behind a data model?

    Are you using any automation here to deploy BIP files?

    I am trying to explore how I can store just the raw SQL code for data models in git and use some other tool to actually create a data model and load it to the target environment (this would allow code review in git, seeing changes to a query compared to the previous version, etc.). But it seems like this is impossible outside of Fusion. It seems the only way to get a little automation is to download the .xdm or .catalog file via an API and store it as a binary in git, then load that into a target environment, in which case code review cannot be done in git.
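For the review side of this, one hedged option is to split the SQL out of a downloaded .xdm file into flat .sql files that git can diff, while still deploying the original .xdm. The element names and structure below are assumptions about what a typical .xdm export looks like; inspect your own files and adjust the search accordingly.

```python
# Hedged sketch: pull SQL text out of a BI Publisher .xdm (XML) file so the
# queries can be reviewed as flat .sql files in git. Tag names ("dataSet",
# "sql", the "name" attribute) are assumptions -- verify against your exports.
import xml.etree.ElementTree as ET
from pathlib import Path

def extract_sql(xdm_path, out_dir):
    """Write one .sql file per <sql> element found under each <dataSet>."""
    tree = ET.parse(xdm_path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for elem in tree.iter():
        # Strip any {namespace} prefix so the walk works regardless of schema.
        if elem.tag.split("}")[-1] == "dataSet":
            name = elem.get("name", "unnamed")
            for child in elem.iter():
                if child.tag.split("}")[-1] == "sql" and child.text:
                    target = out / f"{name}.sql"
                    target.write_text(child.text.strip())
                    written.append(target.name)
    return written
```

This does not solve automated deployment, but it gives reviewers a readable diff per dataset while the binary-ish .xdm remains the deployable artifact.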

  • Rob F

    Hi @sjain26 ,

    As mentioned, I haven't pulled down files through APIs.

    I think I've seen tools you can pay for that might handle some of this automation for promotions. Unfortunately I don't have the time or resources to get them; I'm sure Google is your friend there.

    For data models in the catalog, I've been using a folder structure to keep SQL files for the datasets or lists of values that are based on SQL queries. Markdown files are my backup when something doesn't fit easily as code, so parameters are described in a markdown file, and manual lists for LOVs are markdown files as well.
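    A hypothetical layout along those lines (every name below is invented for illustration, not taken from any actual repository):

    ```text
    bi-source/
      data-models/
        invoice-aging/
          datasets/
            invoices.sql          # SQL behind a query-based dataset
            suppliers_lov.sql     # SQL behind a query-based list of values
          parameters.md           # parameter descriptions
          manual_lovs.md          # hand-entered LOV values
      analyses/
        invoice-aging-summary.xml # flat XML from the analysis Advanced tab
    ```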

    Promotions between environments are done as usual, with catalog files organizing objects into one promotion file, and with an archive taken of the production objects being replaced or removed from the production environment. These catalog files get stored in our story management tool for reference.

    Unlike Nathan's approach above, I opted out of using source control for catalog files, as I don't expect to version them. Since source control can't read the binary changes in a meaningful way, I didn't see the utility in cluttering it with those files.

    Nothing is automated; I have to rely on vigilance and an understanding of the folder structure to track changes. It's similar to what Nathan suggested above, but not all the same. I know my solution is far from perfect, but it's the best I have with the time and resources available to me.

    I hope this helps.

    Kind Regards