You should have all your objects in the "Undefined" context unless the same connection is used in all environments.
ODI always "respects" the lowest context declared, so what you need to do is change everything to "Undefined" and, at the moment of launching, choose the context you wish.
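A tiny sketch (plain Python, not any real ODI API; all names are invented) of the rule being described: an object left on "Undefined" takes whatever context is chosen at launch time, while a context pinned on the object always wins:

```python
def effective_context(object_context, launch_context):
    """If the object is left on 'Undefined', the context chosen at
    launch time wins; otherwise the pinned context is used."""
    return launch_context if object_context == "Undefined" else object_context

# Object on "Undefined": the launch-time choice decides.
print(effective_context("Undefined", "PROD"))  # PROD
# Object pinned to DEV: the launch-time choice is ignored.
print(effective_context("DEV", "PROD"))        # DEV
```

This is why the advice is to leave everything on "Undefined" unless a connection really is identical across environments.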
Just to make sure: I'm talking about the optimization context on the definition tab, not the context that the tables are associated with (setting in the diagram tab --> this is set to execution context).
For the optimization context I have to select one of my contexts; it cannot be undefined. Well, it can be set to undefined when I create a new interface (this is an option in the drop-down list), but after it has been set to a specific context, undefined is no longer available in the optimization context drop-down.
Am I seeing this correctly? What options do I have to set it back to undefined?
Well, in this case you need to change it manually.
ODI's architecture is built to have just one Development Work Repository (DWR) and as many Execution Work Repositories (EWRs) as you wish.
When you use two or more DWRs you can have really serious problems after making changes in both DWRs and then trying to import one into the other.
This happens because the internal object IDs are controlled by each DWR independently.
Because of that, I don't think you're on the best path for your environment.
For your question, the answer is yes: you need to update them one by one in the kind of environment you're using.
Does that make sense?
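An illustrative sketch (not ODI internals; the classes and names are invented) of why two Development Work Repositories clash: each one hands out its own sequential internal object IDs, so objects created independently in the two repos can end up with the same ID, and an import by ID can then hit the wrong object:

```python
class WorkRepo:
    """Toy model of a work repository that assigns internal object IDs."""
    def __init__(self, repo_name):
        self.repo_name = repo_name
        self.next_id = 1
        self.objects = {}          # internal ID -> object name

    def create(self, name):
        oid = self.next_id
        self.next_id += 1
        self.objects[oid] = name
        return oid

dwr_a = WorkRepo("A")
dwr_b = WorkRepo("B")

id_a = dwr_a.create("INT_LOAD_CUSTOMERS")   # gets internal ID 1 in repo A
id_b = dwr_b.create("INT_LOAD_ORDERS")      # also gets internal ID 1 in repo B

# Importing repo B's object into repo A keyed on the internal ID would
# collide with a completely different object -- the "serious problems"
# described above.
print(id_a == id_b)  # True: same internal ID, different objects
```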
Makes perfect sense, thanks for confirming, that's exactly what I wanted to know.
And yes, there are serious problems with importing new objects. So you suggest setting up every repository other than the actual DEV repository as an Execution Repository? Based on the issues I've experienced, I agree. The only thing I'm wondering about is that this differs from the information in the Best Practices for DWH guide (p. 25):
"When the Development team finishes developing certain projects, it
exports them into versions in the unique Master Repository. _A Test team imports
these released versions for testing them in a separate Work Repository_, thus
allowing the development team to continue working on the next versions. When
the Test team successfully validates the developments, the Production team _then
imports the executable versions_ of the developed projects (called scenarios) into the
final production work repository."
I know you're very experienced and I absolutely agree with what you're saying; I'm just wondering if I'm interpreting this passage wrong. It sounds like the Test repository should/can be a Work Repository as well, instead of an execution repository, and it doesn't warn about the potential issues during import. The reason a work repository for Test would be helpful is that the underlying tables etc. in Test/PROD may require slight modifications from the Dev environment.
I believe that when they said "_A Test team imports these released versions for testing them in a separate Work Repository_," they forgot to say "Execution WR"...
Take a look at this post and tell me what you think: http://odiexperts.com/?p=574
From experience, I would argue against making changes anywhere other than the initial Development environment. I mean, if the test team finds a problem, it should report it to the Development team, which will correct it and generate a new version to test. In my opinion, the test team should only report, never alter.
What do you think?
My two cents --
1. The Optimization Context is nothing but a context used when designing the interfaces (the code itself), since you may require a DB connection to perform things like checking syntax, viewing data, etc.
2. When the "development" phase is complete, it's obvious that you have no need for the optimization context anymore, as you are going to generate the executable out of this working code and test it in any environment by running it against the "execution" context (considering the ideal world).
3. So when you migrate out of the "development" work repository, you are exporting only executables (scenarios) to the other environments.
4. If the testing phase fails, the changes/resolution will be made in the dev work rep, and scenarios will be generated and migrated again. This makes sure that you are following the whole cycle and not taking shortcuts by making fixes in any environment other than the development work rep.
Usually the non-development environments are chosen to be "execution-only" work repositories, and the above process falls into place accordingly.
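The cycle in points 1-4 can be sketched in a few lines (plain Python, not ODI; the function and version numbering are invented for illustration), under the assumption that only scenarios leave the dev repo and every fix happens there:

```python
def promote(scenario_version, test_passes):
    """Dev generates a scenario; test either accepts it for production
    or reports back, in which case dev fixes and regenerates."""
    while not test_passes(scenario_version):
        # The fix happens ONLY in the development work repository,
        # which then generates the next scenario version.
        scenario_version += 1
    return scenario_version        # this version goes to production

# e.g. scenario versions 1 and 2 fail testing, version 3 passes:
final = promote(1, lambda v: v >= 3)
print(final)  # 3
```

The point of the loop is that test and production never modify anything; they only run scenarios and report.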
I hope this helps a little...
I think this is a noob question.
But consider this scenario, where we have three different entities:
OLTP schema, DW Schema and STAGING schema
Development: In dev, due to lack of budget, we have only one box with all three schemas in it. So this translates to one physical server with three physical schemas.
Test: Same scenario as above
Production: We have two servers, one for OLTP and one for Staging and DW. Here we have two physical servers, so the loading setup is different from the Development infrastructure; for example, here we will have to use an LKM to load from the OLTP schema to the Staging schema.
But in DEV we can eliminate that loading requirement.
Does ODI expect the architecture (how the staging, source, and target servers are physically spread around) to be the same in DEV, TEST, and PROD?
Mind you, here we can share one master repository and still map logical schema + context to different physical schemas.
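A hedged sketch of the topology idea in the question (plain Python, not the ODI topology API; the server and schema names are invented): the logical schemas stay constant across contexts, while each (logical schema, context) pair resolves to a different physical schema, which is exactly why PROD can need an LKM where DEV does not:

```python
TOPOLOGY = {
    # (logical schema, context) -> "server.schema"  (names invented)
    ("OLTP",    "DEV"):  "devbox.oltp",
    ("STAGING", "DEV"):  "devbox.staging",
    ("DW",      "DEV"):  "devbox.dw",
    ("OLTP",    "PROD"): "oltp_srv.oltp",      # OLTP on its own server in PROD
    ("STAGING", "PROD"): "dw_srv.staging",
    ("DW",      "PROD"): "dw_srv.dw",
}

def physical_schema(logical, context):
    """Resolve a logical schema to a physical one for a given context."""
    return TOPOLOGY[(logical, context)]

def needs_lkm(source_logical, target_logical, context):
    """Cross-server loads need an LKM; same-server loads may not."""
    src_server = physical_schema(source_logical, context).split(".")[0]
    tgt_server = physical_schema(target_logical, context).split(".")[0]
    return src_server != tgt_server

print(needs_lkm("OLTP", "STAGING", "DEV"))   # False: one box in DEV
print(needs_lkm("OLTP", "STAGING", "PROD"))  # True: separate OLTP server
```

So the design (interfaces against logical schemas) can stay identical; only the per-context physical mappings, and therefore the KMs actually exercised, differ.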