You can declare the target column as varchar2(4000) and put an ODI constraint on it:
LENGTH(col1) <= 180
But setting this up for every column would take ages :/.
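Since typing those conditions by hand is the tedious part, a small script can generate the condition text for you. A minimal sketch, assuming you already have the column names and declared lengths (in practice you would query ALL_TAB_COLUMNS or the ODI repository; the column list below is made up):

```python
# Sketch: generate one ODI-style length-check condition per column,
# so you don't have to type "LENGTH(col) <= n" dozens of times.
# The column dictionary is hypothetical; pull the real one from
# ALL_TAB_COLUMNS or the ODI work repository.

columns = {"CUSTOMER_NAME": 180, "CITY": 60, "COUNTRY_CODE": 2}

def length_conditions(cols):
    """Return one 'LENGTH(col) <= n' condition string per column."""
    return [f"LENGTH({name}) <= {size}" for name, size in sorted(cols.items())]

for cond in length_conditions(columns):
    print(cond)
```

You would then paste (or script-import) each generated string as an ODI condition on the matching datastore.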
I've got an idea, but this has to be done for each column of each SOURCE datastore...
You could create "ODI conditions" based on your datawarehouse columns.
Each condition says: "length( name_of_field ) <= XXX"
I assume you can set the XXX with the substitution method "getColumn()" and pattern "LONGC".
These conditions will apply only to STATIC control.
In your ODI package, before launching your interface, you can drop the source datastore into the package and check these conditions with a static control. Every record with a column whose real source length does not match the length defined in ODI will be rejected into the E$ table. And, if you want, you can delete them from the source (optional).
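To make the mechanism concrete, here is a minimal sketch of what that static control effectively does: rows whose values exceed the declared lengths are diverted to the E$ (error) table instead of flowing on. Column names, declared lengths and sample rows are all illustrative, not your actual model:

```python
# Sketch: simulate a static control on declared column lengths.
# Rows that violate a length condition go to the "rejected" list,
# which plays the role of the E$ table.

declared_lengths = {"NAME": 10, "CODE": 3}  # lengths as defined in ODI

rows = [
    {"NAME": "Smith", "CODE": "FR"},
    {"NAME": "A name that is far too long", "CODE": "FR"},
]

def static_check(rows, lengths):
    """Split rows into (valid, rejected) the way a static control would."""
    valid, rejected = [], []
    for row in rows:
        if all(len(str(row[col])) <= n for col, n in lengths.items()):
            valid.append(row)
        else:
            rejected.append(row)  # would land in the E$ table
    return valid, rejected

valid, e_dollar = static_check(rows, declared_lengths)
```

The optional "delete from the source" step would then simply remove the `e_dollar` rows from the source table.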
You can also imagine an extra custom LKM step that scans all your datastore columns (with the getColList() method), compares their lengths to the target datastore columns, and rejects the bad data. It will be harder to implement (100% custom code), but faster to propagate everywhere.
I think this solution cannot work with FLOW control in an interface,
because flow control occurs on the I$ table,
and the I$ table is built with the target table structure.
So, if there is a source column whose length is bigger than the target column, the "Insert flow" step will fail before the flow control occurs.
Therefore, I think we can only do it in a static control, in a package, or in a custom LKM step.
Thank you all for your replies.
As you can all imagine, going through every column is not practical.
We are looking for a more robust method that is also general enough. Even if taking care of VARCHAR2 fields is obvious, the same does not apply to numeric fields.
That's why I suggested switching all varchar2(xx) columns to varchar2(4000).
That way you don't get any error. But I'm not saying it's the best way to go... Actually I'm not sure there is a good way to do that.
If your columns have the same names in source and target, you can build a procedure that checks in the data dictionary whether they have the same length.
You'll have to build a table to map the name of source and target tables together.
Or, another approach using the dictionary would be :
- Keep a copy of the columns as defined in the dictionary
- Before launching your flow, check that the actual columns in the dictionary are still the same as in your copy.
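A minimal sketch of that snapshot comparison, assuming both column-length maps come from the data dictionary (e.g. ALL_TAB_COLUMNS); here they are hard-coded for illustration:

```python
# Sketch: compare the column lengths currently in the dictionary
# against a saved snapshot, and report any drift before the flow runs.
# In real life both dicts would be loaded from ALL_TAB_COLUMNS.

snapshot = {"NAME": 30, "CODE": 3}          # copy kept from a known-good state
current  = {"NAME": 50, "CODE": 3}          # NAME was widened since the snapshot

def dictionary_drift(snapshot, current):
    """Return {column: (snapshot_length, current_length)} for mismatches."""
    return {col: (snapshot.get(col), length)
            for col, length in current.items()
            if snapshot.get(col) != length}

drift = dictionary_drift(snapshot, current)
# a non-empty result means the structure changed and the flow should stop
```

If `drift` is non-empty you abort the package (or raise an alert) instead of letting the load fail mid-flow.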
As I said in my first post, I suggest you modify your LKM by creating a special step that scans all source columns used in the interface and compares their lengths to the target columns.
For that, use :
getColList() method with pattern "COL_NAME" / "LONGC" and selector "MAP"
-> that will retrieve all your source column names and their lengths
getTargetColList() method with pattern "COL_NAME" / "LONGC" and selector "MAP"
-> that will retrieve all your target column names and their lengths
Use the Jython technology in order to write a real procedural program, with cursors, loops and so on...
For each row that does not fit, move it into the E$ table of the source, or into a custom error table, or flag the source row (if you have a special field for that) and then filter on it.
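The comparison at the heart of that custom step can be sketched in plain Python (ODI would run it as Jython). The two lists below stand in for what getColList() and getTargetColList() with pattern "COL_NAME"/"LONGC" would produce; everything else is assumption:

```python
# Sketch of the custom LKM step's core logic: find source columns
# whose declared length exceeds the matching target column's length.
# The lists mimic the output of getColList()/getTargetColList().

source_cols = [("NAME", 50), ("CODE", 3)]   # (column name, length) from source
target_cols = [("NAME", 30), ("CODE", 3)]   # (column name, length) from target

def oversized_columns(source, target):
    """Return (column, source_len, target_len) where source is too wide."""
    target_len = dict(target)
    return [(col, n, target_len[col])
            for col, n in source
            if col in target_len and n > target_len[col]]

bad = oversized_columns(source_cols, target_cols)
# for each column in 'bad', the step would reject or flag the offending rows
```

The step would then generate the reject/flag SQL only for the columns reported in `bad`, instead of checking every column blindly.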
It won't be easy.
That's probably the cleanest solution :).
I read your first post too quickly.
Hi,
I would consider using Oracle DML error logging (that is, if your work schema is Oracle) on the C$ work table. You will need to add a "Create Error Log" step to your LKM, but that is the only change I see. I did not use this on C$, but I did it on the I$ table with IKM SQL Control Append. It was easy to implement and the performance was not bad.
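For illustration, here is a sketch of how such a step could append Oracle's LOG ERRORS clause to the insert statement the KM generates. The table names are placeholders, and the error table follows the ERR$_ naming that DBMS_ERRLOG.CREATE_ERROR_LOG uses by default:

```python
# Sketch: wrap a generated INSERT statement with Oracle DML error
# logging, so rows that are too long go to the ERR$_ table instead
# of failing the whole statement. Table names are hypothetical.

def with_error_logging(insert_sql, table, tag, reject_limit="UNLIMITED"):
    """Append a LOG ERRORS clause to an INSERT statement."""
    return (f"{insert_sql} "
            f"LOG ERRORS INTO ERR$_{table} ('{tag}') "
            f"REJECT LIMIT {reject_limit}")

sql = with_error_logging(
    "INSERT INTO I$_CUSTOMER SELECT * FROM C$_0CUSTOMER",
    "I$_CUSTOMER", "load_customer")
```

The "Create Error Log" step itself would just call DBMS_ERRLOG.CREATE_ERROR_LOG on the work table before the insert runs.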
Thank you all for your insights.
It was really helpful to get all these suggestions from experts who know their job.