Any approach that involves disabling a constraint is not a solution; you need a different one. Constraints are part of the schema, and they are there for a reason.
Your "step 4" should be step 1: find what piece of data violates the constraint, correct the data, then load it.
Data integrity is crucial for application sanity. "Data scrubbing" is a tedious process, but turning off constraints to work around bad data is not a good solution: while the constraint is disabled, other updates can violate it and make your "step 6" a real chore when it's time to re-enable the constraint.
This is the problem: in the step where you said "find what piece of data violates a constraint - correct the data, then load it" - there is no data that violates the constraint; the data is correct!
But not all the statements of the transaction are executed within the transaction; that's why the constraint is violated even though the transaction itself is fine.
Disabling the constraint and re-enabling it again is only being done in the meantime, to move the transaction from the EQ to the DB until we find the reason -- we know this is not the solution.
First of all, which Olite version are you using? I remember we had a problem (not similar to yours) where, once it happened, the error queue got some entries and the MGP would not process any new in-queue items until the EQ was freed. This was fixed with a later patch, and now the MGP continues to process items even when such an EQ entry appears. Is this the reason the deleted items (the ones marked D) are not deleted?
Concerning the constraint, I have the impression that the constraints are not downloaded to the client. Is it the case that on the client you first insert the second row (the child) and then the parent, so that when you sync, Olite first inserts the child (you get a constraint violation since the parent row is missing) and then inserts the parent row?
Moreover, since it is a transaction, it might happen that the parent row is inserted, then the child row is inserted, and there you get an FK violation because the parent row is missing (it has been inserted, but since we are inside a transaction it is not really committed yet; it is in an intermediate state). Let me give you an example: if I start a transaction, insert a row a1 and don't commit, and then open a new connection to the database and search for row a1, I will not find it because it is not visible yet; I have to commit the first transaction for row a1 to be visible from the second connection. Am I right on that? If this is the case, then perhaps your constraint is firing falsely?
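The visibility behaviour described above can be reproduced outside Olite. A minimal sketch using Python's `sqlite3` (illustrative only, not Oracle): one connection inserts a row without committing, and a second connection cannot see it until the first one commits.

```python
import os
import sqlite3
import tempfile

# Illustrative demo (sqlite3, not Oracle Lite): an uncommitted insert made by
# one connection is invisible to a second connection until the first commits.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn1 = sqlite3.connect(path)
conn1.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn1.commit()

conn1.execute("INSERT INTO t VALUES (1)")  # opens a transaction, no commit yet

conn2 = sqlite3.connect(path)
before = conn2.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]  # row not visible

conn1.commit()                             # now the row becomes durable and visible
after = conn2.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]   # row visible

print(before, after)  # 0 1
```

The same isolation principle applies in Oracle: a second session only sees the row after the first session's COMMIT.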
Edited by: vasileios on 12 Sep 2011 11:56 PM
Hi vasileios, thank you for your post.
The Olite version we are using is 10.3.0.2.0.
The error has already happened many times in both cases, with or without entries in the EQ, so it has nothing to do with your situation.
Concerning the constraint, we do not have any constraints on the client, only tables (publication items), sequences, indexes and synonyms, but no foreign keys. Constraint integrity is controlled by the GUI application, which is supposed to validate the data exactly like the DB does - and it does, because the transaction is correct and the data integrity is perfect.
In the central DB, all our constraints are deferred. This means the order of insertion does not matter to us - it is exactly the same for us whether the child row or the parent row is inserted first, because integrity is validated at the end of the transaction when committing.
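The deferred behaviour described here can be sketched with `sqlite3`, which also supports `DEFERRABLE INITIALLY DEFERRED` foreign keys (illustrative only, not the Oracle schema): the child can be inserted before its parent without error, as long as the parent exists by commit time.

```python
import sqlite3

# Sketch (sqlite3, not Oracle): a DEFERRABLE INITIALLY DEFERRED foreign key
# is only checked at COMMIT, so inserting the child before the parent
# succeeds as long as the parent exists by the end of the transaction.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
                  id  INTEGER PRIMARY KEY,
                  pid INTEGER REFERENCES parent(id)
                      DEFERRABLE INITIALLY DEFERRED)""")

conn.execute("INSERT INTO child VALUES (1, 100)")  # parent 100 missing: no error yet
conn.execute("INSERT INTO parent VALUES (100)")    # parent arrives later in the txn
conn.commit()                                      # constraint satisfied at commit

pid = conn.execute("SELECT pid FROM child").fetchall()[0][0]
print(pid)  # 100
```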
I suspect it has something to do with the Version$$ column that can be seen in the EQ tables: usually I see the field set to the value 1, but at times I see completely different values, or sometimes NULL. The wrong value(s) appear only in the record(s) that are not executed. What exactly is this column for?
What are the priority settings for your publication items? I do not see this mentioned in the post, but the priority order needs to be correct for parent/child relationship tables, and be aware that they work in reverse for updates and deletes.
If you are getting uploads containing deletes of both parent and child items, the child PI priority must be higher than the parent's.
Hi Gary, thank you for your post.
The priorities are correct. The child table's weight is 99 and the parent table's weight is 9, both on a "Client Wins" rule.
Anyway, the constraint between both tables is established as "deferrable initially deferred", so I guess the tables' weights do not matter, as the commit is not done until the full transaction has been applied. Am I wrong?
Rollback and commit can be a bit odd. Updates to the base application tables get rolled back, but data is copied from them into the error queues. It is even more complex if you have any callbacks, queue-based processing or other customisation, as these will behave however you coded them.
I presume you are using fast refresh. For a failed upload transaction, what is in the mobileadmin C$EQ table? There should be a row for each uploaded record; the one triggering the failure will have a message text. Parent and child publication items should have the same tranid$$, and the seqno$$ for the child records should be lower than the parent ones. The seqno$$ is based on a calculation taking into account the priority, the dmltype and other things.
The CFM$ tables for the child and parent should have the same seqno$$ values and a DMLTYPE$$ of D if you are really deleting.
Do you have any table triggers on the base tables in your application schema? We have triggers that change physical deletes to logical ones on some tables. Do you have any customisation code? We use queue-based PIs almost exclusively and these ignore a dmltype$$ of D for most tables; other callbacks can do the same thing.
Have you set server logging on? This should show the actual sequence of transactions.
All your points listed are correct in the system.
* Number of rows in CFM$ tables = number of records modified (1 for each statement)
* The message text appears in a record of a completely different table (the error message is not set on the actual row that caused the error... is this important?)
* Parent and child publication items have got the same tranid$$
* The seqno$$ for the child record is lower than the parent record
BUT, I have noticed that the CFM$ tables are completely empty for all the publication items involved in the transaction: I don't know if this is correct.
Regarding the triggers question, we do have triggers on the tables in the application schemas, but only for data consistency - we do not convert physical deletes to logical ones in our system.
In the two tables of this problem, there is no trigger being fired on a before/after delete event.
Data initially gets uploaded into the CFM$ tables with 'header' records in c$in_messages, but if an error happens then the 'header' info gets copied to c$eq in the mobileadmin schema and the data records to the CEQ$ tables in the application schema (to allow access to change the data to correct problems without granting update access to the mobileadmin schema). When you run the resubmit, the data is copied back to the CFM$ tables for the next MGP cycle to pick up.
You mention the message text appears against a record relating to a different table - I assume by this you mean the message_text in c$eq. The record that has the message is the actual record the MGP process was attempting to process when it hit the error. For any transaction there is only one record with a message_text, as processing aborts on the first error and does not go any further. A bit of a pain, as you sometimes end up having to fix something, resubmit and see another failure.
Is there any reason the processing of the data related to the record with the error message could fail? You can get the actual data record by using the tranid$$ and seqno$$ from the CEQ$ table for that publication item. As your setup sounds correct in terms of managing the parent/child relationships, you may just be getting an odd message from a different problem. The record with the message_text is definitely the insert/update/delete that is failing.
NOTE: if the uploaded dmltype$$ is I or U, then this can get overridden by the apply process. Type I will be changed to an update if the PK already exists in the application table, and similarly type U will change to an insert if the PK does not exist; this can sometimes cause problems.
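The override rule above can be sketched as a toy function (hypothetical names, not Olite's actual internals): the effective DML type depends on whether the PK already exists in the base table.

```python
# Hypothetical sketch of the apply-side rule described above: an uploaded
# I (insert) becomes an update if the PK already exists in the base table,
# and a U (update) becomes an insert if the PK does not exist. D is applied
# as uploaded. The function name is illustrative, not an Olite API.
def effective_dml(uploaded_dmltype: str, pk_exists: bool) -> str:
    if uploaded_dmltype == "I" and pk_exists:
        return "U"           # insert of an existing row is retried as an update
    if uploaded_dmltype == "U" and not pk_exists:
        return "I"           # update of a missing row is retried as an insert
    return uploaded_dmltype  # D (and the matching I/U cases) pass through

print(effective_dml("I", True))   # U
print(effective_dml("U", False))  # I
print(effective_dml("D", True))   # D
```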
Thanks for your post. I will definitely check whether the record with the error message has any error or not, but the error message has got absolutely nothing to do with the processed record. (Maybe some other record that cannot be seen?)
I currently do not have any error stored in the Error Queue, so I cannot check the exact record where the error appears right now. As soon as it appears, I will check it.
The uploaded dmltype$$ for the erroneous record is D (both records, in the parent and the child table), and whenever the error has appeared it has always been with this kind of DML. Does that tell you anything?
Today another error appeared with this issue (again).
As you told me, I checked the record where the error message appears and yes, it has got nothing to do with the actual error message. To prove this, what I did was execute the Error Queue transaction several times to check whether the error message was always stored in the same record of the transaction. These are the results:
1) The EQ entry appeared: error message stored in TableChild2, in a record with dmltype$$ D. This table is also a child of the parent table; the record where the error message appeared is related to the record to be deleted from the parent table.
2) I executed the transaction again without modifying any data: the error message was the same, but it was stored in a completely different table and record!
This time it was stored in the parent table, in another record with dmltype$$ I.
3) I executed the transaction a third time without modifying any data: the error message was again the same, but this time it was stored again in TableChild2 (just like step 1), but in a new record with dmltype$$ I.
I assume from all this that the record where the error is stored has got nothing to do with the error itself - probably because our constraints are of type "deferred".
There is another point: the parent table's weight is 9, but the child table's weight is 99. In the publication there are 181 items, and 175 of them have weights of at most 12. But there are 6 items (one of them the child table of the error) with weights of 99 or 100. Why could there be such a huge difference?
Priorities can be set to any number you want; they are not intended for absolute sequencing of the publication item processing, but rather as relative priorities between the objects. We have about 85 PIs in our main app, but only priorities 1-5.
The main thing is to follow the parent/child relationships and make sure child tables have higher priorities than their parents; any complex relationships with dependent parents need a bit of thought in this area. Be careful of referential integrity maintained via triggers rather than FK constraints, as these need to be treated as parent/child relationships for the priorities, but are not as obvious.
The priority (plus other factors like the dmltype$$) determines the seqno$$ value within the tranid$$. The usage differs between inserts and updates/deletes: for dmltype$$=I, seqno$$ values ascend with priority (a table with priority 2 will always have a seqno$$ greater than a priority 1 table).
For dmltype$$ U and D, however, the opposite is the case: higher priority items will have lower seqno$$ values than their parents, BUT the seqno$$ values will be higher than the inserts. The assumption is that you change the child records before you deal with the parents, for example deleting the child records before you delete the parent to prevent constraint problems.
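The ordering rule just described can be modelled as a toy sketch (illustrative only, not the real seqno$$ formula): inserts first in ascending priority (parents before children), then updates/deletes in descending priority (children before parents).

```python
# Hedged toy model of the apply ordering described above (not Olite's actual
# seqno$$ calculation): inserts are applied in ascending priority order,
# then updates/deletes in descending priority order.
def apply_order(ops):
    """ops: list of (table, dmltype, priority); returns table names in apply order."""
    inserts = sorted((o for o in ops if o[1] == "I"), key=lambda o: o[2])
    changes = sorted((o for o in ops if o[1] in ("U", "D")), key=lambda o: -o[2])
    return [o[0] for o in inserts + changes]

# Parent priority 9, child priority 99, as in the thread above.
ops = [("child", "D", 99), ("parent", "D", 9),
       ("parent", "I", 9), ("child", "I", 99)]
print(apply_order(ops))  # ['parent', 'child', 'child', 'parent']
```

In this model the parent is inserted before the child, but deleted after it, which matches the "reverse for updates and deletes" behaviour described above.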
The main issue with this is that if you change key values on parent records, they are applied after the child records and therefore cause constraint issues. As long as you reparent child records by inserting a new parent and then repointing the children, it is fine, but it is a big issue if you change PK values, as the parents will not exist when the child updates are processed.
Normally the client will force both a dmltype$$=I and a dmltype$$=D when you do a PK change, but if this is done on the server side you may have a problem.
You mention you have deferrable constraints on your tables - not a good idea for mobile, as its processing is more akin to online transactions, not batch.
Within the C$EQ table, the error message will appear against the record that was current in processing terms at the time the error happened, and only the first error is shown, as processing stops at that point. If you have deferred constraints, then no error is raised when a record is inserted/updated; the error only happens when the commit is done, and the record being processed at that moment will be different. As the order differs for different dmltype$$ values as above, this explains why the error moved around in your case.
If you can, try processing an error transaction while the constraints are enabled and not deferred, as that should make the failure happen at the correct point.
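The difference in where the failure surfaces can be sketched with `sqlite3` (illustrative only, not Oracle): with an immediate constraint the offending statement itself fails, pinpointing the bad record; with a deferred constraint the error only appears at COMMIT, after processing has moved on.

```python
import sqlite3

# Sketch (sqlite3, not Oracle): with a deferred FK the violation surfaces
# only at COMMIT; with an immediate FK the offending INSERT itself fails.
def violate(deferred: bool) -> str:
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    clause = "DEFERRABLE INITIALLY DEFERRED" if deferred else ""
    conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
    conn.execute(f"CREATE TABLE child (pid INTEGER REFERENCES parent(id) {clause})")
    try:
        conn.execute("INSERT INTO child VALUES (42)")  # parent 42 never inserted
    except sqlite3.IntegrityError:
        return "failed at INSERT"  # immediate constraint: error on the statement
    try:
        conn.commit()
    except sqlite3.IntegrityError:
        return "failed at COMMIT"  # deferred constraint: error only at commit
    return "no error"

print(violate(deferred=True))   # failed at COMMIT
print(violate(deferred=False))  # failed at INSERT
```

This is why, with deferred constraints, the C$EQ message can land on whatever record happened to be current when the commit ran, rather than on the record that actually broke integrity.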