Sorry for my English.
I have a publication with some read-only publication items and other read-write publication items.
Can the synchronization process cause blocking?
Sometimes I find a file of the form:
When you synchronize, the modified database pages are backed up in a filename.plg before they are actually written to the normal database filename.odb.
If an error occurs at the commit, the transaction should be rolled back on the next connect. You can set the -pagelog option (I think) to disable this, but it's not advised because you might end up with a corrupted db.
So the .plg file has nothing to do with the pub item being read-only or not. For some info you can check the help file or the Oracle support articles and threads, like:
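The page-log mechanism described above can be sketched like this (a toy Python illustration with a made-up fixed page size and record layout; it is not Oracle Lite's actual on-disk format, just the reason the .plg file exists):

```python
import os

PAGE_SIZE = 16  # toy page size for illustration; real Olite pages are larger

def commit(db_path, new_pages):
    """Apply new_pages ({page_number: PAGE_SIZE bytes}) to the .odb file,
    backing up the old pages to a .plg file first."""
    plg_path = db_path + ".plg"
    with open(db_path, "rb") as f:
        data = bytearray(f.read())
    # 1. Back up the *old* content of every page we are about to modify.
    with open(plg_path, "wb") as plg:
        for pno in sorted(new_pages):
            plg.write(pno.to_bytes(4, "big"))
            plg.write(data[pno * PAGE_SIZE:(pno + 1) * PAGE_SIZE])
        plg.flush()
        os.fsync(plg.fileno())
    # 2. Write the new pages into the database file.
    for pno, page in new_pages.items():
        data[pno * PAGE_SIZE:(pno + 1) * PAGE_SIZE] = page
    with open(db_path, "wb") as f:
        f.write(bytes(data))
    # 3. Commit succeeded: the backup is no longer needed.
    os.remove(plg_path)

def connect(db_path):
    """A leftover .plg means the last commit crashed mid-write: roll it back
    by restoring the saved pages. Returns True if a rollback happened."""
    plg_path = db_path + ".plg"
    if not os.path.exists(plg_path):
        return False  # clean shutdown, nothing to do
    with open(db_path, "rb") as f:
        data = bytearray(f.read())
    with open(plg_path, "rb") as plg:
        while True:
            head = plg.read(4)
            if not head:
                break
            pno = int.from_bytes(head, "big")
            data[pno * PAGE_SIZE:(pno + 1) * PAGE_SIZE] = plg.read(PAGE_SIZE)
    with open(db_path, "wb") as f:
        f.write(bytes(data))
    os.remove(plg_path)
    return True
```

This also shows why a .plg left on disk after a crash is normal: it is the undo log the client needs on the next connect.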
"COMTHREAD : How to manage the size of tempory filename.plg during a syncrhonization ?"
Do you get an error when you sync with read-only and non-read-only pub items?
In the POLITE.INI file we have the following parameters:
FLUSH_AFTER_WRITE = YES
OLITE_WRITE_VERIFY = YES
OLITE_READ_VERIFY = YES
OLITE_SQL_TRACES = YES
With the first parameter we write directly to disk after a COMMIT.
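For reference, this is how such a fragment would look in the file (the [ALL_DATABASES] section name is from memory and should be checked against your own polite.ini):

```ini
; polite.ini -- client-side parameters, typically under [ALL_DATABASES]
[ALL_DATABASES]
FLUSH_AFTER_WRITE=YES
OLITE_WRITE_VERIFY=YES
OLITE_READ_VERIFY=YES
OLITE_SQL_TRACES=YES
```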
We do not understand why such a file appears; we see no file error.
We think the client database is being blocked because we have a publication with both read-only and read-write pub items.
We have checked that we close all connections.
Could it be blocked because some pub items upload and download information, while other items only download information?
"We found that we close all connections."
According to the Oracle Lite specs you should only have one connection to the client database at any given time. Before you sync, your program must always disconnect every open connection.
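One way to make that rule hard to forget is a small wrapper that tracks every connection and closes them all before the sync runs. This is a hypothetical sketch (SyncGuard and the do_sync callback are placeholder names, not the Oracle Lite API); the only point is that nothing is left open when synchronization starts:

```python
class SyncGuard:
    """Tracks connections to the client db and closes them before a sync."""

    def __init__(self):
        self._connections = []

    def register(self, conn):
        """Remember a connection so it can be closed before syncing."""
        self._connections.append(conn)
        return conn

    def sync(self, do_sync):
        # Oracle Lite allows only one connection to the client database at a
        # time, so close everything we opened before starting the sync.
        for conn in self._connections:
            conn.close()
        self._connections.clear()
        do_sync()  # placeholder for whatever launches msync in your app
```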
"We do not understand why such file."
According to the help file http://docs.oracle.com/cd/E12095_01/doc.10303/e12548/cdbtools.htm#CIHJDGGG you could use:
"pagelog By default, a commit backs up modified database pages to filename.plg before actually writing the changes to filename.odb. If an application or the operating system experiences a failure during a commit, the transaction is cleanly rolled back during the next connect. If -pagelog is specified, no backup is created and the database can become corrupted if a failure occurs."
But I am not advising you to do this. I don't even know if it works in all types of clients, and if an error happens you could end up with a corrupt database. We have seen this type of file in our WinCE clients, especially when they connect through GPRS, which is a very slow connection and takes time to download any db changes. In some cases, when the sync stopped unexpectedly in the apply phase (because of battery or something), we had to manually delete the .plg files in order to force a full refresh. But I don't think that this is your case. Which client are you using: Win32, Windows Mobile, etc.? Which Olite version are you using?
Finally, concerning the pub items: if you are using any type of foreign keys (which I advise you not to), you can set the weight of each pub item to control the order in which client changes are applied during the sync.
"Weight—The publication item weight is used to control the order in processing publication items, which avoids conflicts. Changes made on the client are processed according to weight in order to prevent conflicts, such as foreign key violations. The weight determines what tables are applied to the enterprise database first. For example, the scott.emp table has a foreign key constraint to the scott.dept table. If a new department number is added to the dept table and a new record utilizing the new department number were added to the emp table, then the transaction would be placed in the error queue if the new record utilizing the new department in the emp table was applied to the repository before the new department in the dept table was applied. To prevent the violation of the foreign key constraint on the enterprise server, you set the dept snapshot to a weight of 1 and the emp snapshot to a weight of 2, which applies all updates to the dept table prior to any updates to the emp table as the lower weight is always processed first."