A multi-server application might require locked caching, where only one ATG instance at a time has write access to the cached data of a given item type. You can use locked caching to prevent multiple servers from trying to update the same item simultaneously—for example, Commerce order items, which can be updated by customers on an external-facing server and by customer service agents on an internal-facing server. By restricting write access, locked caching ensures a consistent view of cached data among all ATG instances.
Locked caching has the following prerequisites:
Item descriptors that specify locked caching must disable query caching by setting their query-cache-size attribute to 0.
A repository with item descriptors that use locked caching must be configured to use a ClientLockManager component by setting the repository’s lockManager property to a component of type atg.service.lockmanager.ClientLockManager; otherwise, caching is disabled for those item descriptors.
Each ATG instance whose repositories participate in locked caching must have at least one ClientLockManager that is configured to use a ServerLockManager.
A ServerLockManager component must be configured to manage the locks among participating ATG instances.
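The first prerequisite can be illustrated in a SQL repository definition file. This sketch assumes a hypothetical item descriptor named order; the table and property names are placeholders, not part of any standard configuration:

```
<!-- Hypothetical item descriptor; "order", the table, and the property
     are example names. cache-mode="locked" enables locked caching, and
     query-cache-size="0" disables query caching, as required. -->
<item-descriptor name="order" cache-mode="locked" query-cache-size="0">
  <table name="example_order" type="primary" id-column-name="order_id">
    <property name="state" data-type="string"/>
  </table>
</item-descriptor>
```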
You should also configure one or more ATG servers to start the /atg/dynamo/service/ServerLockManager on application startup. To do this, add the ServerLockManager to the initialServices property of /atg/dynamo/service/Initial in the server-specific configuration layer of the server you have chosen to run a ServerLockManager. For example, to run the ServerLockManager in an ATG server instance named derrida, you would add a properties file for /atg/dynamo/service/Initial in derrida’s server-specific configuration layer.
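Such a properties file might look like the following sketch; the exact directory for derrida’s server-specific configuration layer depends on your installation, so only the file’s contents are shown:

```
# Initial.properties for the Nucleus path /atg/dynamo/service/Initial,
# placed in derrida's server-specific configuration layer.
# "+=" appends to the inherited initialServices list rather than replacing it.
initialServices+=/atg/dynamo/service/ServerLockManager
```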
You can configure more than one ServerLockManager: one acts as the primary lock server while the other acts as backup. If the primary ServerLockManager fails, the backup takes over and clients begin sending lock requests to it. If both ServerLockManagers fail, caching is disabled. Under that condition, the site still functions, but more slowly, because it must access the database rather than the cache. Cache mode also switches to disabled for all transactions that are unable to obtain a lock. Once a ServerLockManager is restored, caching resumes.
For example, if you have two ServerLockManager components, one on host tartini and one on host corelli, each listening on port 9010, they could be configured like this:
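The following sketch shows one plausible pairing; it assumes the ServerLockManager properties for cross-server failover (otherLockServerAddress and otherLockServerPort), so verify the property names against your ATG version before relying on them:

```
# ServerLockManager.properties on host tartini
$class=atg.service.lockmanager.ServerLockManager
port=9010
# Point at the peer lock server on corelli (assumed failover properties):
otherLockServerAddress=corelli
otherLockServerPort=9010
```

The component on corelli mirrors this configuration, with otherLockServerAddress set to tartini. Whichever instance starts first becomes the primary; the other waits as backup.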
It is best to run the primary ServerLockManager in an ATG instance that does not also handle user sessions by running a DrpServer. This prevents the load on the ServerLockManager from affecting user sessions, and it also lets you stop and restart the DrpServer without restarting the ServerLockManager. If there is enough lock contention on your site that the lock server itself becomes a bottleneck, you might create separate lock servers for different repositories to distribute the load. Note, however, that the lock servers are then unable to detect deadlocks that span lock servers, and each ATG instance needs a separate ClientLockManager referring to each ServerLockManager.
For each SQL repository that contains any item descriptors with cache-mode="locked", you must set the lockManager property of the Repository component to refer to a ClientLockManager. ATG comes configured with a default client lock manager, which you can use for most purposes:
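As a sketch, the repository component shown here is a hypothetical example; /atg/dynamo/service/ClientLockManager is the conventional Nucleus path for the default client lock manager:

```
# Properties file for an example repository component
# (the repository name is hypothetical).
# Point lockManager at the default ClientLockManager:
lockManager=/atg/dynamo/service/ClientLockManager
```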
When you first install the ATG platform, the ClientLockManager component has its useLockServer property set to false, which disables use of the lock server. To use locked mode repository caching, you must set this property to true. This setting is included in the ATG platform liveconfig configuration layer, so you can set the useLockServer property by adding the liveconfig configuration layer to the environment for all your ATG servers. You must also set the lockServerPort and lockServerAddress properties to match the ports and hosts of your ServerLockManager components. For example, suppose you have two ServerLockManagers, one running on host tartini and port 9010 and the other running on host corelli and port 9010. You would configure the ClientLockManager like this:
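A sketch of that configuration, using the host and port values from the example above:

```
# ClientLockManager.properties
$class=atg.service.lockmanager.ClientLockManager
# Enable use of the lock servers (false by default on a fresh install):
useLockServer=true
# Comma-separated lists pair each address with the port at the same position:
lockServerAddress=tartini,corelli
lockServerPort=9010,9010
```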