Hi all,
We have installed WebLogic with Coherence, and in our weblogic.xml we have defined:
<session-descriptor>
  <persistent-store-type>coherence-web</persistent-store-type>
</session-descriptor>
We have not changed any values; all properties are using the default Coherence values.
One of our features saves a file (BLOB) to the database, and since that is a two-step process, the BLOB remains in session memory between the steps.
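Roughly, the flow looks like this (a sketch with made-up names; a plain Map stands in for HttpSession, which Coherence*Web backs with the cache):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of our two-step flow (class/attribute names are made up; a Map
// stands in for HttpSession). The whole file sits in one session
// attribute between the two requests, so Coherence*Web must store it as
// a single cache value.
public class TwoStepUpload {
    static final Map<String, Object> session = new HashMap<>();

    // Step 1: the upload request parks the raw bytes in the session.
    static void stepOneUpload(byte[] fileBytes) {
        session.put("pendingUpload", fileBytes);
    }

    // Step 2: a later request reads the bytes back and saves them as a
    // database BLOB, then clears the attribute.
    static byte[] stepTwoCommit() {
        byte[] blob = (byte[]) session.remove("pendingUpload");
        // ... JDBC insert of blob into the BLOB column would go here ...
        return blob;
    }
}
```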
For smaller objects it works fine, but once the total object size reaches 60-70 MB, Coherence starts giving errors like the ones below.
…[severity-value: 64] [rid: 0:1] [partition-id: 0] [partition-name: DOMAIN] > <BEA-310002> <68% of the total memory in the server is free.>
####<Jan 3, 2020 4:55:49,864 PM UTC> <Warning> <com.oracle.coherence> <Logger@9229206 12.2.1.3.0> <<anonymous>> <> <93d0fe72-b0bc-487f-86b7-654cc94baaf8-00000001> <1578070549864> <[severity-value: 16] [rid: 0:9] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <2020-01-03 16:55:49.864/935285.533 Oracle Coherence GE 12.2.1.3.0 <Warning> (thread=oracle.coherence.web:DistributedSessionsWorker:0x0000:1330, member=3): Partial commit due to the backing map exception com.tangosol.internal.util.HeuristicCommitException
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putAllPrimaryResource(PartitionedCache.CDB:11)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postPutAll(PartitionedCache.CDB:27)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putAll(PartitionedCache.CDB:28)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutAllRequest(PartitionedCache.CDB:99)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutAllRequest.run(PartitionedCache.CDB:1)
at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$DaemonPool$WrapperTask.run(PartitionedService.CDB:1)
at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:66)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:54)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Maximum value size is 67108863
at com.tangosol.io.journal.FlashJournalRM$JournalFile.enqueue(FlashJournalRM.java:1519)
at com.tangosol.io.journal.AbstractJournalRM$JournalImpl.write(AbstractJournalRM.java:1519)
at com.tangosol.io.journal.RamJournalRM$JournalFile.enqueue(RamJournalRM.java:1101)
at com.tangosol.io.journal.AbstractJournalRM$JournalImpl.write(AbstractJournalRM.java:1519)
at com.tangosol.io.journal.JournalBinaryStore.store(JournalBinaryStore.java:107)
at com.tangosol.net.cache.CompactSerializationCache.put(CompactSerializationCache.java:450)
at com.tangosol.net.cache.CompactSerializationCache.put(CompactSerializationCache.java:412)
at com.tangosol.util.AbstractKeyBasedMap.putAll(AbstractKeyBasedMap.java:189)
at com.tangosol.net.partition.PartitionSplittingBackingMap.putAllInternal(PartitionSplittingBackingMap.java:434)
at com.tangosol.net.partition.ObservableSplittingBackingCache$CapacityAwareMap.putAllInternal(ObservableSplittingBackingCache.java:1064)
at com.tangosol.net.partition.PartitionSplittingBackingMap.putAll(PartitionSplittingBackingMap.java:169)
at com.tangosol.util.WrapperObservableMap.putAll(WrapperObservableMap.java:185)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.putAllPrimaryResource(PartitionedCache.CDB:7)
... 10 more
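For what it's worth, the limit in that exception message is exactly 2^26 - 1 bytes, i.e. one byte under 64 MB, which is why our 60-70 MB attributes sit right at the edge:

```java
public class JournalLimit {
    public static void main(String[] args) {
        long limit = 67108863L;                           // from the exception message
        System.out.println(limit == (1L << 26) - 1);      // true: 2^26 - 1
        System.out.println((limit + 1) / (1024 * 1024));  // 64 (MB)
    }
}
```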
I did try to set the flash journal block size in tangosol-coherence.xml, but its maximum value is 64 MB and it didn't help.
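For reference, this is roughly the override I tried (shown from memory, so treat the exact element structure as approximate):

```xml
<?xml version="1.0"?>
<coherence>
  <cluster-config>
    <journaling-config>
      <flashjournal-manager>
        <!-- 64M appears to be the maximum accepted value here -->
        <block-size>64M</block-size>
      </flashjournal-manager>
    </journaling-config>
  </cluster-config>
</coherence>
```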
I was thinking about using <external-scheme> ( https://docs.oracle.com/middleware/1212/coherence/COHDG/cache_examples.htm#COHDG383 ):
<external-scheme>
  <scheme-name>SampleDiskScheme</scheme-name>
  <bdb-store-manager/>
</external-scheme>
But using that actually made it worse; the maximum value size dropped to 524288 bytes (512 KB):
(thread=DistributedCache:oracle.coherence.web:DistributedSessions, member=3): java.lang.IllegalArgumentException: Maximum value size is 524288
I added the external scheme to the distributed scheme like this:
<!-- partitioned caching scheme for servers -->
<distributed-scheme>
  <scheme-name>server</scheme-name>
  <service-name>PartitionedCache</service-name>
  <local-storage system-property="coherence.distributed.localstorage">true</local-storage>
  <backing-map-scheme>
    <!--
    <local-scheme>
      <high-units>{back-limit-bytes 0B}</high-units>
    </local-scheme>
    -->
    <external-scheme>
      <scheme-name>SampleDiskScheme</scheme-name>
      <bdb-store-manager/>
    </external-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
I am not sure if I am doing this right.
Is there any other way around it?
Is there a way to use disk (external-scheme) so that I can store session values larger than 64 MB?
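One workaround I have been considering, as my own sketch rather than anything from the Coherence docs: split the byte[] into chunks below the journal's per-value limit and store each chunk under its own session attribute, so no single cache value exceeds 64 MB. All names here are made up, and a plain Map stands in for HttpSession:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical workaround sketch (names are my own): store a large
// byte[] as several session attributes, each below the ~64 MB journal
// value limit, plus a ".count" attribute recording how many chunks.
public class SessionBlobChunker {

    public static void put(Map<String, Object> session, String key,
                           byte[] blob, int chunkSize) {
        int chunks = (blob.length + chunkSize - 1) / chunkSize;  // ceil division
        session.put(key + ".count", chunks);
        for (int i = 0; i < chunks; i++) {
            int from = i * chunkSize;
            int to = Math.min(from + chunkSize, blob.length);
            byte[] part = new byte[to - from];
            System.arraycopy(blob, from, part, 0, part.length);
            session.put(key + "." + i, part);
        }
    }

    public static byte[] get(Map<String, Object> session, String key) {
        int chunks = (Integer) session.get(key + ".count");
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        for (int i = 0; i < chunks; i++) {
            byte[] part = (byte[]) session.get(key + "." + i);
            out.write(part, 0, part.length);  // reassemble in order
        }
        return out.toByteArray();
    }
}
```

In practice chunkSize would be something like 32 * 1024 * 1024 to stay well under the limit, but I don't know if this is the right approach.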
Please help.
Thanks