
Concurrent Data Storage Example

535773 Member Posts: 9
edited Sep 25, 2006 9:34PM in Berkeley DB Java Edition
Is there a handy example of how to use Concurrent Data Storage?

I am writing a sample program in which one process spawns a thread that writes to a HashMap every 10 seconds, while on the other side multiple processes, each with multiple threads, constantly read the HashMap every 10 seconds.

I was successful in writing sample programs for writing and reading those HashMaps. But when reading the HashMap, I don't get updated data; I have to restart the reader process to read the most current data.

On Writer, I am setting these flags:

envConfig.setTransactional(false);
envConfig.setLocking(false);
envConfig.setAllowCreate(true);

dbConfig.setTransactional(false);
dbConfig.setAllowCreate(true);

Also, I am calling env.sync() after each map.put(index, value) on the HashMap.

On Reader, I am setting these flags:

envConfig.setTransactional(false);
envConfig.setLocking(false);
envConfig.setReadOnly(true);
dbConfig.setTransactional(false);
dbConfig.setReadOnly(true);

And I am calling map.get(index) to retrieve the value.
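For context, here is a stripped-down sketch of how the writer side fits together with the flags above (the database name "sampleDb", the tuple bindings, and the Integer/String key/value types are not from my real program, just placeholders for illustration):

```java
import java.io.File;

import com.sleepycat.bind.tuple.TupleBinding;
import com.sleepycat.collections.StoredMap;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class WriterSketch {
    public static void main(String[] args) throws Exception {
        // Writer side: non-transactional, no locking, create if absent.
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setTransactional(false);
        envConfig.setLocking(false);
        envConfig.setAllowCreate(true);
        Environment env = new Environment(new File("./data"), envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setTransactional(false);
        dbConfig.setAllowCreate(true);
        Database db = env.openDatabase(null, "sampleDb", dbConfig);

        // StoredMap view over the database; primitive tuple bindings
        // are one possible choice for the key/value types.
        StoredMap map = new StoredMap(db,
            TupleBinding.getPrimitiveBinding(Integer.class),
            TupleBinding.getPrimitiveBinding(String.class),
            true /* writeAllowed */);

        map.put(Integer.valueOf(1), "value-1");
        env.sync();  // flush to disk after each put, as described above

        db.close();
        env.close();
    }
}
```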

Do I have to set any other flags? Or am I totally off course?

Thanks

Comments

  • Greybird-Oracle
    Greybird-Oracle Member Posts: 2,690
    Hello,
    I am writing a sample program in which one process spawns a thread that writes to a HashMap every 10 seconds, while on the other side multiple processes, each with multiple threads, constantly read the HashMap every 10 seconds.
    When you say HashMap I assume you mean StoredMap (in com.sleepycat.collections), is that correct? If you are using a HashMap directly, you'll need to store the HashMap in the database.
    I was successful in writing sample programs for writing and reading those HashMaps. But when reading the HashMap, I don't get updated data; I have to restart the reader process to read the most current data.
    Berkeley DB JE may only be used in limited ways when multiple processes access the database directly. The reader processes see a snapshot of the data as of the time that the Environment is opened. In order for the reader process to see additional data, it must close the Environment and open it again. It is not necessary to restart the process, but it is necessary to close and re-open the Environment.

    This is described in the documentation here:

    http://www.sleepycat.com/jedocs/GettingStartedGuide/introduction.html#features

    "Moreover, JE allows multiple processes to access the same databases. However, in this configuration JE requires that there be no more than one process allowed to write to the database. Read-only processes are guaranteed a consistent, although potentially out of date, view of the stored data as of the time that the environment is opened."

    If it is practical for you to close and re-open the Environment in order to see changed data, then using a reader process that opens the Environment will work for you. If this is not practical, the solution is for you to send the read requests to the writer process, perhaps using a web service interface that you implement in your application. In other words, your writer process would perform all reads and writes to the Environment, and other processes would interact with the environment only by sending messages to the writer process.
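    A rough sketch of that reader-side pattern might look like the following (simplified, no error handling; the database name "sampleDb", the bindings, and the 10-second poll interval are assumptions taken from the example in this thread, not a definitive implementation):

```java
import java.io.File;

import com.sleepycat.bind.tuple.TupleBinding;
import com.sleepycat.collections.StoredMap;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class ReaderSketch {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setTransactional(false);
        envConfig.setLocking(false);
        envConfig.setReadOnly(true);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setTransactional(false);
        dbConfig.setReadOnly(true);

        while (true) {
            // Each open sees a snapshot as of this moment.
            Environment env = new Environment(new File("./data"), envConfig);
            Database db = env.openDatabase(null, "sampleDb", dbConfig);
            StoredMap map = new StoredMap(db,
                TupleBinding.getPrimitiveBinding(Integer.class),
                TupleBinding.getPrimitiveBinding(String.class),
                false /* read-only */);

            System.out.println("read: " + map.get(Integer.valueOf(1)));

            // Close and re-open on the next iteration to see newer data.
            db.close();
            env.close();
            Thread.sleep(10000);  // poll every 10 seconds, per the example
        }
    }
}
```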

    Mark
  • 535773
    535773 Member Posts: 9
    Hi Mark,
    When you say HashMap I assume you mean StoredMap (in com.sleepycat.collections), is that correct? If you are using a HashMap directly, you'll need to store the HashMap in the database.
    I am sorry that I did not make it clearer. Yes, it is correct that I am using StoredMap to store and retrieve values.

    If it is practical for you to close and re-open the Environment in order to see changed data, then using a reader process that opens the Environment will work for you. If this is not practical, the solution is for you to send the read requests to the writer process, perhaps using a web service interface that you implement in your application. In other words, your writer process would perform all reads and writes to the Environment, and other processes would interact with the environment only by sending messages to the writer process.
    Yes, it is practical for our application to re-open the Environment to see the changes.

    Thank you, Mark, for pointing me in the right direction.

    Thanks again
  • Greybird-Oracle
    Greybird-Oracle Member Posts: 2,690
    Yes, it is practical for our application to re-open the Environment to see the changes.

    Thank you, Mark, for pointing me in the right direction.
    You're welcome, and I'm glad that re-opening the Environment will work for you.

    Mark
  • 535773
    535773 Member Posts: 9
    Hi Mark,

    Unfortunately my joy was short-lived. :(

    Since there are multiple processes that read the data, the solution worked for only one reader process. But when I started my second process, it threw the following exception:

    com.sleepycat.je.DatabaseException: (JE 3.0.12) Environment.setAllowCreate is false so environment creation is not permitted, but there is no pre-existing environment in ./data

    Please note that the writer process is continuously writing the data without closing the database or environment. It does call env.sync() after every call of StoredMap.put(index, value).

    Any suggestions?

    Thanks
  • Greybird-Oracle
    Greybird-Oracle Member Posts: 2,690
    Unfortunately my joy was short-lived. :(
    :-(
    Since there are multiple processes that read the data, the solution worked for only one reader process. But when I started my second process, it threw the following exception:

    com.sleepycat.je.DatabaseException: (JE 3.0.12) Environment.setAllowCreate is false so environment creation is not permitted, but there is no pre-existing environment in ./data
    This error should not occur when opening multiple read-only processes.

    Are you using NFS or another network file system? JE does file locking to enforce reader and writer process restrictions, and file locking does not always work properly on NFS.

    In any case I will try to reproduce this problem and post back here with the results.
    Please note that the writer process is continuously writing the data without closing the database or environment. It does call env.sync() after every call of StoredMap.put(index, value).
    That approach in the writer process should be fine.

    Mark
  • 535773
    535773 Member Posts: 9
    Mark,

    Yes, we are using NFS for the reader process. Let me first consult my network admin to make sure that I am not screwing up at our end.

    I will update you shortly.

    Thanks for looking into it.
  • Greybird-Oracle
    Greybird-Oracle Member Posts: 2,690
    Yes, we are using NFS for the reader process.
    Please see this FAQ on NFS support in JE:

    http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#1

    Mark
  • 535773
    535773 Member Posts: 9
    Mark,

    Yes, I scanned through the FAQ before implementing the sample programs.

    According to FAQ:
    If you cannot rely on flock() across NFS on your systems, you could handle (1) by taking responsibility in your application to ensure that there is a single writer process attached. Having two writer processes in a single environment could result in database corruption. (Note that the issue is with processes, and not threads.)
    We do take responsibility for maintaining a single writer in the environment.

    And, about the exception: one of the NFS servers was not configured properly, so it was a fault at our end.

    So the sample writer and reader programs worked fine. I was able to read and validate the data from multiple processes across NFS.

    And once again, thanks for responding so quickly.
  • Greybird-Oracle
    Greybird-Oracle Member Posts: 2,690
    We do take responsibility for maintaining a single writer in the environment.
    OK.
    And, about the exception: one of the NFS servers was not configured properly, so it was a fault at our end.

    So the sample writer and reader programs worked fine. I was able to read and validate the data from multiple processes across NFS.
    That's great news!

    Mark
This discussion has been closed.