
JMQ cluster and unstable connections

Hello all.

I have a few architectural questions about building an OpenMQ message-passing infrastructure between multiple offices which do not always have on-line internet connections. We also need to distribute the MQ mesh configuration info.

From the scale of my questions it seems that I or our developers don't fully understand MQ, because I suspect that many of the problems and/or solution ideas below should already be addressed within the MQ middleware itself, and not by us from outside it.

The potential client currently has a (relatively ugly) working solution which they would like to revise and simplify if possible, but the matter is not urgent and answers are welcome at any time :)

I'd welcome any insights, ideas and pointers as to why our described approach may be plain wrong :)

--

To sum this post up, here's my short questionnaire:

1) What is a good/best way to distribute MQ mesh config when not all nodes are available simultaneously?

2) What are the limitations on number of brokers and queues in one logical mesh?

3) Should we aim for separate "internal" and "external" MQ networks, or can they be combined into one large net?

4) Should we aim for a partial solution external to OpenMQ (such as integration with SMTP for messaging, or SVN for config distribution), or can this be solved within OpenMQ functionality itself?

5) Can a clustered broker be forced to fully start without a connection to the master broker?

6) Are broker clusters inherently a local-network concept, or is there some standard solution (pattern) for geographically dispersed MQ clusters?

7) How can we force messages to be pushed from one broker to another? Are there any priority assignments available for certain brokers and "their" queues?

Detailed ramblings follow below...

--

We are thinking about implementing JMQ in a geographically dispersed project, where it will be used for asynchronous communications connecting application servers in branch offices with a central office. The problematic part is that the central office, and especially the branch offices, are not expected to be always on-line, hence the MQ: whenever a connection is available, queued messages (requests, responses, etc.) are to be pushed to the other side's MQ broker. If all goes well with the project, there may eventually be hundreds of such branch offices, more than one central office for failover, and a mesh of MQ interconnection agreements.

The basic idea is simple: an end-user of the app server in a branch generates a request, the request is passed via a message queue to another branch or to the central office, another app server processes it to generate a response, and the answer is queued back to the requesting app server. Some time after the initial request, the end-user would see on his web page that the request's status has been updated with a response value. A branch office's app server and MQ broker may be an appliance server distributed as a relatively unmaintained "black box".
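
For illustration, here is roughly the JMS pattern we have in mind on the branch side. The broker address, queue names and message contents below are placeholders for this post, not our actual configuration:

    // Branch-side producer sketch: sends a request and names the queue
    // where the answer should be delivered (standard JMS request/reply style).
    import javax.jms.*;
    import com.sun.messaging.ConnectionFactory;
    import com.sun.messaging.ConnectionConfiguration;

    public class BranchRequestSender {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory cf = new ConnectionFactory();
            // hypothetical address of the branch's local broker
            cf.setProperty(ConnectionConfiguration.imqAddressList, "mq://branch42-broker:7676");

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // assumed naming convention: one request and one response queue per branch
            Queue requestQueue  = session.createQueue("req.branch42");
            Queue responseQueue = session.createQueue("resp.branch42");

            MessageProducer producer = session.createProducer(requestQueue);
            TextMessage request = session.createTextMessage("please process order #1234");
            request.setJMSReplyTo(responseQueue);       // tell the processor where to answer
            request.setJMSCorrelationID("order-1234");  // so the web app can match the reply
            producer.send(request, DeliveryMode.PERSISTENT,
                          Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);

            conn.close();
        }
    }

The JMSReplyTo/correlation-ID pair is just the textbook request/reply convention; the web application would listen (or poll) on the response queue to update the request's status for the end-user.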

During the POC we configured several JMQ broker instances in this manner and it worked. From what I gather from our developers, each branch office's request and response queues are separate destinations in the system; requests (from a certain branch) may be consumed by any node, and responses (to a certain branch) may be submitted by any node. This may be restricted by passwords and/or certificate-based SSL tunnel channels, for example (suggestions welcome, though).
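
The matching processor on the receiving side would be something along these lines (again, the address, queue name and credentials are made up, and the real thing would loop or use a MessageListener rather than handle a single message):

    // Central-office consumer sketch: takes one branch request and replies
    // on whatever response queue the request named in JMSReplyTo.
    import javax.jms.*;
    import com.sun.messaging.ConnectionFactory;
    import com.sun.messaging.ConnectionConfiguration;

    public class CentralRequestProcessor {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory cf = new ConnectionFactory();
            cf.setProperty(ConnectionConfiguration.imqAddressList, "mq://central-broker:7676");

            // access to destinations could be restricted by MQ authentication
            Connection conn = cf.createConnection("centraluser", "secret");
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("req.branch42"));
            conn.start();

            Message request = consumer.receive();   // blocks until a request arrives
            TextMessage reply = session.createTextMessage("status: done");
            reply.setJMSCorrelationID(request.getJMSCorrelationID());

            MessageProducer producer = session.createProducer(request.getJMSReplyTo());
            producer.send(reply);
            conn.close();
        }
    }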

However, we also wanted to simplify spreading the configuration of the MQ nodes' network by designating "master brokers" (as per the JMQ docs) which keep track of the config, so that every other broker downloads the cluster config from its master. Perhaps this was the wrong approach on our side, and there is a better way to avoid manually reconfiguring each MQ broker whenever another broker or a queue destination is added?
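
For reference, what I mean is the conventional-cluster setup with properties along these lines on every broker (host names here are placeholders, and the property names are the imq.cluster.* ones from the cluster configuration docs, so please correct me if I misremember them):

    # in each broker instance's config.properties (or as -D options to imqbrokerd)
    imq.cluster.brokerlist=central1:7676,branch42:7676,branch43:7676
    imq.cluster.masterbroker=central1:7676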

The problem here is that an "MQ cluster" seems to be a local-network-oriented concept. When the master broker is in the central office and the interconnection is down, branch brokers loop indefinitely waiting for a connection to the master and reject client connections (the published JMS port remains 0, with corresponding comments in the log files). In this state a branch office cannot function until its JMQ broker connects to the central office, updates the MQ config, and starts permitting client connections to itself.

Also, we are not certain (and it seems to be a popular question on Google, too) how to force a queued message to be pushed to the other side, i.e. to the broker "nearest" to the target app server. Can this be done within the OpenMQ config, or does it require an MQ client application to read and forward such messages somehow? For example, when a branch office's "request" queue holds a message and a connection to the central office comes online, that request data should end up on the central office's broker. A message which physically remains on the branch office broker when the interconnection goes offline again is of little use to the central app server...
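
One workaround we can imagine is a small store-and-forward relay client that drains a branch's local queue into the central broker whenever the link happens to be up, roughly like the sketch below (addresses and queue names are placeholders, and all error/reconnect handling is omitted):

    // Relay sketch: consume locally, re-send remotely, commit the remote side first.
    import javax.jms.*;
    import com.sun.messaging.ConnectionFactory;
    import com.sun.messaging.ConnectionConfiguration;

    public class BranchToCentralRelay {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory branchCf = new ConnectionFactory();
            branchCf.setProperty(ConnectionConfiguration.imqAddressList, "mq://branch42-broker:7676");
            ConnectionFactory centralCf = new ConnectionFactory();
            centralCf.setProperty(ConnectionConfiguration.imqAddressList, "mq://central-broker:7676");

            Connection branchConn  = branchCf.createConnection();
            Connection centralConn = centralCf.createConnection();  // fails while the WAN link is down
            branchConn.start();

            // transacted sessions, so a message is only removed locally after it is safely re-sent
            Session branch  = branchConn.createSession(true, Session.SESSION_TRANSACTED);
            Session central = centralConn.createSession(true, Session.SESSION_TRANSACTED);

            MessageConsumer in  = branch.createConsumer(branch.createQueue("req.branch42"));
            MessageProducer out = central.createProducer(central.createQueue("req.branch42"));

            Message msg;
            while ((msg = in.receive(5000)) != null) {  // drain whatever is queued locally
                out.send(msg);
                central.commit();                       // delivered to the central broker...
                branch.commit();                        // ...then acknowledged on the branch side
            }

            branchConn.close();
            centralConn.close();
        }
    }

The two transacted sessions would give at-least-once forwarding (the local receive is only committed after the remote send commits), but this is exactly the kind of plumbing I was hoping the brokers themselves could do for us.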

I was thinking along the lines of different-priority brokers for certain destinations, so that messages would automatically flow from farther brokers to nearer ones, like water flows from higher ground to lower ground in an aqueduct. It would then be easy to implement transparent routing between branch offices (which are on-line at non-intersecting times) via the central office (which is always up).

How many brokers and destinations can be interconnected at all (practically, or theoretically/hardcoded)?

Possibly, there are other means to do some or all of this?

--

Ideas we've discussed internally include:

* Multiple networks of MQ brokers:
Have an "internal" broker (cluster) in each branch office which talks to the app server, and a separate "external" broker which is clustered with the central office's "master broker". Some branch office application transfers messages between two brokers local to its branch. Thus the local appserver works okay, and remote queuing works whenever network is available.
Possibly, the central office should also have separate internal and external broker setups?

* Multi-tiered net of MQ brokers:
Perhaps there can be "clusters of clusters", with "external" tier-1 brokers acting directly as master brokers for the local "internal" tier-2 clusters? Otherwise this is the "multiple networks of MQ brokers" idea above, but without an extra application to relay messages between the brokers local to its branch.

* Multi-protocol implementation of MQ+SMTP(+POP3/IMAP)
Many of our questions are solvable by SMTP. That is, we can send messages to a mailbox residing on a specific server (local to each office); local app-server clients retrieve them by POP3 from the local mailbox server and then submit responses over SMTP. This is approximately how the client solves this task today.
We don't really want to reinvent the wheel, but maybe this approach can also be applied to JMQ (asynchronous traffic not over the MQ protocol but over SMTP, much like SOAP-over-SMTP vs. SOAP-over-HTTP web services)?

* HTTP/RCS-based config file:
The OpenMQ config allows for the detailed configuration file to be available in the local filesystem or on a web server (see the snippet after this list). It is also possible to fetch the config file from the central office whenever the connection is up (wget, svn/cvs/etc.) and restart the branch broker.
Why is this approach good or bad? Advocates welcome :)
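
As a sketch of that last variant: if I read the docs right, the relevant knob is imq.cluster.url, so each branch broker could point at a cluster configuration file published by the central office, something like the following in the instance config (the URL is of course a placeholder):

    imq.cluster.url=http://central-office.example.com/mq/cluster.properties

The open question is then what the broker does while that URL is unreachable at startup, which brings us back to question 5 above.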

---

Thanks for reading up to the end,
and thanks in advance for any replies,
//Jim Klimov
