JMS is strongly tied to Java. It is a Java API. While I believe it can work with other message brokers that other platforms/languages can also use...

That's all that JMS does; it doesn't do anything else. It is an interface to message brokers, and message brokers all have APIs for other languages; indeed, until the advent of JMS, that is all they had. JMS isn't a message broker itself.
851827 wrote:

well, this sounds like this would be one of those requirements which might make jms not a good fit. as ejp mentioned, message brokers usually have bindings in multiple languages, so jms does not necessarily restrict you from using other languages/platforms as the worker nodes. using a REST based api certainly makes that more simple, though.
Thank you for the reply. One of the main reasons to switch to REST was that JMS is strongly tied to Java. While I believe it can work with other message brokers that other platforms/languages can also use, we didn't want to spend more time researching all those paths. REST is very simple, works very well, and is easy to implement in almost any language and platform. Our architecture is basically a front-end REST API consumed by clients, with the back-end servers acting more like worker threads. We apply a set of rules, validations, and such on the front end, then send the work to be done to the back end. We could do it all in one server tier, but we also want to allow third parties to implement the "worker" server pieces in their own domains with their own language/platform of choice. With this model, they simply provide a URL for us to send some REST calls to, and they send some REST calls back to our servers.
As for load balancing, I am not entirely sure how GlassFish or JBoss does it. Last time I did anything with scaling, it involved load balancers in front of servers that were session/cookie aware for stateful needs and could send requests to the appropriate servers in a cluster, either round robin or based on some load factor on each server. If you're saying that JBoss and/or GlassFish no longer need that, then how is it done? I read up on HornetQ, where a request sent to one IP/HornetQ server could "discover" other servers in a cluster and balance the load by forwarding requests to other HornetQ servers. I assume this is how the JEE containers are now doing it? The problem with that, to me, is that you have one server loaded with all the incoming traffic, which then has to resend it to the other servers in the cluster. With enough load, it seems the GlassFish or JBoss server becomes a load balancer instead of doing what it was designed to do: be a JEE container. I don't recall now whether load balancing is in the spec or not. I would think it would not be required to be part of a container, though, including session replication and such? Is that part of the spec now?

you are confusing many different types of scaling. different layers of the jee stack scale in different ways. you usually scale/load balance at the web layer by putting a load balancer in front of your servers. at the ejb layer, however, you don't necessarily need that. in jboss, the client-side stub for invoking remote ejbs in a cluster will actually include the addresses of all the boxes and do some sort of work distribution itself. so, no given ejb server would be receiving all the incoming load. for jms, again, there are various points of work to consider. you have the message broker itself, which is scaled/load balanced in whatever fashion it supports (don't know many details on actual message broker impls). but, for the mdbs themselves, each jee server is pretty independent.
each jee server in the cluster will start a pool of mdbs and set up a connection to the relevant queue. then, the incoming messages will be distributed to the various servers and mdbs accordingly. again, no single box will be more loaded than any other.
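A rough way to picture this competing-consumers arrangement: every server's MDB pool drains the same queue, so each message goes to whichever consumer is free, and the work spreads out on its own. A minimal plain-Java sketch of the idea (no JMS involved, just `java.util.concurrent`; the "servers" and counts are illustrative):

```java
import java.util.Map;
import java.util.concurrent.*;

public class CompetingConsumersDemo {
    public static void main(String[] args) throws Exception {
        // shared queue standing in for a JMS destination, pre-loaded with work
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 100; i++) queue.put(i);

        // two "servers", each with two consumers, all draining the same queue
        Map<String, Integer> handled = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int c = 0; c < 4; c++) {
            String server = "server-" + (c % 2);
            pool.submit(() -> {
                Integer msg;
                // each consumer takes whatever message is next; no coordinator
                while ((msg = queue.poll()) != null) {
                    handled.merge(server, 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // every message was handled exactly once, across both "servers"
        int total = handled.values().stream().mapToInt(Integer::intValue).sum();
        System.out.println(total); // prints 100
    }
}
```

The key property is the same one described above: no box pushes work to another box; each just pulls from the shared destination at its own pace.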
I still would think dedicated load balancers, whether physical hardware or virtual software running in a cloud/VM setup, would be a better solution for handling load to the different tiers?

like i said, that depends on the tier. makes sense in some situations, not others. (for one thing, load balancers tend to be http based, so they don't work so well for non-http protocols.)
851827 wrote:

i'm hazy on the details, i didn't look too closely, and it was a long time ago. i believe jboss supported different client stubs with different behavior, which was configurable via their "call stack" configuration mechanism (i think the default was round-robin, but i think they had something smarter than that). my point was, again, that this is an app server implementation detail. it can be solved in a variety of ways depending on the quality of the app server.
Very good point on the client-side load balancing to EJB servers; I recall reading about that at one time. I am not entirely sure how, if you have say 100 client-side web-tier servers and 500 EJB servers, each of the clients would know how to balance load across all the EJB servers. Do all the client sides communicate with each other to get a picture of the load on any given EJB server? For example, if client-side server 1 sends something to EJB server 1, and EJB server 1 is now fully loaded (it can't handle any more, or shouldn't without slowing things down), how do client-side servers 2, 3, and 4 know not to send anything to EJB server 1? Does client side 1 update all the other client-side servers with the info that it sent something to EJB server 1, so they go use some other EJB server? It's easy to picture with a single client-side server sending to two or three EJB servers, but when you have a tier of client-side servers sending to a tier of EJB servers, I am not sure how the whole load balancing works.
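As noted earlier, the default stub behavior is round-robin, and the clients do not coordinate: each client just rotates independently over the member list it was handed, and the aggregate load still averages out across the cluster. A hypothetical sketch of such a stub (not JBoss's actual code; names are made up):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical client-side stub: each client instance round-robins over the
// cluster member list on its own; there is no cross-client communication.
public class RoundRobinStub {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinStub(List<String> servers) {
        this.servers = servers;
    }

    // pick the next server in rotation, wrapping around the list
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinStub stub = new RoundRobinStub(List.of("ejb1", "ejb2", "ejb3"));
        System.out.println(stub.pick()); // ejb1
        System.out.println(stub.pick()); // ejb2
        System.out.println(stub.pick()); // ejb3
        System.out.println(stub.pick()); // ejb1 again
    }
}
```

With many clients each rotating independently, no single EJB server sees more than its share on average, even though no client knows what the others are doing; "smarter" policies would replace `pick()` with something that weighs per-call latency or server-reported load.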
Another reason I didn't care for the JMS route is that we have a requirement that a number of pieces of data all need to be at the end server at one time before it can do some work. I read about the ability of JMS to aggregate pieces of data before doing something, but it just didn't jibe with me. On my web-tier side, I have all the data at one time, so I want to send a single message. The only way I could do so was to use the ObjectMessage type, which I ended up stuffing with a Map of Maps, where each Map held a different piece of data that all needed to be at the back-end server at the same time. From what I've read, ObjectMessage is not a good practice to use if possible. I may have missed some ability to do this better via JMS, but since the REST API exists on the front end, I can reuse a lot of the code, and after the validation and such is done, pass almost the same chunk of XML/JSON that came in on to the back end, with some filtering applied as needed.

this doesn't sound like an argument one way or the other (i've never heard that ObjectMessage is a "bad practice"). either way, you need to package the data up for processing on the server side, right? 6 of one...
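For what it's worth, an ObjectMessage carries a `Serializable` payload, so the map-of-maps approach amounts to plain Java serialization under the hood; the usual objection is that it couples producer and consumer to the same Java classes, which a text payload (XML/JSON in a TextMessage, or a REST body) avoids. A minimal sketch of that round trip using only the JDK, with an illustrative payload (no JMS, no broker):

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class PayloadRoundTrip {
    public static void main(String[] args) throws Exception {
        // the "map of maps": each inner map is one piece of data that must
        // arrive at the worker together with the others
        Map<String, Map<String, String>> payload = new HashMap<>();
        payload.put("order", Map.of("id", "42"));
        payload.put("customer", Map.of("name", "acme"));

        // roughly what an ObjectMessage does on send: Java serialization
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new HashMap<>(payload));
        }

        // and on receive: deserialization, which requires both ends to share
        // compatible classes (the coupling a text payload avoids)
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            @SuppressWarnings("unchecked")
            Map<String, Map<String, String>> received =
                    (Map<String, Map<String, String>>) in.readObject();
            System.out.println(received.get("order").get("id")); // prints 42
        }
    }
}
```

Either way the data still gets packaged into one unit per send, which is the responder's "6 of one" point; the difference is only whether the wire format is Java-specific or language-neutral.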
Thus, with all these factors, it seemed to me that there was no great benefit to using JMS over a pure REST implementation.

sure. to me, the major wins for jms are guaranteed delivery and load balancing (also, if you are interacting with a db, transaction handling). sounds like you don't need most of that and are planning on handling the load balancing yourself, and there are other requirements which make a REST api more convenient. anyway, hope i helped fill out your understanding of jms.