This discussion is archived
8 Replies Latest reply: Apr 20, 2011 4:32 PM by 854830

Pros and Cons of using REST over JMS (and other technologies)

854830 Newbie
Hey all,

I am working on a project where we initially used JMS to send messages between servers. Our front-end servers expose a RESTful API and use Java EE 6, with EJB 3.1 entity beans connected to a MySQL database and so forth. The back-end servers are more like "agents", so to speak: we send them some work, they do it. They are deployed in GlassFish 3.1 as well, but initially I was using JMS to listen for messages. I learned that JMS onMessage() is not threaded, so in order to handle potentially hundreds of messages at once I had to implement my own threading framework, basically using an ExecutorService. I could have used MDBs, but they are a lot more heavyweight than I needed, as the code within onMessage() was not using any of the container services.
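For illustration, here is a minimal sketch of that pattern in plain JDK Java (a hypothetical String payload stands in for a javax.jms.Message): the listener hands each message off to an ExecutorService so onMessage() returns quickly and the real work runs on a pool of threads.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DispatchingListener {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    final AtomicInteger handled = new AtomicInteger();

    // Stand-in for javax.jms.MessageListener#onMessage: hand the payload
    // off to the pool so the listener thread is free for the next message.
    public void onMessage(String payload) {
        pool.submit(() -> {
            // ... real message processing would go here ...
            handled.incrementAndGet();
        });
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        DispatchingListener listener = new DispatchingListener();
        for (int i = 0; i < 100; i++) listener.onMessage("msg-" + i);
        listener.shutdown();
        System.out.println("handled=" + listener.handled.get());
    }
}
```

An MDB pool gives you roughly this behavior for free, which is part of the trade-off discussed below.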

We ran into other issues as well; for example, deploying our app in a distributed architecture in a cloud like EC2 was painful at best. The cloud services we looked at don't support multicast, so the nice "discovery" feature for clustering JMS and other applications wasn't going to work. For some odd reason there seems to be little information on building out a scalable JEE application in the cloud. Even the techs at EC2, RackSpace, and two other providers had nobody who understood how to do it.

So in light of this, plus the fact that the data we were sending via JMS was a number of different types that all had to arrive together as a group to be processed, I started looking at using REST. Java/Jersey (JAX-RS) is easy to implement and has thus far had wide industry adoption. The fact that our API already uses it on the front end meant I could reuse some of the representations on the back-end servers, though a few had to be modified since the full public API was not quite needed on the back end. Replacing JMS took about a day or so to put the "onMessage" handler into a REST form on the back-end servers. Being able to submit an object (via JAXB) from the front servers to the back servers was much nicer to work with than building up a MapMessage full of Map objects to contain the variety of data elements we needed to send as a group. Since it goes as XML, I am looking at using gzip as well, which should compress it by about 90% or so, making it use much less bandwidth and thus be faster. I don't know how JMS handles large messages; we were using the HornetQ server and client.
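On the gzip point: how well repetitive XML compresses is easy to check with the JDK's built-in GZIPOutputStream. This sketch uses a made-up payload, so the exact ratio is illustrative only; treat the ~90% figure above as an estimate that depends on the actual data.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {
    public static void main(String[] args) throws Exception {
        // A repetitive XML payload, similar in spirit to a marshalled batch.
        StringBuilder sb = new StringBuilder("<batch>");
        for (int i = 0; i < 500; i++) {
            sb.append("<item id=\"").append(i)
              .append("\"><status>PENDING</status></item>");
        }
        sb.append("</batch>");
        byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);

        // Compress the payload in memory and compare sizes.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        byte[] zipped = bos.toByteArray();
        System.out.println("raw=" + raw.length + " bytes, gzipped=" + zipped.length + " bytes");
        System.out.println("compressed to less than half: " + (zipped.length < raw.length / 2));
    }
}
```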

So I am curious what anyone thinks, especially anyone who is knowledgeable about JMS and understands REST as well. What benefits do we lose by giving up JMS? Mind you, we were using a single queue and not broadcasting messages: we wanted to make sure that one and only one end server got each message and handled it.

Thanks. I look forward to anyone's thoughts on this.
  • 1. Re: Pros and Cons of using REST over JMS (and other technologies)
    jtahlborn Expert
    well, there's a lot here, i'll try to take a few stabs.

    first of all, i'm not sure why you didn't use MDBs. you say you aren't using any of the container "services", but clustering and scalability are container "services", and they are exactly the types of things which jee containers are supposed to handle for you. a decent MDB container should be able to scale the number of threads based on load as well as spread the load across a cluster of servers, not to mention other things like persistence/reliability.

    as for the clustering problem, i don't know the details of glassfish clustering. i know that in jboss (which uses jgroups for clustering), you can run clustering over unicast by using a few initial ip addresses to bootstrap the cluster. (i doubt you will find a cloud which supports multicast).

    it sounds like you are basically using rest as a replacement for the client-to-server "jms" transport. whether or not that is a reasonable replacement for jms is pretty much up to your requirements. your solution will require you to handle persistence/reliability and any load-balancing/scaling. in a decent app server, all these things should be handled for you. if you don't need most of those things, then your solution could be quite reasonable.
  • 2. Re: Pros and Cons of using REST over JMS (and other technologies)
    854830 Newbie
    Thank you for the reply. One of the main reasons to switch to REST was that JMS is strongly tied to Java. While I believe it can work with other message brokers that other platforms/languages can also use, we didn't want to spend more time researching all those paths. REST is very simple, works very well, and is easy to implement in almost any language and platform. Our architecture is basically a front-end REST API consumed by clients, with back-end servers that act more like worker threads. We apply a set of rules, validations, and such on the front end, then send the work to be done to the back end. We could do it all in one server tier, but we also want to allow third parties to implement the "worker" server pieces in their own domains with their own language/platform of choice. With this model, they simply provide a URL to receive REST calls, and send REST calls back to our servers.

    As for load balancing, I am not entirely sure how GlassFish or JBoss does it. Last time I did anything with scaling, it involved load balancers in front of servers that were session/cookie aware for stateful needs and could send requests to appropriate servers in a cluster, either round-robin or based on some load factor on each server. If you're saying that JBoss and/or GlassFish no longer need that, then how is it done? I read up on HornetQ, where a request sent to one IP/HornetQ server could "discover" other servers in a cluster and balance the load by forwarding requests to other HornetQ servers. I assume this is how the JEE containers are now doing it? The problem with that, to me, is that you have one server loaded with all incoming traffic that then has to resend it to other servers in the cluster. With enough load, it seems the GlassFish or JBoss server becomes a load balancer rather than doing what it was designed to do: be a JEE container. I don't recall whether load balancing is in the spec or not; I would think it would not be required to be part of a container, including session replication and such? Is that part of the spec now?

    I still would think dedicated load balancers, whether physical hardware or virtual software running in a cloud/VM setup, would be a better solution for handling load to the different tiers?

    Thanks
  • 3. Re: Pros and Cons of using REST over JMS (and other technologies)
    EJP Guru
    JMS is strongly tied to Java.
    It is a Java API.
    While I believe it can work with other message brokers that other platforms/languages can also use
    That's all that JMS does. It doesn't do anything else. It is an interface to message brokers, and message brokers all have APIs to other languages; indeed, until the advent of JMS, that is all they had. JMS isn't a message broker itself.
  • 4. Re: Pros and Cons of using REST over JMS (and other technologies)
    jtahlborn Expert
    851827 wrote:
    Thank you for the reply. One of the main reasons to switch to REST was that JMS is strongly tied to Java. While I believe it can work with other message brokers that other platforms/languages can also use, we didn't want to spend more time researching all those paths. REST is very simple, works very well, and is easy to implement in almost any language and platform. Our architecture is basically a front-end REST API consumed by clients, with back-end servers that act more like worker threads. We apply a set of rules, validations, and such on the front end, then send the work to be done to the back end. We could do it all in one server tier, but we also want to allow third parties to implement the "worker" server pieces in their own domains with their own language/platform of choice. With this model, they simply provide a URL to receive REST calls, and send REST calls back to our servers.
    well, this sounds like one of those requirements which might make jms not a good fit. as ejp mentioned, message brokers usually have bindings in multiple languages, so jms does not necessarily restrict you from using other languages/platforms for the worker nodes. using a REST based api certainly makes that simpler, though.
    As for load balancing, I am not entirely sure how GlassFish or JBoss does it. Last time I did anything with scaling, it involved load balancers in front of servers that were session/cookie aware for stateful needs and could send requests to appropriate servers in a cluster, either round-robin or based on some load factor on each server. If you're saying that JBoss and/or GlassFish no longer need that, then how is it done? I read up on HornetQ, where a request sent to one IP/HornetQ server could "discover" other servers in a cluster and balance the load by forwarding requests to other HornetQ servers. I assume this is how the JEE containers are now doing it? The problem with that, to me, is that you have one server loaded with all incoming traffic that then has to resend it to other servers in the cluster. With enough load, it seems the GlassFish or JBoss server becomes a load balancer rather than doing what it was designed to do: be a JEE container. I don't recall whether load balancing is in the spec or not; I would think it would not be required to be part of a container, including session replication and such? Is that part of the spec now?
    you are confusing many different types of scaling. different layers of the jee stack scale in different ways. you usually scale/load balance at the web layer by putting a load balancer in front of your servers. at the ejb layer, however, you don't necessarily need that. in jboss, the client-side stub for invoking remote ejbs in a cluster will actually include the addresses of all the boxes and do some sort of work distribution itself, so no given ejb server would be receiving all the incoming load. for jms, again, there are various points of work to consider. the message broker itself is scaled/load balanced in whatever fashion it supports (i don't know many details of actual message broker impls). but for the mdbs themselves, each jee server is pretty independent: each server in the cluster will start a pool of mdbs and set up a connection to the relevant queue, then incoming messages will be distributed to the various servers and mdbs accordingly. again, no single box will be more loaded than any other.

    load balancing/clustering is not part of the jee "spec", but it is one of the many features that a decent jee server will handle for you. the point of jee was to specify patterns for doing work which, if followed, allow the app server to do all the "hard" parts. some of those features are required (transactions, authentication, etc), and some of those features are not (clustering, load-balancing, other robustness features).
    I still would think dedicated load balancers, whether physical hardware or virtual software running in a cloud/VM setup, would be a better solution for handling load to the different tiers?
    like i said, that depends on the tier. makes sense in some situations, not others. (for one thing, load-balancers tend to be http based, so they don't work so well for non-http protocols.)
  • 5. Re: Pros and Cons of using REST over JMS (and other technologies)
    854830 Newbie
    Hi,

    Very good point on the client-side load balancing to EJB servers. I recall reading about that at one time. I am not entirely sure how, if you have say 100 client-side web-tier servers and 500 EJB servers, each of the client sides would know how to balance the load across all the EJB servers. Do all the client sides communicate together to get a picture of the load on any given EJB server? For example, if client-side server 1 sends something to EJB server 1, and EJB server 1 is now loaded and can't handle any more (or shouldn't without slowing things down), how do client-side servers 2, 3, and 4 know not to send anything to EJB server 1? Does client side 1 update all the other client-side servers with the info that it sent something to EJB server 1, so they should go use some other EJB server? It's easy to picture with a single client-side server sending to two or three EJB servers, but when you have a tier of client-side servers sending to a tier of EJB servers, I am not sure how the whole load-balancing scheme works.

    We also decided we wanted it completely HTTP based, over SSL. Our architecture is almost completely stateless: we don't have, for example, a session requirement like a web cart or a logged-in user. The client consumer of our API provides credentials on every request. I would hope by now there is some sort of Ajax library out there that could maintain user auth credentials and pass them in on every Ajax request as needed, if I wanted to, for example, build a web piece that provided login functionality, instead of using cookies and requiring something like HttpSession state. I haven't explored that for some time, so I'm not sure offhand.
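Since credentials ride along on every request, the client just needs to attach an Authorization header each time. A minimal sketch (hypothetical credentials, plain JDK) of building and verifying an HTTP Basic auth header value:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthSketch {
    public static void main(String[] args) {
        // Hypothetical credentials for illustration only.
        String user = "apiUser", pass = "secret";

        // HTTP Basic auth: base64("user:pass"), sent on every request.
        String token = Base64.getEncoder().encodeToString(
                (user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        System.out.println("Authorization: Basic " + token);

        // Round-trip check: decoding recovers the original credentials.
        System.out.println(new String(
                Base64.getDecoder().decode(token), StandardCharsets.UTF_8));
    }
}
```

Over SSL this keeps every request self-authenticating, with no server-side session state.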

    Another reason I didn't care for the JMS route is that we have a requirement that a number of pieces of data all be at the end server at one time before it can do some work. I read about the ability of JMS to aggregate pieces of data before doing something, but it just didn't sit right with me. On my web-tier side I have all the data at one time, so I want to send a single message. The only way I could do so was to use the ObjectMessage type, which I ended up stuffing with a Map of Maps, each Map holding a different piece of data that needed to be at the back-end server at the same time. From what I've read, ObjectMessage is not a good practice to use if avoidable. I may have missed some ability to do this better via JMS, but since the REST API exists on the front end, I can reuse a lot of the code and, after the validation and such is done, pass almost the same chunk of XML/JSON that came in on to the back end, with some filtering applied as needed. Thus, with all these factors, it seemed to me that there was no great benefit to using JMS over a pure REST implementation.
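To make the Map-of-Maps point concrete, here is a sketch of sending everything the worker needs as one typed unit instead of nested Maps. The WorkUnit class is hypothetical, and plain Java serialization stands in for what an ObjectMessage does under the hood:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class PayloadSketch {
    // Hypothetical typed payload: everything the worker needs in one unit.
    static class WorkUnit implements Serializable {
        String jobId;
        Map<String, String> rules = new HashMap<>();
        Map<String, String> inputs = new HashMap<>();
    }

    public static void main(String[] args) throws Exception {
        WorkUnit unit = new WorkUnit();
        unit.jobId = "job-42";
        unit.rules.put("maxRetries", "3");
        unit.inputs.put("source", "frontend");

        // Round-trip through Java serialization, as an ObjectMessage would.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(unit);
        oos.flush();
        WorkUnit copy = (WorkUnit) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();

        System.out.println(copy.jobId + " " + copy.rules.get("maxRetries"));
    }
}
```

A JAXB-annotated class marshalled to XML for a REST call plays the same role, with the advantage that non-Java workers can consume it.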
  • 6. Re: Pros and Cons of using REST over JMS (and other technologies)
    jtahlborn Expert
    851827 wrote:
    Very good point on the client-side load balancing to EJB servers. I recall reading about that at one time. I am not entirely sure how, if you have say 100 client-side web-tier servers and 500 EJB servers, each of the client sides would know how to balance the load across all the EJB servers. Do all the client sides communicate together to get a picture of the load on any given EJB server? For example, if client-side server 1 sends something to EJB server 1, and EJB server 1 is now loaded and can't handle any more (or shouldn't without slowing things down), how do client-side servers 2, 3, and 4 know not to send anything to EJB server 1? Does client side 1 update all the other client-side servers with the info that it sent something to EJB server 1, so they should go use some other EJB server? It's easy to picture with a single client-side server sending to two or three EJB servers, but when you have a tier of client-side servers sending to a tier of EJB servers, I am not sure how the whole load-balancing scheme works.
    i'm hazy on the details; i didn't look too closely, and it was a long time ago. i believe jboss supported different client stubs with different behavior, configurable via their "call stack" configuration mechanism (i think the default was round-robin, but i think they had something smarter than that as well). my point, again, is that this is an app server implementation detail. it can be solved in a variety of ways depending on the quality of the app server.
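As a rough illustration of the round-robin behavior mentioned above (a toy sketch, not JBoss's actual client stub), a thread-safe round-robin selector over a list of server addresses can be as simple as:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinStub {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinStub(List<String> servers) {
        this.servers = servers;
    }

    // Pick the next server, wrapping around; AtomicInteger keeps it
    // safe when many client threads call pick() concurrently.
    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinStub stub = new RoundRobinStub(List.of("ejb1", "ejb2", "ejb3"));
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 5; i++) sb.append(stub.pick()).append(" ");
        System.out.println(sb.toString().trim());
    }
}
```

Each client holding such a stub spreads its own calls across the cluster, so no single EJB server has to funnel all the traffic.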
    Another reason I didn't care for the JMS route is that we have a requirement that a number of pieces of data all be at the end server at one time before it can do some work. I read about the ability of JMS to aggregate pieces of data before doing something, but it just didn't sit right with me. On my web-tier side I have all the data at one time, so I want to send a single message. The only way I could do so was to use the ObjectMessage type, which I ended up stuffing with a Map of Maps, each Map holding a different piece of data that needed to be at the back-end server at the same time. From what I've read, ObjectMessage is not a good practice to use if avoidable. I may have missed some ability to do this better via JMS, but since the REST API exists on the front end, I can reuse a lot of the code and, after the validation and such is done, pass almost the same chunk of XML/JSON that came in on to the back end, with some filtering applied as needed.
    this doesn't sound like an argument one way or the other (i've never heard that ObjectMessage is a "bad practice"). either way, you need to package the data up for processing on the server side, right? 6 of one...
    Thus, with all these factors, it seemed to me that there was no great benefit to using JMS over a pure REST implementation.
    sure. to me, the major wins for jms are guaranteed delivery and load balancing (also, if you are interacting with a db, transaction handling). it sounds like you don't need most of that, you are planning on handling the load balancing yourself, and there are other requirements which make a REST api more convenient. anyway, hope i helped fill out your understanding of jms.
  • 7. Re: Pros and Cons of using REST over JMS (and other technologies)
    EJP Guru
    It seems to me you are going backwards here, and for very poor reasons. Message brokers are seriously sophisticated pieces of software. Compared to that, anything you can implement with HTTP GET/PUT/POST/DELETE and a web server is just a toy.
  • 8. Re: Pros and Cons of using REST over JMS (and other technologies)
    854830 Newbie
    You may be right, EJP; I'm not sure at this point. Adoption of our product isn't going to be easy, so making it easy to adopt, with a technology that is simple to implement on any platform/language, seems more appropriate. Internally, however, it could be just as easy to use JMS for most things, and for an external adopter we could offer a way to send messages via REST so that they can implement an end server without needing a messaging system.

    We're still learning as we go, so it's possible we'll go back to JMS for internal use and maybe external use as well. For now this is working well, and I can bring up several end-server VMs and they all work individually. I haven't tried a cluster yet, but I am looking at putting HAProxy in between to see how that works out.
