2 Replies Latest reply on Sep 17, 2013 2:50 PM by JGSANA72

    Hardware requirements


      Hi, for some time now I have been looking for a way to determine the optimal hardware for running Oracle MapViewer properly. I have millions of records to render on a map as heat maps, and so far the results have not been good: the response times are poor.
      My question is whether there is any document where I can find the hardware requirements for displaying millions of records without dying in the attempt.

      Best regards, and thank you very much in advance.

        • 1. Re: Hardware requirements

          Were you using a HeatMap style defined in Map Builder and generating the heat map on the server side (in other words, not using the new HTML5 API's client-side heat map functions)? If that's the case, then you should be able to handle a couple of million points.

          Assuming you are using the default heat map algorithm (but experiment with all three algorithms on a smaller sample of your data set to see which performs better), you should set the JVM heap memory to at least 2 GB, for example -Xmx2500m -Xms2500m, which ensures the JVM starts with 2.5 GB of memory. If you are getting out-of-memory errors, you will need to increase the heap a bit more. Also monitor the MapViewer logs to see whether the bottleneck is actually in loading the data points from the database into MapViewer. Finally, make sure the JVM starts with -Djava.awt.headless=true if your MapViewer server is running on a Linux/Unix box. With these settings you don't really need a top-end box to implement the heat map.

          Another option is to sample your data set (using the database SQL sampling function) down to, for instance, 10% of the original table. In some cases this produces just as accurate a heat map as the raw data set. When you generate a heat map from millions of points, a lot of the nuances in the data set are lost anyway.
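          The JVM settings and the sampling idea above can be sketched roughly as below. The variable name, credentials, and table names are illustrative assumptions (the original post does not show how the server is launched or what the point table is called), so adjust them to your install:

          ```shell
          # Hypothetical startup options for the MapViewer app server
          # (JAVA_OPTIONS is an assumption; use whatever mechanism your
          # server's startup script reads JVM flags from).
          # -Xms == -Xmx gives the JVM its full 2.5 GB heap at startup;
          # headless mode avoids needing a display on Linux/Unix.
          export JAVA_OPTIONS="-Xms2500m -Xmx2500m -Djava.awt.headless=true"

          # Optional: materialize a ~10% sample of the point table with
          # Oracle's SAMPLE clause, so MapViewer loads far fewer rows per
          # heat map request. POINTS and POINTS_SAMPLE are placeholder names.
          sqlplus user/password <<'SQL'
          CREATE TABLE points_sample AS
            SELECT * FROM points SAMPLE (10);
          SQL
          ```

          Note that SAMPLE (10) does per-row sampling, so the result is roughly (not exactly) 10% of the rows and differs between runs; for a large table, SAMPLE BLOCK (10) is faster at the cost of a less uniform sample.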




          • 2. Re: Hardware requirements

            Thanks for your answer, lqian. In the end I will have to buy new hardware. What specifications should I look for in terms of processor, memory, and disk space, considering the high volume of records?