Were you using a heat map style defined in Map Builder and generating the heat map on the server side (in other words, not using the new HTML5 API's client-side heat map functions)? If that's the case, then you should be able to handle a couple million points.

Assuming you are using the default heat map algorithm (though it's worth experimenting with all three algorithms on a smaller sample of your data set to see which performs better), set the JVM heap to at least 2 GB, e.g. -Xmx2500m -Xms2500m, which ensures the JVM starts with 2.5 GB of memory. If you are still getting out-of-memory errors, increase the heap a bit more. Also monitor the MapViewer logs to see whether the bottleneck is actually in loading the data points from the database into MapViewer. Finally, make sure the JVM starts with -Djava.awt.headless=true if your MapViewer server is running on a Linux/Unix box. With these settings you don't really need a top-end box to implement the heat map.

Another option is to sample your data set (using the database SQL's sampling function) down to, for instance, 10% of the original data table. In some cases this produces just as accurate a heat map as the raw data set; when you generate a heat map from millions of points, a lot of the nuance in the data is lost anyway.
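To make the two suggestions above concrete, here is a rough sketch of the JVM settings and the database-side sampling. The variable name JAVA_OPTIONS and the table name POINTS_TABLE are placeholders for illustration; where exactly you set the JVM options depends on your container (for WebLogic it would typically go in the domain's startup script).

```shell
# Hedged example: heap and headless settings for the JVM running MapViewer.
# -Xms/-Xmx use the lowercase "m" suffix for megabytes; "-Xmx2500MB" is not
# valid JVM syntax. Adjust the sizes to your available RAM.
JAVA_OPTIONS="-Xms2500m -Xmx2500m -Djava.awt.headless=true"
export JAVA_OPTIONS

# Database-side sampling (Oracle SQL), e.g. roughly 10% of the rows.
# POINTS_TABLE is a placeholder for your actual point table:
#
#   SELECT * FROM POINTS_TABLE SAMPLE (10);
#
# You could base the heat map theme's query (or a view) on a statement
# like this instead of the full table.
```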
Thanks for your answer lqian. In the end I will have to buy new hardware. Considering the high volume of records, what should I look for in terms of processor, memory, and disk space?