From a purely database perspective, the main downside of fewer, larger databases is that when a database gets very large (and I'm talking TB scale here), things like startup time and duplicate time (e.g. ttRepAdmin -duplicate) become much longer. You can mitigate this by using fast storage (SSD/flash) and fast networks (10 GbE). TimesTen has very good vertical scalability, especially if you leverage recent features such as B+-tree indexes instead of T-trees, and replication throughput can also be parallelised. So in most cases a single database is fine, and it often simplifies the application design compared to multiple separate databases. You mention a database size of 200 GB; I would characterise that as 'normal' nowadays. Most customers today have databases in the 10s to 100s of GB range, and we have quite a few TB-scale deployments too.
You might find the newly released TimesTen Scaleout feature of the TimesTen 18.1 release of interest. This is a horizontal scale-out capability that retains most of the functionality of classic TimesTen while allowing you to scale across multiple machines in a manner that is transparent to applications, so a single database can leverage the storage and compute power of multiple nodes. Check out the TimesTen OTN portal and blog for lots of detail on this exciting new capability.
I guess the takeaway is that a mere 200 GB is not a large datastore, at least these days.
I know that configuring large pages for the datastore segment is a good idea in general, but I do wonder at what point it becomes critical.
Once your database gets into the 10s-of-GB size range, using large pages will help performance somewhat. For databases of 256 GB or more, large pages are mandatory (a Linux OS requirement).
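For anyone setting this up, here is a rough sketch of what the Linux side looks like. The page counts, the group id, and the file paths are illustrative only; check the TimesTen and kernel documentation for your specific platform and release before applying any of it:

```shell
# Sketch: reserving Linux huge pages for a TimesTen datastore segment.
# Values below are examples, not recommendations.

# 1. Reserve enough 2 MB huge pages to cover the segment, with headroom.
#    A 200 GB segment needs about 200 * 1024 / 2 = 102400 pages:
sudo sysctl -w vm.nr_hugepages=105000

# 2. Let the TimesTen instance administrator's group use the pool
#    (replace 1234 with that group's actual gid):
sudo sysctl -w vm.hugetlb_shm_group=1234

# 3. Persist both settings so they survive a reboot:
echo "vm.nr_hugepages=105000"    | sudo tee -a /etc/sysctl.conf
echo "vm.hugetlb_shm_group=1234" | sudo tee -a /etc/sysctl.conf

# 4. Tell the TimesTen daemon to use 2 MB large pages by adding this
#    line to <instance_home>/info/ttendaemon.options and restarting
#    the daemon:
#    -linuxLargePageAlignment 2
```

Note that huge pages are reserved from contiguous memory, so it is much easier to grab them at (or shortly after) boot than on a long-running, fragmented system.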