Well, you can tune your database and you can tune your application; here is some advice on application tuning. Perhaps some APEX DBAs can help answer the db configuration question.
Before you start tuning you want to measure performance. By running a page in debug mode you can see how long each portion of a page takes to render and process. If you can reduce page generation time and page processing time, your application will run faster.
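For example, debug mode can be switched on straight from the URL: the fifth position in the f?p syntax (App:Page:Session:Request:Debug:...) is the debug flag. The application and page numbers here are just examples:

```
f?p=100:1:&SESSION.::YES
```

With Debug set to YES, the timing of each rendering and processing step is shown inline on the page.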
Most APEX applications are very fast, but if you want to make your APEX application as fast as possible then:
0. Use bind variables, e.g. select ename from emp where deptno = :P1_DEPTNO
1. Write efficient SQL and avoid using PL/SQL functions in the select list when possible; at the end of the day it's all about the SQL.
2. Don't overuse images on the page, or use overly complex templates (often HTML rendering time can exceed HTML generation time).
3. Use declarative conditions when possible. PL/SQL conditions are slower than declarative ones.
4. If you code large PL/SQL processes or regions, move this code to a PL/SQL package and call the package; this avoids excessive anonymous block parsing.
5. Use HTML buttons and not template-based buttons; less HTML = less time to generate and render.
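As a sketch of point 4 — the package, procedure, and table names below are made up for illustration:

```sql
-- Hypothetical package holding logic that would otherwise live
-- in a large anonymous block inside an APEX page process
create or replace package my_app_pkg as
  procedure process_order (p_order_id in number);
end my_app_pkg;
/
create or replace package body my_app_pkg as
  procedure process_order (p_order_id in number) is
  begin
    update orders
       set status = 'PROCESSED'
     where order_id = p_order_id;
  end process_order;
end my_app_pkg;
/
```

The page process then shrinks to a one-line call, `my_app_pkg.process_order(:P1_ORDER_ID);`, which parses far more cheaply than a large inline anonymous block, and the package code is compiled and shared across sessions.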
John Scott of APEX Evangelists has a whole presentation on increasing performance for APEX; I can't find the link right now but maybe someone else has it.
Do you feel that the performance is not fast enough? What are your performance goals ("make it faster" is not a valid answer)?
Pinning the objects is not likely going to do much for you. The most used objects will not age out of the shared pool / buffer cache since they are used often.
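That said, if you do want to experiment with pinning anyway, a sketch (the schema, package, and table names are placeholders):

```sql
-- Pin a frequently used package in the shared pool.
-- DBMS_SHARED_POOL is installed via dbmspool.sql; 'P' = procedure/package.
exec DBMS_SHARED_POOL.KEEP('MYSCHEMA.MY_PKG', 'P');

-- Tables can't be pinned with DBMS_SHARED_POOL; the closest
-- equivalent for small, hot tables is the KEEP buffer pool.
alter table myschema.lookup_codes storage (buffer_pool keep);
```

Measure before and after; as noted above, hot objects usually stay cached on their own.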
Prominent figures in the Oracle community have concluded that separating data and indexes into separate tablespaces doesn't really buy you anything, and it is generally classified as an Oracle myth.
Are you running the Embedded PL/SQL Gateway, or Oracle HTTP Server (Apache)? Apache will likely run circles around EPG. Carl's suggestion of mod_gzip is a good one for perceived performance. You should also consider installing the HTTP Server on a separate machine. Run NOTHING else on that machine except the database.
Only create one database. Don't let people create multiple database instances on the same machine. This is a HUGE waste of resources and you don't have a lot of CPU cycles to spare with a single Celeron.
the mentioned white paper John has written can be found at http://jes.blogs.shellprompt.net/2007/05/18/apex-delivering-pages-in-3-seconds-or-less/
My APEX Blog: http://www.inside-oracle-apex.com/
The APEX Builder Plugin: http://builderplugin.oracleapex.info/
The ApexLib Framework: http://apexlib.sourceforge.net/
On the DB host I am using the Embedded PL/SQL Gateway (11g), and I read it usually works faster than Apache. Then I have a reverse proxy server with Apache 2 to handle security using mod_security, and this is the server that makes the calls to the EPG on the 11g server.
I already use mod_deflate, with the common rules about avoiding compressing already-compressed files (pdf, jpg, png, etc).
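For anyone following along, a typical httpd.conf fragment for that setup might look like this (the exact MIME types and extensions to exclude are a judgment call):

```
# Compress text-based responses
AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript

# Skip files that are already compressed
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|pdf|zip|gz)$ no-gzip
```

Compressing an already-compressed file wastes CPU on both ends for little or no size gain, hence the exclusions.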
The server that has the 11g database only has it installed... besides the OS :) :) :)
Now a valid answer... My performance goal is to make it faster :):):):) Kidding... I want to be sure that with the hardware I talked about I can guarantee a 3-second-or-less page view for 40 concurrent users. What do you think? Is it possible? My pages are somewhat complex... Imagine salesforce.com made on APEX :) ... or Basecamp made on APEX... that's what I am talking about :) Free for people to try... and I hope buy too!
I am almost sure that we will be the first Oracle partner to deliver the first CRM solution that only needs an Oracle Database to run :)
Btw, I will have to thank my Oracle Education Tuning course instructors, because they taught me that separating data and indexes is good for performance!!!
But I doubt that pinning frequently used small tables is bad for performance :)
A bit of a thread HiJack, but this statement piqued my interest:
"Prominent figures in the Oracle community have concluded that separating data and indexes into separate tablespace doesn't really buy you anything and is generally classified as an Oracle myth."
I define my tablespaces with different blocksizes and put my indexes into the largest-blocksize tablespace, and of course set up different buffer cache pools for the various blocksizes. If you have some hyperlinks to papers that say it is a myth that doing so can improve performance (by managing data and index blocks in different LRU lists and optimizing index reads for large block reads), I'd like to read them.
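For reference, the kind of setup I mean looks roughly like this (sizes, paths, and object names are illustrative only):

```sql
-- A non-default blocksize tablespace requires a matching buffer cache first
alter system set db_16k_cache_size = 256M;

create tablespace idx_16k
  datafile '/u01/oradata/idx_16k01.dbf' size 1G
  blocksize 16K;

-- Indexes go into the large-blocksize tablespace
create index emp_deptno_idx on emp (deptno) tablespace idx_16k;
```

The idea being that more index entries fit per 16K block, so range scans touch fewer blocks.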
"If you have some hyperlinks to papers"
Try this link to a blog entry by Tom Kyte -
Is there a way to set Automatic Memory Management in APEX?
Within that blog post (I too very much enjoy Mr. Kyte's writings), I found the main point of interest to be:
"Thomas Kyte said.... moving table data into a 2k block - would increase concurrency as you store fewer rows per block, less people going after the same bits, but then - putting the index into a 16k block massively decreases that since you get so many more index entries per block."
I will have to test this; does that imply the massive(?) decrease would be on writes/updates? Our application is 90% reads via primary-key-based lookups (i.e. index reads).
I'm not trying to dismiss your question off-hand; however, it's probably a more appropriate question for the Database General forum -
General Database Discussions
"Is there a way to set Automatic Memory Management in APEX?"
APEX runs inside the database, so that is something you would set at the database level; it's not something related to APEX per se.
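Since you're on 11g, the database-level setting would look something like this (the 2G value is purely an example; size it for your machine):

```sql
-- 11g Automatic Memory Management is an instance parameter, not an APEX setting
alter system set memory_max_target = 2G scope = spfile;
alter system set memory_target     = 2G scope = spfile;
-- memory_max_target requires an instance restart to take effect
```

With memory_target set, the database automatically shifts memory between the SGA and PGA as the workload demands.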
Fair enough. I was thinking the same thing as it was getting off topic.
Thank you for the input John.
"and I read it usually works faster than Apache."
I'm curious... where did you read that? Can you point me to a link?
Getting back to your original question, the thing about performance is that it's usually very subjective... your site can support HUGE numbers of end users if those end users don't actually do much; conversely, your site might crawl to a halt with a handful of users if those users are going at it hammer and tongs.
With websites it's not really the number of users that matters, it's the number of web requests, i.e. whether you have 1000 users requesting 1 page each or 20 users requesting 50 pages each, the end result may well be the same (it just sounds 'worse' to say you can only support 20 users rather than 1000).
However, given the hardware you have described, I'll stick my head on the block and say that hardware should easily be capable of supporting the number of end users you have mentioned... however, there are some big caveats:
1) A badly designed application can severely limit your scalability
2) A badly configured/tuned DB can severely limit your scalability
3) A badly configured OS can severely limit your scalability
Often with scalability you tend to look at it in a 'growing' sense, i.e. what do you need to do to support more and more users.
I often look at it the other way around, with the 'raw' hardware you have, you have a blank canvas that can in theory support X number of end users (where X is an unknown number at this point). However it's what you do going forward that will decide how much that X decreases by.
Yep you read that right, everything you do from setting up the OS, to configuring the DB, through to designing the app is potentially going to restrict how many users your application can support from that original theoretical maximum of 'X'.
Ok, I'm rambling a little now, but sometimes it helps to flip your mindset I think.
In bullet point form, to help with the scalability of your application, I would certainly list these factors as being near the top of my list of things to investigate -
1) mod_gzip (as already mentioned; I also detail it in my paper that has been linked to). Compression can give a potentially massive boost to the scalability of your infrastructure.
3) Check out your usage of the V() function in your queries; if you use it a lot, be aware that it's non-deterministic, i.e. the function could be evaluated for each and every row. Contrast this with bind notation.
For more details on this, check out my blog post on it -
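The contrast in a nutshell (a sketch using the emp table from earlier in the thread):

```sql
-- V() is a function call and may be re-evaluated for every candidate row:
select ename from emp where deptno = v('P1_DEPTNO');

-- Bind notation is resolved once, before the query executes:
select ename from emp where deptno = :P1_DEPTNO;

-- A common workaround where a bind isn't available is a scalar
-- subquery, which lets the database cache the function result:
select ename from emp
 where deptno = (select v('P1_DEPTNO') from dual);
```

On a large table the per-row function calls can add up to a very measurable difference.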
4) Use Page/Region caching. A fantastic feature which allows you to tell APEX that it's ok to use a pre-cached version of a page/region rather than generating it dynamically each time.
If you have a heavily hit website, then you could almost certainly benefit from caching the home page even with a timeout as short as 10 seconds (depending, of course, on what's on your home page).
I have an example I use in my presentation on performance techniques where I simply create two pages in an application and create a slow-running report on one page (say it takes 5 or so seconds to execute) and then flick between the two tabs... the end user has to wait 5 seconds each time they flick onto the page with the report.
Once you switch on caching for the report region (say with a 10 second expiry), flicking between the pages becomes 'instant', until the cache ages out and the report needs to be regenerated dynamically.
Typically there are many places where users don't need to see the absolute latest data and it's 'ok' to show them the report as it was 10 seconds ago.
However the APEX team have made things even easier for you as there are API routines you can call to programmatically purge cached items, giving the users the option to see the latest version if they wish to.
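For example, the APEX_UTIL package exposes purge routines along these lines (the page number and region name here are made up; check the signatures against your APEX version's documentation):

```sql
-- Purge the cached copy of page 1 in the current application
apex_util.clear_page_cache(p_page_id => 1);

-- Or purge a single cached region by its name
apex_util.purge_regions_by_name(
    p_application => :APP_ID,
    p_page        => 1,
    p_region_name => 'My Report');
```

Wire one of these to a "Refresh" button and users who really do need the latest data can force a regeneration on demand.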
Hope this long rambling post helps in some way....
So... any more hints about improving APEX performance?
Caching pages is a good hint, but when the content of a page is customized for a specific user in a collaboration environment we must create dynamic caching validation functions... it's hard, but we'll do it!
I think the main bottleneck in our apps is the dynamic content rendering, which depends on user preferences and privileges. That's why I asked about increasing APEX performance!!!!
For instance, apex.oracle.com is based on a Dell PE1950, NAS storage, 16GB 667MHz (8x2GB) and 2 x dual-core Xeon CPUs... It's impressive that a single server serves so many people. Sure, the NAS storage accounts for a big part of the performance, but I guess that 16GB of memory and an SGA of about 2GB can make the difference.
Things like... database Shared Server mode or Dedicated mode? Pinning the most frequently used objects, including tables that are used in every page process? Sizing the SGA with a formula or factor that you guys usually use, etc...
I was hoping to get hints like that in this post... and I still hope to get some :)
(PS: I WAS POSTING WHILE JES WAS TOO)