11 Replies - Latest reply: Jan 31, 2010 11:58 PM by Billy~Verreynne

    Oracle performance, slow for larger and more complex results.

    748848
      Hello Oracle forum,

At the moment I have an Oracle database running, and I'm specifically interested in the efficiency of the Spatial extension for web maps and GIS.
I've been testing the database with large shapefiles (400 MB - 1 GB), loading them into the database with shp2sdo -> SQL*Loader.

Using Benchmark Factory I've tested the speed of transactions, and it drops relatively quickly. I started with a simple query:

SELECT id FROM map WHERE id = 3

When I increase the range of ids to 3-10000, performance decreases drastically:

SELECT id FROM map WHERE id >= 3 AND id <= 10000

The explain plan below is for the second query; both queries use the index.
--------------------------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |  9828 | 49140 |    22   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN | SYS_C009650 |  9828 | 49140 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------------

      Statistics
      ----------------------------------------------------------
      0 recursive calls
      0 db block gets
      675 consistent gets
      0 physical reads
      0 redo size
      134248 bytes sent via SQL*Net to client
      7599 bytes received via SQL*Net from client
      655 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      9796 rows processed

The statistics don't show anything very weird, but maybe I'm wrong. Nothing changed in the explain plan except for the range scan instead of a unique scan.
The query returns a lot of results, and I think that is why the measured time of the query is large. The time it takes to return rows increases quickly with the number of rows. Can this be solved? The table was analyzed before running the query.

The database parameters are not really changed from the defaults; I increased the amount of memory used by Oracle 11g to 1+ GB and let the database itself decide how it uses this memory.
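(A minimal sketch of how these memory settings could be applied, assuming Automatic Memory Management in 11g; memory_max_target is a static parameter, so a restart is needed. The values are the ones listed below, rounded to megabytes.)

ALTER SYSTEM SET memory_max_target = 1064M SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 1000M SCOPE = SPFILE;
-- leaving sga_target and pga_aggregate_target at 0 lets Oracle
-- balance the SGA/PGA split within memory_target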
The system specs and db parameters are:

Oracle 11g

Memory: 1.99 GB
Processor: Intel(R) Core(TM)2 CPU 6600 @ 2.40GHz
# of CPUs: 2
OS: Microsoft Windows XP, version 5.2600

(0 = Oracle decides which value is used)


      cursor_sharing EXACT
      cursor_space_for_time FALSE
      db_block_size 8192
      db_recovery_file_dest_size 2147483648
      diagnostic_dest C:\DBBENCHMARK\ORACLE
      dispatchers (PROTOCOL=TCP) (SERVICE=gistestXDB)
      hash_area_size 131072
      log_buffer 5656576
      memory_max_target 1115684864
      memory_target 1048576000
      open_cursors 300
      parallel_max_servers 20
      pga_aggregate_target 0
      processes 150
      resumable_timeout 2162688
      sort_area_size 65536
SGA = 632 MB
PGA = 368 MB
Java pool = 16 MB
Large pool = 8 MB
Other = 8 MB


So I indexed and analyzed the data; what did I forget? I can speed it up with soft parsing, but the problem remains. Hopefully this is enough information for some analysis. Has anyone experienced the same problems? I tested the speed with SQL Developer and it shows the same speed as Benchmark Factory. What could be wrong with the parameters?

      Thanks,

      Jan Martijn

      Edited by: user12227964 on 25-jan-2010 4:53

      Edited by: user12227964 on 26-jan-2010 2:20
        • 1. Re: Oracle performance, slow for larger and more complex results.
          Billy~Verreynne
          user12227964 wrote:

Using Benchmark Factory I've tested the speed of transactions, and it drops relatively quickly. I started with a simple query:

SELECT id FROM map WHERE id = 3

When I increase the range of ids to 3-10000, performance decreases drastically:

SELECT id FROM map WHERE id >= 3 AND id <= 10000
So..? Each time you add more rows that need to be read from disk and returned to the client. Very obviously this will be slower. It's like adding an additional 10 km to a journey each time around. How can the travel time remain constant when the distance to travel keeps increasing?

          Scalability would for example be running a select like this:
          SQL> select * from map where id = :bindID ;

          And then increasing the number of rows in the map table. As an index scan should be used, the increase in the number of rows in the table should have a minimal performance impact on this query.
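A minimal sketch of such a test in SQL*Plus (using the map table from the opening post):

SQL> variable bindID number
SQL> exec :bindID := 3
SQL> select * from map where id = :bindID;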

But if you process more and more rows each time around.. those rows need to be read (using expensive I/O) from disk. Then those rows need to be transported across the communication layer (typically TCP) from the server to the client. I fail to see why you would think that performance should remain consistent...?
          • 2. Re: Oracle performance, slow for larger and more complex results.
            748872
Why not an index fast full scan?
            • 3. Re: Oracle performance, slow for larger and more complex results.
              748848
              thanks for your replies

              I understand that an increase in transaction time is expected with more rows...but the decrease is high and the decrease is even higher with Oracle spatial operations.

The performance drops from what I think is a good transaction time (around 0.25 ms) without bind variables to a much longer time (330 ms+) for 9,796 more rows. Possibly this is a normal increase for a Windows version of Oracle 11.0.6.0. I've heard there are some bugs in this version that could lead to performance problems.

              -->>Will try to invoke a fast full index scan

              Edited by: user12227964 on 25-jan-2010 5:47
              • 4. Re: Oracle performance, slow for larger and more complex results.
                748848
As expected, the fast full index scan is not used when querying 10,000 ids. When requesting all the ids, it did do a fast full scan.
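For what it's worth, one way to compare the two plans is to force the fast full scan with a hint and diff the costs (a sketch; the index name is taken from the plan posted earlier):

EXPLAIN PLAN FOR
SELECT /*+ INDEX_FFS(m SYS_C009650) */ id
FROM map m
WHERE id >= 3 AND id <= 10000;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

This behaviour is expected: a range scan touching ~22 blocks (per the cost in the plan above) is cheaper than scanning the whole index, so the optimizer only switches to a fast full scan when most or all ids are requested.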
                • 5. Re: Oracle performance, slow for larger and more complex results.
                  Taral
I think it completed with good timing, didn't it?

See the statistics after setting the arraysize to 500:

                  set arraysize 500
                  SELECT id FROM map WHERE id >=3 and id <= 10000;
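(For reference: the statistics in the opening post show 655 round trips for 9,796 rows, i.e. roughly 15 rows per fetch, which is the SQL*Plus default arraysize. At arraysize 500 the same result set needs only about 9796/500 ≈ 20 round trips.)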
                  • 6. Re: Oracle performance, slow for larger and more complex results.
                    Billy~Verreynne
                    user12227964 wrote:
                    thanks for your replies

                    I understand that an increase in transaction time is expected with more rows...but the decrease is high and the decrease is even higher with Oracle spatial operations.
Not an increase in "transaction time" (what is that?). Simply that the more data you need to lift from disk, the slower the operation will be, as there is an inherent latency in getting bytes from a magnetic platter into computer RAM.

Using spatial functions, there are also data transformations and calculations to be done. This means that for every row read, CPU overhead exists in order to crunch the data in that row to produce the required calculated/processed output.

                    These are basic computing concepts. Nothing specific or only applicable to Oracle.

So how do you address these? You use optimal I/O paths that provide the minimal I/O to get the required data. You do only the absolute minimum processing (spending CPU time) on data crunching and transformation. You only return the absolute minimum data needed (if the row is on average 128KB and you only need 8KB of it, only read and process the 8KB). Etc.

These are standard software engineering concepts. If your data transformation (e.g. calculating the surface area of a spatial object in m^2) requires 5 ms per row, and you process 50,000 rows, that is a 250 sec (4+ minutes) processing overhead.

Alternatives are to pay this overhead only once, up front, instead of every time that data is needed for the row - e.g. use a trigger to calculate it once and store the result with the row (changing a CPU/row overhead into a space/row overhead).
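A rough sketch of that pay-once approach (the geometry column name, the area_m2 column, and the tolerance are assumptions for illustration; SDO_GEOM.SDO_AREA is the standard Oracle Spatial area function):

ALTER TABLE map ADD (area_m2 NUMBER);

CREATE OR REPLACE TRIGGER map_area_trg
BEFORE INSERT OR UPDATE OF geometry ON map
FOR EACH ROW
BEGIN
  -- calculate once at write time; 0.005 is just an example tolerance
  :NEW.area_m2 := SDO_GEOM.SDO_AREA(:NEW.geometry, 0.005);
END;
/

Queries then read the stored area_m2 value instead of recalculating the area for every row fetched.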

There's very little "wrong" with Oracle performance. Yes, there is the odd bug that can affect performance. But in the vast majority of cases, poor Oracle performance is due to:
                    - poor data modeling
                    - poor physical db design for the data model
                    - inefficient SQL
                    - poor programming design for processing data
                    - incorrectly using Oracle
                    • 7. Re: Oracle performance, slow for larger and more complex results.
                      748848
By transaction time I meant the time from when the query is submitted until the last result is returned. I made a mistake saying "the decrease is high"; I meant the increase is large. But possibly it isn't that large, and my reference transaction speed is wrong.

Setting the arraysize to a higher number seems a good idea. I've tested it (arraysize = 50) and the number of round trips dropped to 197, but it did not improve the performance.. I test without a network in between, so this would lower the effect of the number of packets sent.
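(A minimal SQL*Plus session to reproduce this comparison might look like the following - autotrace requires the PLUSTRACE role, and its statistics section reports the SQL*Net round trips:)

set timing on
set autotrace traceonly statistics
set arraysize 15
SELECT id FROM map WHERE id >= 3 AND id <= 10000;
set arraysize 500
SELECT id FROM map WHERE id >= 3 AND id <= 10000;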

I still think a parameter is configured wrongly. I've also tested Oracle with ODBC drivers (with Benchmark Factory), but this does not make much difference. Furthermore, I can say that inefficient SQL and poor data modelling are not the problem at this stage. The others could be the problem, will check, argh!
                      • 8. Re: Oracle performance, slow for larger and more complex results.
                        748848
                        double post,

                        Edited by: user12227964 on 26-jan-2010 3:22
                        • 9. Re: Oracle performance, slow for larger and more complex results.
                          Billy~Verreynne
                          user12227964 wrote:

Setting the arraysize to a higher number seems a good idea. I've tested it (arraysize = 50) and the number of round trips dropped to 197, but it did not improve the performance.. I test without a network in between, so this would lower the effect of the number of packets sent.
Correct - it only affects the transport layer, as it deals with the size of the server's response to the client.

I still think a parameter is configured wrongly.
                          Why? Any evidence to back that up? Or simply because it is seen as being a lot easier to turn some knob, or throw some switch, to (magically) fix a so-called performance problem?
I've also tested Oracle with ODBC drivers (with Benchmark Factory), but this does not make much difference.
                          Unlikely to make any difference, as irrespective of the high level driver (ODBC/ADO/etc), the same low level OCI driver is used.
Furthermore, I can say that inefficient SQL and poor data modelling are not the problem at this stage.
Where's the evidence of that? I'm actually surprised to see this comment, as in my experience that is exactly the reason for poor performance 99% of the time.

Oracle is fast.. it performs well and scales well. Think of it as a high-end sports car. You can't simply jump into the driver's seat and expect the car to perform well. You need to know how to drive the car, the difference between standard automatic gearing and sports gearing (or using the paddles), what a racing line is, how to enter and exit a corner, where and when to brake, etc.

If you use Oracle correctly, there is unlikely to be a performance problem caused by Oracle itself. Instead you will run into actual hardware limitations (e.g. a PCI card can only sustain a certain throughput rate and thus limits how fast Oracle can read data).

                          I have not seen any evidence in this thread that points to any kind of Oracle specific performance problem or incorrect parameter/config setting.

On the other hand, I can provide you with real-world evidence of how Oracle performs.
                          SQL> set timing on
                          SQL> select count(*) from daily_xxxx;
                          
                            COUNT(*)
                          ----------
                          2237739276
                          
                          Elapsed: 00:00:10.28
This counted 217,678,918 rows per second. How? By using the tools that Oracle provides to design performant and scalable databases.
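(Purely as an illustrative guess - the thread does not say how this was achieved - a parallel scan over a compact segment is one way to reach rates of that order:)

SQL> select /*+ parallel(t 8) */ count(*) from daily_xxxx t;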
                          • 10. Re: Oracle performance, slow for larger and more complex results.
                            748848
Billy~Verreynne wrote:
                            Why? Any evidence to back that up? Or simply because it is seen as being a lot easier to turn some knob, or throw some switch, to (magically) fix a so-called performance problem?
I am searching for a parameter, yes, because I am trying to make my Oracle db perform better. I know that Oracle is a sports car; I want to run it as a sports car.. but I'm at a loss as to what's wrong with my database. The count, for example, works quite fast too, but the results of the previously mentioned id query come back too slow in my opinion.
                            COUNT(*)
                            ----------
                            2237739276

                            Elapsed: 00:00:10.28
This counted 217,678,918 rows per second. How? By using the tools that Oracle provides to design performant and scalable databases.
select count(id) resulted in 3,669,015 counted ids.

The database counted 18,345,075 rows per second without bind variables, which is roughly ten times slower than your result. This could be down to hardware, but my question is specifically about the number of rows returned, i.e. large result sets.

When I select 1 id:

Select id from map where id <= 1

4,000 rows per second are selected.

When I select 100 ids:

Select id from map where id <= 100

the speed is 25,425 rows per second for my Oracle database.
Another database I tested returns 6 times 25,425 rows per second for the same query (100 ids). What could be a parameter that limits the output speed of multiple rows in a query?

                            thx

                            Edited by: Sand on 27-jan-2010 4:46
                            • 11. Re: Oracle performance, slow for larger and more complex results.
                              Billy~Verreynne
                              Sand wrote:

select count(id) resulted in 3,669,015 counted ids.

The database counted 18,345,075 rows per second without bind variables, which is roughly ten times slower than your result. This could be down to hardware, but my question is specifically about the number of rows returned, i.e. large result sets.
The idea was not to compare the speed of "select count(*)" statements - but to illustrate that even when dealing with a huge number of rows, one can decrease the amount of I/O that needs to be performed to deal with that number of rows.
                              Select id from map where id <= 1
4,000 rows per second are selected.
Rows/sec is a meaningless measurement - due to physical I/O (PIO) versus logical I/O (LIO). You can select 100 rows that require PIO, resulting in an elapsed time of 1 sec. You can select 1,000 rows that require only LIO, with an elapsed time of 0.5 sec.

Is the 2nd method better or faster? No. It simply needed less time to be spent on I/O, as the data blocks were in the buffer cache (memory) and did not require very slow and expensive disk access.
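One way to see the PIO/LIO distinction for yourself, as a sketch using the query from earlier in the thread: run it twice with autotrace on. The first (cold-cache) run shows physical reads; the repeat run is served almost entirely from the buffer cache:

set autotrace traceonly statistics
-- first run on a cold cache: expect physical reads > 0 (PIO)
SELECT id FROM map WHERE id >= 3 AND id <= 10000;
-- repeat run: mostly consistent gets (LIO), blocks now cached
SELECT id FROM map WHERE id >= 3 AND id <= 10000;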
Another database I tested returns 6 times 25,425 rows per second for the same query (100 ids). What could be a parameter that limits the output speed of multiple rows in a query?
Every single row that needs to be read/processed by a SQL statement has a cost associated with it. This cost is not consistent! It differs depending on how that row can be reached - what I/O paths are available to find that row? Does the full table need to be scanned? Does an index need to be scanned? Is there a unique index that can be used? Is the table partitioned, and can partition pruning be applied and local partition indexes used? Are there user functions that need to be applied to the row's data? Etc. Etc.

                              All these together determine how fast the client gets a row from the cursor executing that SQL.

The more rows you want to process, the bigger the increase in cost - specifically more I/O, as I/O is the biggest expense (the slowest in terms of elapsed time).

                              So you want to do as little I/O as possible and read as little data as possible. For example, instead of a full table scan, a fast full index scan. For example, instead of reading the complete contents of a 10GB table, reading the complete contents of a 12MB index for that table.
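To put numbers to that comparison on the poster's own schema, one could check the segment sizes (a sketch; MAP and SYS_C009650 are the names from earlier in the thread):

SELECT segment_name, segment_type, ROUND(bytes/1024/1024) AS mb
FROM   user_segments
WHERE  segment_name IN ('MAP', 'SYS_C009650');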

I suggest that you read the Oracle Performance Guide to familiarise yourself with basic performance concepts. Use http://tahiti.oracle.com to find the guide for your applicable Oracle version.