20 Replies Latest reply: May 15, 2012 5:17 PM by rp0428
      • 15. Re: How can I reduce buffer cache usage?
        rp0428
        >
        If you insist on using a result table (you don't need one), it should be a Globally Temporary, which bypasses the buffer cache completely.
        >
        Please provide a supporting citation or documentation for the statement that a GTT 'bypasses the buffer cache completely'.

        Oracle ACE Director Tanel Poder says otherwise. See his remarks in several replies in this thread:
        Why are blocks from temporary tables placed in buffer cache?

        Posted: Oct 25, 2010 11:13 AM
        >
        The reason for putting global temporary table blocks into the buffer cache is the same as for putting regular table blocks into the buffer cache -> you want some of them to be in memory.
        . . .
        Also, GTTs allow you to roll back, roll back to a savepoint (or an implicit savepoint when some error happens during a DML call), and this requires protection by undo. Putting rows / data blocks into the PGA would have complicated the implementation even further... right now GTTs are almost regular tables which just happen to reside in temporary files.
        >
        Posted: Oct 26, 2010 12:18 PM
        >
        If the GTT data would be cached in PGA, then if 100 users create 1GB temp tables, you'd end up using 100GB memory in addition to the buffer cache etc - you'll end up with heavy swapping etc.

        But as GTT data is cached in buffer cache like regular tables, then if 100 users create 1GB temp tables, still the same amount of buffer cache will be used (it doesn't magically grow to 100GB in addition to existing size) and you'll just need to flush & reread blocks as needed from temp files like with regular tables.

        So, with buffer cache - tempfile approach you can process (temp) tables way bigger than the amount of memory available, just like Oracle and other disk based RDBMSes have always done.
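
        To make the behaviour Tanel describes concrete, here is a hypothetical sketch (table and column names are illustrative, not from the thread). A GTT's blocks live in the session's temporary tablespace, but DML on it still goes through the buffer cache and still generates undo, which is why ROLLBACK works:

        ```sql
        -- Illustrative GTT: blocks reside in temp files, but reads and
        -- writes still pass through the buffer cache like a regular table.
        CREATE GLOBAL TEMPORARY TABLE result_gtt (
          id      NUMBER,
          payload VARCHAR2(100)
        ) ON COMMIT PRESERVE ROWS;   -- rows survive until the session ends

        INSERT INTO result_gtt
          SELECT object_id, object_name FROM user_objects;

        -- Valid: GTT DML is protected by undo, exactly as Tanel notes.
        ROLLBACK;
        ```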
        • 16. Re: How can I reduce buffer cache usage?
          rp0428
          >
          My understanding is, all the inserts (about 2 million) I am doing to the result table will consume Oracle's buffer cache. Am I correct?
          Also, the ref cursor which will read data from the result table ( select * from result_table ) will read all those 2 million records into the SGA/buffer cache. Correct me if I am wrong here.
          After the report is written to a file, we simply don't need this data. Next day we truncate the result table and start again.

          All the buffer cache usage by this process will obviously kick out a lot of data from the buffer cache which might be useful. Our system might end up reading that data back from db files (physical reads).

          How can I reduce the buffer cache consumption by this report generation process? That will in turn save us a lot of I/O.
          >
          You are trying to solve a problem that doesn't exist.

          If it ain't broke, don't fix it. You haven't provided any information that suggests a problem exists.
          >
          My understanding is, all the inserts (about 2 million) I am doing to the result table, will consume Oracle's buffer cache. Am I correct?
          >
          Yes and no. The data will almost certainly pass through the buffer cache, but so what? Will all 2 million rows be in the buffer cache at the same time? Maybe, maybe not. Oracle will decide that.
          >
          Also, the ref cursor which will read data from the result table ( select * from result_table ), will read all those 2 million records in SGA/Buffer cache. Correct me if I am wrong here.
          >
          As said above, the 2 million records will pass through the buffer cache, but how many are in the cache at any one time only Oracle knows. When you process the ref cursor some records may still be in the cache and some may not, or maybe they are all in the cache. Why do you care how Oracle uses the cache unless it is causing you a problem somewhere? If it is, post some data that shows the problem.
          >
          After the report is written in a file, we simply don't need this data. Next day we truncate the result table and start again.
          >
          Good. You could also just truncate it right after you generate the report. And if you always put about 2 million records back in the same table then you could use
          TRUNCATE TABLE myTable REUSE STORAGE;
          That way you aren't releasing and then reallocating the same amount of storage day after day. For this type of usage you could also create the table with an initial extent large enough to hold 2.5 million records; then it will only need the one large initial extent rather than possibly having multiple small extents.
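
          A minimal sketch of those two suggestions together (the table name and extent sizes are illustrative only; size the initial extent for your actual row width and volume):

          ```sql
          -- Size the initial extent to hold the full daily load so the
          -- segment keeps one large extent instead of growing and
          -- shrinking every day.
          CREATE TABLE result_table (
            id      NUMBER,
            payload VARCHAR2(100)
          ) STORAGE (INITIAL 512M NEXT 64M);

          -- Deallocate nothing on truncate; the extents stay allocated
          -- and are reused by the next day's load.
          TRUNCATE TABLE result_table REUSE STORAGE;
          ```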

          As for making the table a Global Temporary Table? A GTT will also use the buffer cache so that is not a reason.

          The two main reasons to use a GTT instead of a regular table are 1) multiple sessions need exactly the same table and you want/need to keep the data separate, or 2) you want Oracle to automatically delete the data when you are done using it.
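
          Those two behaviours can be sketched as follows (illustrative names only):

          ```sql
          -- 1) Session isolation: every session sees only its own rows.
          -- 2) Automatic cleanup: ON COMMIT DELETE ROWS empties the
          --    table at commit, with no explicit TRUNCATE needed.
          CREATE GLOBAL TEMPORARY TABLE report_stage (
            id      NUMBER,
            payload VARCHAR2(100)
          ) ON COMMIT DELETE ROWS;

          INSERT INTO report_stage VALUES (1, 'visible to this session only');
          COMMIT;   -- the commit deletes the rows automatically
          ```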
          >
          All the buffer cache usage by this process will obviously kick out a lot of data from the buffer cache which might be useful.
          >
          Do you have any evidence of this? I would suggest avoiding the use of words like 'obviously' since, with Oracle, very little is obvious. And anything that might be obvious in one version may not be obvious anymore in the next.

          See the 'Buffers and Full Table Scans' section in Chapter 14, Memory Architecture of the Concepts doc
          http://docs.oracle.com/cd/E11882_01/server.112/e16508/memory.htm

          That section discusses EXACTLY the issue you seem to be concerned about: cleaning out the buffer cache.
          >
          A problem is posed by a full table scan, which sequentially reads all rows under the table high water mark (see "Segment Space and the High Water Mark"). Suppose that the total size of the blocks in a table segment is greater than the size of the buffer cache. A full scan of this table could clean out the buffer cache, preventing the database from maintaining a cache of frequently accessed blocks.

          Blocks read into the database cache as the result of a full scan of a large table are treated differently from other types of reads. The blocks are immediately available for reuse to prevent the scan from effectively cleaning out the buffer cache.
          >
          Repeating what I first said:

          You are trying to solve a problem that doesn't exist.

          If it ain't broke, don't fix it. You haven't provided any information that suggests a problem exists.
          • 17. Re: How can I reduce buffer cache usage?
            Mark Malakanov (user11181920)
            How can I reduce the buffer cache consumption by this report generation process?
            You may consider configuring a RECYCLE buffer pool and assigning your result table to it.
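
            A sketch of that suggestion (the pool size and table name are illustrative; the parameter change may require a restart depending on your memory management settings):

            ```sql
            -- Carve out a RECYCLE pool so the result table's blocks age
            -- out quickly without evicting hot blocks from the DEFAULT pool.
            ALTER SYSTEM SET db_recycle_cache_size = 256M SCOPE = SPFILE;

            -- Direct the result table's blocks into that pool.
            ALTER TABLE result_table STORAGE (BUFFER_POOL RECYCLE);
            ```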
            • 18. Re: How can I reduce buffer cache usage?
              rahulras
              rp0428, thanks for a detailed reply.

              The reason I am asking this question is that this process (rather, SQLs from this process) is at the top of my AWR report in "SQLs ordered by reads" and "SQLs ordered by IO wait". I know the process might be reading data from disk that was not in the buffer cache at all.
              As you said, there may not be any problem at all. But this process runs when our system is most busy (and it has to run at that time). A huge amount of processing happens during a window of 3 hours.

              All I want to do is reduce all overheads, or basically avoid everything that can be avoided. Hence this question.
              • 19. Re: How can I reduce buffer cache usage?
                Mark Malakanov (user11181920)
                Huge amount of processing happens during a window of 3 hours.
                Do this or other processes run beyond the window because of poor(?) performance?

                If not, do not waste effort fine-tuning.
                • 20. Re: How can I reduce buffer cache usage?
                  rp0428
                  >
                  this process (rather, SQLs from this process) is at the top of my AWR report
                  >
                  So what? In and of itself that is totally meaningless.

                  Obviously, if there is a list, SOMETHING has to be at the top of it. That doesn't mean that there is a problem. And even if there is a problem, it doesn't mean that the thing at the top of that particular list is the cause of it.

                  You first need to

                  1. Determine if there is, in fact, a problem.
                  2. Determine what the possible causes of the problem are.
                  3. Identify methods to mitigate the problem

                  Based on the information you have posted I'm not convinced that you have performed step 1.

                  And, as I said in my response, if you need the data for 2 million rows, the data is going to use the buffer cache, so there isn't a lot you can do about it.

                  So if the thrust of the question is 'how can I query 2 million rows and not use the buffer cache?', the answer is:

                  fuggetaboutit.