2 Replies Latest reply: Nov 1, 2012 1:27 AM by R o b

    After bulk import - query performance initially very bad but ok next day?

    R o b
      Hi,

      We have noticed twice so far that, after creating a new schema and bulk importing data, queries on spatial tables are very slow.
      (We didn't check non-spatial tables, though.) Indexes and stats are created after the data is imported.
      The data set is quite small (fewer than 250,000 features spread over multiple tables). The slow performance was still noticeable hours after the indexes and stats had been created, but when we came back to work the next day query performance was good, as originally expected. No one did anything in the meantime. The database is 11gR2.

      I vaguely remember reading that statistics might not be used immediately after they are created or updated, but I didn't expect it to take this long to kick in. Is there an explanation for this behaviour? It is not really an issue for us, but I would like to know why it happens.

      Many thanks, Rob
        • 1. Re: After bulk import - query performance initially very bad but ok next day?
          John O'Toole
          Rob,

          Note that since 10g there has been an automated DBMS_STATS statistics gathering job which runs overnight. It sounds like this job is gathering the appropriate statistics, which then result in a good execution plan. Have a look in Enterprise Manager to see the details of this job.
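
          For example, you can check whether that task is enabled and when it last ran with a couple of queries against the 11g data dictionary (just a quick sketch, run as a suitably privileged user):

          -- Is the automatic optimizer statistics collection task enabled?
          SELECT client_name, status, window_group
          FROM   dba_autotask_client
          WHERE  client_name = 'auto optimizer stats collection';

          -- When did it last run, and did it complete successfully?
          SELECT client_name, job_start_time, job_status
          FROM   dba_autotask_job_history
          WHERE  client_name = 'auto optimizer stats collection'
          ORDER BY job_start_time DESC;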

          You mentioned that you gather stats after running your bulk load - I guess the problem is that you aren't gathering the same stats as the overnight job. Can you post the command you run to gather the stats after the bulk load?

          John
          • 2. Re: After bulk import - query performance initially very bad but ok next day?
            R o b
            Hi John,

             Many thanks for your reply. The import and the statistics gathering are done through an application. I only have the following information about what is supposed to happen when we run the statistics gathering process:

            "Optimization - Uses sub-commands for database optimization (DBMS_STATS*). These are important for the Oracle optimizer to generate optimal execution plans.

             Optimize Schema - Runs the DBMS_STATS.GATHER_SCHEMA_STATS(USER, cascade=>true) command. This procedure gathers (not estimates) statistics for all objects in a schema and on the indexes.
             Optimize the schema after you import an industry model from a dump file, and run the Optimize command whenever you make extensive changes to the data, such as imports or major updates.

            Optimize Feature Classes - Runs the DBMS_STATS.GATHER_TABLE_STATS([USER], [Table], cascade=>true) command for the feature class tables. This procedure gathers table, column, and index statistics."
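
             If I read that correctly, the two commands boil down to something like the following DBMS_STATS calls (my own sketch based on the documentation above; the feature class table name is just a placeholder):

             -- "Optimize Schema": gather statistics for all objects in the
             -- current schema, including its indexes (cascade => TRUE).
             BEGIN
               DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER, cascade => TRUE);
             END;
             /

             -- "Optimize Feature Classes": gather table, column and index
             -- statistics for a single feature class table (placeholder name).
             BEGIN
               DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                             tabname => 'MY_FEATURE_CLASS',
                                             cascade => TRUE);
             END;
             /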

             The application we use allows statistics to be gathered in two different places. I now realise that we have only used one of the two so far and, if my understanding of the documentation is correct, the one we have used does not gather all the statistics. With your explanation the observed behaviour makes sense. Next time I will gather statistics using the second function to see whether it gathers all the statistics at once.
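
             To double-check next time, I will probably also look at when the tables and indexes were last analysed, with something along these lines (standard dictionary views, just a quick check):

             -- When were table/column statistics last gathered?
             SELECT table_name, last_analyzed
             FROM   user_tab_statistics
             ORDER BY table_name;

             -- When were index statistics last gathered?
             SELECT index_name, last_analyzed
             FROM   user_ind_statistics
             ORDER BY index_name;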

            Many thanks again, Rob