"What can I do?"

I'd start by giving yourself a better OTN handle; user8773604 isn't a great name.
When posting questions on this forum, it helps if you provide the exact Oracle version. If possible, be fully patched up, as you could be hitting a bug that has long since been fixed.
What kind of data is in your target table? Is it simple point data, lines, or big nasty polygons? If it is points only, then ensure you are using the 'layer_gtype=POINT' index parameter.
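For a point-only layer, that parameter goes in the index DDL. A minimal sketch, assuming hypothetical table and column names (my_points, geom):

```sql
-- Hypothetical names; tells Oracle Spatial the layer contains only points,
-- which lets it optimize the R-tree for point geometries.
CREATE INDEX my_points_sidx ON my_points (geom)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX
  PARAMETERS ('layer_gtype=POINT');
```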
Why does it matter that this query takes 12 seconds? Give us some context about the use case that requires it to be faster, as that will help us help you.
Does the query take 12 seconds every time it is run, or was that a one-off?
If you are running SDO_FILTER only, then the query should hit the spatial index alone and not touch the table. Check the execution plan to confirm this is the case.
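One way to check, again with placeholder names (my_polygons, geom) and a bind for the query window: you want to see DOMAIN INDEX access in the plan, and no table access step for a count.

```sql
-- Generate and display the plan for an SDO_FILTER-only count.
EXPLAIN PLAN FOR
  SELECT COUNT(*)
  FROM my_polygons t
  WHERE SDO_FILTER(t.geom, :query_window) = 'TRUE';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```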
What SRID is your data in? If it is different from that of the query window, then Oracle will need to transform the query window. This shouldn't take long, but it may on the first run.
There are various things you could try to help with the performance:
- Clustering the data
- Partitioning the data
- Setting the 'sdo_non_leaf_tbl=TRUE' parameter when creating the spatial index and then pinning the MDNT_...$ table to the keep pool
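The last option looks roughly like this; all object names are hypothetical, and the generated MDNT table name will differ on your system:

```sql
-- Create the index with the non-leaf R-tree nodes in a separate table.
CREATE INDEX my_polygons_sidx ON my_polygons (geom)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX
  PARAMETERS ('sdo_non_leaf_tbl=TRUE');

-- Find the generated non-leaf table in your schema...
SELECT table_name FROM user_tables WHERE table_name LIKE 'MDNT%';

-- ...then assign it to the keep pool so it stays cached
-- (substitute the actual generated name).
ALTER TABLE "MDNT_ABC123$" STORAGE (BUFFER_POOL KEEP);
```

Make sure the keep pool is actually sized (db_keep_cache_size) before pinning anything to it.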
You haven't really provided very much information in your posting, so I'll wait until you respond before saying anything else.
Server version is 184.108.40.206.0
My table contains the cadastral mapping of a large territory, so it holds simple polygons.
The first execution takes about 2 minutes and subsequent ones take about 12 seconds.
It's a problem because I'm working on GIS software, and on every zoom I have to count how many geometries fall within my window. If there are too many, I don't want to draw them all, because that takes too long. But 12 seconds just to count the geometries in my window is also too long.
Actually, every zoom takes a different amount of time: something like 0s, 2s, 12s, 40s and so on (I ran the query not only from my program but also in SQL*Plus).
My SRID is in the query: 82087, and it is the same in all my geometries, in USER_SDO_GEOM_METADATA, and in the spatial index.
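A quick way to double-check that claim, with a placeholder table name (my_polygons): the SRID stored in the data and the one registered in the metadata should both be 82087.

```sql
-- SRIDs actually present in the geometry column (should return one row: 82087).
SELECT DISTINCT t.geom.sdo_srid FROM my_polygons t;

-- SRID registered in the dictionary for that layer.
SELECT table_name, column_name, srid
FROM user_sdo_geom_metadata
WHERE table_name = 'MY_POLYGONS';
```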
I don't really need to use SDO_FILTER, but I think it's the fastest way for my purpose.
Edited by: user8773604 on 5-feb-2013 0.42
On my 220.127.116.11 database, I ran some SDO_FILTER count(*) queries against a non-partitioned cadastral table of mine with 33 million polygons. With a window returning about 1 million rows, it consistently comes back in under 2.5 seconds once the data is in the buffer cache; smaller windows with ~100,000 rows return sub-second. I think that is pretty reasonable given the table isn't partitioned or clustered.
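The shape of the queries I ran was roughly this (table and bind names are placeholders), using an optimized-rectangle window in the data's SRID:

```sql
-- Count polygons whose index entries interact with an ad-hoc rectangular
-- window; 2003 = 2D polygon, etype 1003/3 = optimized rectangle.
SELECT COUNT(*)
FROM cadastral_polygons t
WHERE SDO_FILTER(
        t.geom,
        SDO_GEOMETRY(2003, 82087, NULL,
                     SDO_ELEM_INFO_ARRAY(1, 1003, 3),
                     SDO_ORDINATE_ARRAY(:x_min, :y_min, :x_max, :y_max))
      ) = 'TRUE';
```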
So first of all, I'd suggest you patch up to the latest patchset (currently 18.104.22.168), as there are often performance improvements in the releases as bugs are fixed.
Then ensure your buffer cache is sized appropriately and that the database is running on sensible hardware.
If you're still getting slow response times then I would look at clustering and partitioning the data.
"I don't really need to use SDO_FILTER, but I think it's the faster way for my purpose"

For the use case you've described, SDO_FILTER is appropriate.
Q: You say that you are trying to count how many geometries there are - and if there are too many you don't want to draw them at all?
IMHO that's a really odd, inconsistent (and inefficient) way to handle too much data.
Normally you either limit the scale at which the layer can be refreshed on the client side for the entire area (zoom out too far and it simply turns off), or you put a limiter in the view/package (e.g. WHERE ROWNUM < 1001) to return only "up to" N rows of data and keep stupid users from hammering your database.
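The limiter approach might look like this (all names and the 1000-row cap are placeholder assumptions): the client gets at most N features per refresh, no matter how wide the window is.

```sql
-- Return at most 1000 features for any query window so a single
-- refresh cannot hammer the database.
SELECT t.id, t.geom
FROM cadastral_polygons t
WHERE SDO_FILTER(t.geom, :query_window) = 'TRUE'
  AND ROWNUM < 1001;
```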
You could also use greeking: make a second copy of the layer with simplified detail for when you want to see that something is there but don't need full resolution, keeping the detailed polygons for close zooms. With a pipelined function you could decide which copy to use based on the input coordinate range and return it as a single layer.
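A very rough sketch of that pipelined-function idea; every name (the types, the two tables, the scale threshold) is hypothetical, and in practice you'd derive the switch condition from the window extent rather than a passed-in scale:

```sql
-- Assumed row/collection types for the pipelined function's output.
CREATE TYPE geom_row AS OBJECT (id NUMBER, geom SDO_GEOMETRY);
/
CREATE TYPE geom_row_tab AS TABLE OF geom_row;
/

-- Serve the simplified copy when zoomed far out, the detailed copy
-- otherwise, so the client sees a single layer.
CREATE OR REPLACE FUNCTION get_layer(p_window IN SDO_GEOMETRY,
                                     p_scale  IN NUMBER)
  RETURN geom_row_tab PIPELINED
AS
BEGIN
  IF p_scale > 50000 THEN  -- hypothetical threshold
    FOR r IN (SELECT t.id, t.geom FROM cadastral_simplified t
              WHERE SDO_FILTER(t.geom, p_window) = 'TRUE') LOOP
      PIPE ROW (geom_row(r.id, r.geom));
    END LOOP;
  ELSE
    FOR r IN (SELECT t.id, t.geom FROM cadastral_polygons t
              WHERE SDO_FILTER(t.geom, p_window) = 'TRUE') LOOP
      PIPE ROW (geom_row(r.id, r.geom));
    END LOOP;
  END IF;
  RETURN;
END;
/
```

The client would then query `SELECT * FROM TABLE(get_layer(:window, :scale))` and never needs to know two copies exist.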