Did you make sure you've gathered fresh and valid statistics for those tables, or did you try a hint to force index usage?
But it could be that slow anyway with that amount of data processed in one step. XE doesn't only have the 11 GB data limit; it also uses at most 1 GB of memory in total (SGA + PGA). I'd expect both of your queries to exceed that.
"Partitioning breaks bottlenecks" => You could split your "todelete" table into multiple smaller parts, and to keep them in memory for speed, you could use a temporary table for that purpose: use a loop to iterate over blocks of perhaps 250,000 entries and delete using each block as the reference.
I know this sounds complicated, but after all, that's part of what the paid editions are supposed to make easier...
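A minimal PL/SQL sketch of that batching idea might look like the following. The table and column names (`borabiec.todelete`, `borabiec.abiec_animal`, `id`) are taken from the queries in this thread; the batch size and commit strategy are assumptions you'd want to tune:

```sql
-- Sketch only: delete in blocks of ~250,000 ids so each batch
-- fits comfortably in XE's limited memory.
DECLARE
  CURSOR c_ids IS
    SELECT td.id FROM borabiec.todelete td;
  TYPE t_ids IS TABLE OF borabiec.todelete.id%TYPE;
  l_ids t_ids;
BEGIN
  OPEN c_ids;
  LOOP
    -- Fetch the next block of at most 250,000 ids into memory.
    FETCH c_ids BULK COLLECT INTO l_ids LIMIT 250000;
    EXIT WHEN l_ids.COUNT = 0;

    -- Delete that block with a single bulk-bound statement.
    FORALL i IN 1 .. l_ids.COUNT
      DELETE FROM borabiec.abiec_animal aa
      WHERE aa.id = l_ids(i);

    COMMIT;  -- commit per batch to keep undo/redo small
  END LOOP;
  CLOSE c_ids;
END;
/
```

Committing per batch trades transactional atomicity for smaller undo usage, which is usually the right trade on a memory-constrained XE instance.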
Because this would take even more time to complete:
for each row it would perform a full scan of todelete.
I also tried
delete from borabiec.abiec_animal aa where aa.id in (select td.id from borabiec.todelete td where td.id = aa.id);
but it is slow as well.
(and in my experience, EXISTS is faster than a subselect with IN)
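For comparison, the EXISTS form of the same delete would look like this (table and column names taken from the query above; whether it actually beats IN here depends on the optimizer and the statistics):

```sql
-- Sketch of the EXISTS variant of the slow DELETE shown above.
DELETE FROM borabiec.abiec_animal aa
WHERE EXISTS (SELECT 1
              FROM borabiec.todelete td
              WHERE td.id = aa.id);
```

With fresh statistics, the optimizer often rewrites IN and EXISTS to the same semi-join plan, so checking the actual execution plan is more informative than the syntax choice.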