But is that a problem related to partitioning, or a problem related to index design?
Surely the basic premise is that if a query is to ignore partitioning, it should ignore local indexes too - doing, for example, a single scan of one global index rather than separate scans of 10,001 local index partitions?
Partitioning, after all, is not the feature you choose to speed up access across the whole table (unless you are parallelising full table scans along partition boundaries).
No, I wouldn't call it a problem with index design. The choice between local and global indexes cannot always be made solely on performance requirements; Information Lifecycle Management requirements may easily trump performance. Imagine a multi-terabyte table with monthly partitions that have to be truncated, moved, compressed or dropped after a certain period of time. Most likely, you won't be able to define any global indexes on such a table, because rebuilding them (e.g. after truncating a partition) would be prohibitively expensive. So you're stuck with local indexes simply because you have no other choice.
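To make the ILM constraint concrete, here is a minimal sketch (table, column and index names are hypothetical) of a range-partitioned table with a local index. Ageing out a partition is then a cheap metadata-level operation; a global index on the same table would either be left unusable by the drop or would have to be maintained at considerable cost:

```sql
-- Hypothetical monthly-partitioned fact table
CREATE TABLE sales_fact (
  sale_date   DATE,
  customer_id NUMBER,
  amount      NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_initial VALUES LESS THAN (DATE '2020-02-01'));

-- Local index: each index partition covers exactly one table partition
CREATE INDEX sales_fact_cust_ix ON sales_fact (customer_id) LOCAL;

-- Ageing out a month is a cheap partition-level operation;
-- the matching local index partition is dropped along with it.
ALTER TABLE sales_fact DROP PARTITION p_initial;

-- With a global index instead, the same drop would either mark the
-- index UNUSABLE (forcing a rebuild) or require the much more
-- expensive online maintenance of:
-- ALTER TABLE sales_fact DROP PARTITION p_initial UPDATE GLOBAL INDEXES;
```

The point is that the index choice here is dictated by the partition-maintenance operations, not by query performance.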
Now imagine that such a system moves from less to more granular partitioning (e.g. from monthly to daily). Queries using local indexes without partition pruning would now probe up to 30 times as many index partitions, and what was suboptimal but acceptable may become an absolute disaster.
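As a hedged illustration of that pruning point (again with hypothetical names): on a table range-partitioned by day with a local index on customer_id, a query with no predicate on the partition key cannot prune, so the local index scan is repeated once per partition:

```sql
-- Assume sales_fact is range-partitioned by day on sale_date,
-- with a LOCAL index on customer_id.

-- No predicate on the partition key: no pruning is possible, so the
-- optimizer must probe every partition of the local index - one
-- index range scan per daily partition, ~30x the monthly case.
SELECT SUM(amount)
FROM   sales_fact
WHERE  customer_id = 42;

-- A partition-key predicate restores pruning: only the index
-- partitions for the named days are probed.
SELECT SUM(amount)
FROM   sales_fact
WHERE  customer_id = 42
AND    sale_date >= DATE '2020-01-01'
AND    sale_date <  DATE '2020-02-01';
```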
How do we know that OP's situation isn't like this?
Good point (and one that is in fact relevant in my case - as global indexing is not feasible in my largest database). However, I still do not see a partitioning problem. I see a requirements conflict.
This conflict does not make partitioning bad. It simply means that you need to evaluate the conflict properly and decide which side you treat as the critical one, at the expense of the other.
Claiming, however, that increasing the number of partitions is performance suicide - without a shred of evidence or justification to back it up, as the OP indicated was said to him - is IMO bs.