Thank you for the response.
We are running Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
I have already created a dblink and a pipelined table function in the other database.
The issue I'm having is that I doubt whether a pipelined table function supports dblinks.
I could not find any documentation online that refers to dblink use.
If it does, I don't know the syntax for it.
So I'm not able to test or try it. If you know the syntax, please let me know.
It isn't possible to select from a remote pipelined table function (over a dblink) directly.
But I found a workaround.
Create a package "pip_pkg" with:
- global parameters as package variables:
gv_param1 type1 := null; --[...]
gv_paramN typeN := null;
- one setter function for all global parameters (it always returns 1 as its result):
function set_params(
in_param1 in type1, --[...,]
in_paramN in typeN
) return number;
- a pipelined function whose parameters default to the global variables:
function get_resultset(
in_param1 in type1 default gv_param1, --[...,]
in_paramN in typeN default gv_paramN
) return tab_resultset pipelined;
Create a view that selects from the pipelined table function without parameters:
CREATE OR REPLACE VIEW PIP_PKG_VIEW AS SELECT * FROM TABLE(pip_pkg.get_resultset) t;
Now we can execute SQL, using a WHERE condition to set the parameters:
select * from PIP_PKG_VIEW where pip_pkg.set_params(1,2)=1
Bingo! We can execute this SQL over a dblink too!
Once set, the package global variables keep their values for the entire session.
We can use this view in many independent sessions at the same time, but we should not open multiple cursors on the one view within a single session.
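Put together, the workaround above can be sketched like this (a minimal sketch assuming a single NUMBER parameter; the object type `row_t`, the collection `tab_resultset`, and the dummy row logic are assumptions, not the original poster's code):

```sql
-- Assumed result types for the sketch
CREATE TYPE row_t AS OBJECT (id NUMBER, val VARCHAR2(100));
/
CREATE TYPE tab_resultset AS TABLE OF row_t;
/
CREATE OR REPLACE PACKAGE pip_pkg AS
  gv_param1 NUMBER := NULL;  -- session-scoped state

  FUNCTION set_params(in_param1 IN NUMBER) RETURN NUMBER;

  FUNCTION get_resultset(in_param1 IN NUMBER DEFAULT gv_param1)
    RETURN tab_resultset PIPELINED;
END pip_pkg;
/
CREATE OR REPLACE PACKAGE BODY pip_pkg AS
  FUNCTION set_params(in_param1 IN NUMBER) RETURN NUMBER IS
  BEGIN
    gv_param1 := in_param1;
    RETURN 1;  -- always 1, so the call can sit in a WHERE clause
  END;

  FUNCTION get_resultset(in_param1 IN NUMBER DEFAULT gv_param1)
    RETURN tab_resultset PIPELINED IS
  BEGIN
    -- produce rows based on the parameter (dummy logic for the sketch)
    PIPE ROW (row_t(in_param1, 'value ' || in_param1));
    RETURN;
  END;
END pip_pkg;
/
CREATE OR REPLACE VIEW PIP_PKG_VIEW AS
  SELECT * FROM TABLE(pip_pkg.get_resultset) t;
```

From another database the call would then look something like `SELECT * FROM PIP_PKG_VIEW@remote_db WHERE pip_pkg.set_params@remote_db(1) = 1` (where `remote_db` is a hypothetical dblink name); both the setter call and the view resolve on the remote side, so the TABLE() function runs remotely as required.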
More tips & tricks: Pipelined table functions tips & tricks
> I need to select from a pipelined function in another database.
The TABLE SQL function is a local SQL engine function - it cannot access a remote pipelined procedure. Thus you cannot use a remote pipelined function directly in your select, as you need the TABLE() function to run remotely.
To achieve this you need to create a view on the remote side that includes the TABLE() function - this will thus ensure that the TABLE() and pipeline are both run together on the remote database.
Problem: with the TABLE() and pipeline in the remote view, how do you pass variables to it?
The view now has to have the parameter values for the pipeline - you cannot pass that directly anymore as you reference the remote view and not the remote pipeline itself.
Solution: the view can use context variables, e.g.
create or replace view foo as
select * from TABLE(foo_pipe( SYS_CONTEXT('workspace','empID') ));
A context is thus needed on the remote database, along with a procedure that allows this context to be populated with name-value pairs at will (unlike a trusted context for FGAC and VPDBs).
So before you can use the remote view, you first need to pass the name-values across into the remote context, for that remote view to use.
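On the remote side, that context and its setter can be sketched as follows (the context name `workspace` and attribute `empID` follow the view example above; the package name `ctx_pkg` and setter signature are assumptions):

```sql
-- The context can only be written through its owning package, ctx_pkg
CREATE OR REPLACE CONTEXT workspace USING ctx_pkg;

CREATE OR REPLACE PACKAGE ctx_pkg AS
  PROCEDURE set_value(p_name IN VARCHAR2, p_value IN VARCHAR2);
END ctx_pkg;
/
CREATE OR REPLACE PACKAGE BODY ctx_pkg AS
  PROCEDURE set_value(p_name IN VARCHAR2, p_value IN VARCHAR2) IS
  BEGIN
    -- DBMS_SESSION.SET_CONTEXT stores the name-value pair in the context
    DBMS_SESSION.SET_CONTEXT('workspace', p_name, p_value);
  END;
END ctx_pkg;
/
```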
This is not really an issue - as you can implement that as a local pipeline table. Where the local pipeline accepts the values, call the remote database to populate the remote context, and then perform a select on the remote view, piping the results to the caller.
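That local wrapper pipeline might look roughly like this (the dblink name `remote_db`, the row object `row_t`, and the collection `tab_resultset` are assumptions for the sketch):

```sql
CREATE OR REPLACE FUNCTION foo_pipe_local(p_emp_id IN NUMBER)
RETURN tab_resultset PIPELINED IS
BEGIN
  -- 1. push the parameter value into the remote context over the dblink
  ctx_pkg.set_value@remote_db('empID', TO_CHAR(p_emp_id));
  -- 2. select from the remote view, which reads SYS_CONTEXT remotely
  FOR r IN (SELECT * FROM foo@remote_db) LOOP
    -- 3. pipe each remote row back to the local caller
    PIPE ROW (row_t(r.id, r.val));
  END LOOP;
  RETURN;
END foo_pipe_local;
/
```

Locally, callers then simply write `SELECT * FROM TABLE(foo_pipe_local(42))` and never see the dblink or the context plumbing.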
Using PL/SQL static variables requires session state in the PL/SQL engine. And as the SQL engine is the access method here (running SQL against a view over a pipeline), I think it is more robust to use SQL state instead (i.e. a SQL context).
Also, it makes the usage and methods fit the whole SQL concept (FGAC & VPDB and "parameterised" queries) a lot better IMO.
Why re-invent the wheel (using PL/SQL static vars for keeping state) when Oracle already provides that feature in the form of an Oracle Context?