
    Can I refer a pipelined table function over a dblink

    60716
      Hi,

      I need to select from a pipelined function in another database.
      I have the same pipelined function in my own schema, and this is how I select from it:

      SELECT *
      FROM TABLE (my_utils.getId('1609827777',NULL,'542973111','20080102',NULL));

      ABC_ID      XYZ_ID   OPQ_ID     STA_DT    END_DT    P_FLG  TYPE  FLAG
      1609827777  XX30292  542973111  20080102  99991231  N      1     N

      What if the same pipelined function is in another database and
      I have a dblink called OTHERDB1.me.org? How do I select from it over that link?

      Any input on this will be much appreciated.

      Thanks in advance
        • 1. Re: Can I refer a pipelined table function over a dblink
          damorgan
          In far less time than it took to write your question you could have:

          1. Created a DB link
          2. Created a pipelined table function
          3. Tried it.

          I would suggest you do just that so that you can find out the answer for your unstated version of the product.
          • 2. Re: Can I refer a pipelined table function over a dblink
            60716
            Thank you for the response.

            We are running Oracle Database 10g Enterprise Edition Release 10.2.0.1.0

            I have already created the dblink and the pipelined table function in the other database.
            The issue is that I doubt whether a pipelined table function can be queried over a dblink.
            I could not find any documentation online that describes using a dblink this way.
            If it is possible, I don't know the syntax for it.

            So I'm not able to test or try it. If you know the syntax, please let me know.

            Thanks
            • 3. Re: Can I refer a pipelined table function over a dblink
              Mariusz Olejnik
              It isn't possible to select from a remote pipelined table function (over a dblink),
              but I found a workaround.

              Create a package "pip_pkg" containing (a minimal sketch follows this outline):

              - global parameters as variables:
              gv_param1 type1 := null; --[...]
              gv_paramN typeN := null;

              - one setter function for all the global parameters (it always returns 1 as its result):
              function set_params(
              in_param1 in type1, --[...,]
              in_paramN in typeN
              ) return number;

              - a pipelined function whose parameter default values come from the global variables:
              function get_resultset(
              in_param1 in type1 default gv_param1, --[...,]
              in_paramN in typeN default gv_paramN
              ) return tab_resultset pipelined;
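
              For illustration, here is a minimal sketch of such a package, assuming two NUMBER
              parameters and a trivial one-column result set (the parameter names, types and the
              rows piped back are assumptions for the example, not from the original post):

              CREATE OR REPLACE PACKAGE pip_pkg AS
                -- collection type returned by the pipelined function
                TYPE tab_resultset IS TABLE OF NUMBER;

                -- global parameters as package variables
                gv_param1 NUMBER := NULL;
                gv_param2 NUMBER := NULL;

                -- one setter for all global parameters; always returns 1
                FUNCTION set_params(in_param1 IN NUMBER, in_param2 IN NUMBER) RETURN NUMBER;

                -- pipelined function whose parameter defaults come from the globals
                FUNCTION get_resultset(
                  in_param1 IN NUMBER DEFAULT gv_param1,
                  in_param2 IN NUMBER DEFAULT gv_param2
                ) RETURN tab_resultset PIPELINED;
              END pip_pkg;
              /

              CREATE OR REPLACE PACKAGE BODY pip_pkg AS
                FUNCTION set_params(in_param1 IN NUMBER, in_param2 IN NUMBER) RETURN NUMBER IS
                BEGIN
                  gv_param1 := in_param1;
                  gv_param2 := in_param2;
                  RETURN 1;  -- constant result, so the call can be used as a WHERE condition
                END set_params;

                FUNCTION get_resultset(
                  in_param1 IN NUMBER DEFAULT gv_param1,
                  in_param2 IN NUMBER DEFAULT gv_param2
                ) RETURN tab_resultset PIPELINED IS
                BEGIN
                  -- dummy implementation: just pipe the two parameter values back
                  PIPE ROW (in_param1);
                  PIPE ROW (in_param2);
                  RETURN;
                END get_resultset;
              END pip_pkg;
              /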

              Create a view that selects from the pipelined table function without passing any parameters:
              CREATE OR REPLACE VIEW PIP_PKG_VIEW AS SELECT * FROM TABLE(pip_pkg.get_resultset) t;

              Now we can execute SQL that uses a condition to set the parameters:
              select * from PIP_PKG_VIEW where pip_pkg.set_params(1,2)=1

              Bingo! We can execute this SQL over dblink too!
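
              Presumably, from another database that would look something like this (not tested
              here; since every object referenced lives on the remote database, Oracle should be
              able to execute the whole statement remotely), using the db link name from the
              original question:

              select * from PIP_PKG_VIEW@OTHERDB1.me.org t
              where pip_pkg.set_params@OTHERDB1.me.org(1,2) = 1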

              Limitation:
              Once set, the package global variables keep their values for the entire session.
              We can use this view in many independent sessions at the same time, but we should not open
              several cursors on this view in one session, since they would all share the same global parameter values.

              More tips & tricks: Pipelined table functions tips & tricks
              • 4. Re: Can I refer a pipelined table function over a dblink
                Billy~Verreynne
                > I need to select from a pipelined function in another database.

                The TABLE SQL function is a local SQL engine function - it cannot access a remote pipeline procedure. Thus you cannot use a remote pipeline function directly in your select, as you would need the TABLE() function itself to run remotely.

                To achieve this you need to create a view on the remote side that includes the TABLE() function - this will thus ensure that the TABLE() and pipeline are both run together on the remote database.

                Problem: with the TABLE() and pipeline in the remote view, how do you pass variables to it?

                The view now has to supply the parameter values for the pipeline - you cannot pass them directly anymore, as you reference the remote view and not the remote pipeline itself.

                Solution: the view can use context variables, e.g.
                create or replace view foo as select * from TABLE(foo_pipe( SYS_CONTEXT('workspace','empID') ))
                A context is thus needed on the remote database, plus a procedure that allows this context to be populated with name-value pairs at will (unlike a trusted context used for FGAC and VPDB).
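
                A minimal sketch of that remote-side setup, assuming a remote pipelined function foo_pipe
                as in the view above (the context name 'workspace' and attribute 'empID' are taken from
                that example; the package name workspace_mgr and its set_value procedure are illustrative
                assumptions):

                -- on the remote database
                CREATE OR REPLACE CONTEXT workspace USING workspace_mgr;

                CREATE OR REPLACE PACKAGE workspace_mgr AS
                  -- populates the 'workspace' context with arbitrary name-value pairs
                  PROCEDURE set_value(p_name IN VARCHAR2, p_value IN VARCHAR2);
                END workspace_mgr;
                /
                CREATE OR REPLACE PACKAGE BODY workspace_mgr AS
                  PROCEDURE set_value(p_name IN VARCHAR2, p_value IN VARCHAR2) IS
                  BEGIN
                    -- only this package may set attributes in the context, as it is
                    -- the package named in the CREATE CONTEXT ... USING clause
                    DBMS_SESSION.SET_CONTEXT('workspace', p_name, p_value);
                  END set_value;
                END workspace_mgr;
                /
                CREATE OR REPLACE VIEW foo AS
                SELECT * FROM TABLE(foo_pipe( SYS_CONTEXT('workspace','empID') ));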

                So before you can use the remote view, you first need to pass the name-values across into the remote context, for that remote view to use.

                This is not really an issue, as you can implement that as a local pipelined table function: the local pipeline accepts the values, calls the remote database to populate the remote context, and then performs a select on the remote view, piping the results to the caller. A rough sketch follows.
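
                A rough sketch of that local wrapper, reusing the db link name from the question and the
                hypothetical workspace_mgr package from the sketch above (the object types and the column
                list are assumptions and would have to mirror the remote view's actual columns):

                -- on the local database
                CREATE TYPE t_foo_row AS OBJECT (abc_id VARCHAR2(20), xyz_id VARCHAR2(20));
                /
                CREATE TYPE t_foo_tab AS TABLE OF t_foo_row;
                /
                CREATE OR REPLACE FUNCTION local_foo(p_emp_id IN VARCHAR2)
                  RETURN t_foo_tab PIPELINED IS
                BEGIN
                  -- 1. push the parameter value into the remote context over the db link
                  workspace_mgr.set_value@OTHERDB1.me.org('empID', p_emp_id);

                  -- 2. select from the remote view (TABLE() and the pipeline both run
                  --    remotely, inside the view) and pipe the rows back to the caller
                  FOR r IN (SELECT abc_id, xyz_id FROM foo@OTHERDB1.me.org) LOOP
                    PIPE ROW ( t_foo_row(r.abc_id, r.xyz_id) );
                  END LOOP;

                  RETURN;
                END local_foo;
                /

                -- usage on the local database:
                SELECT * FROM TABLE(local_foo('1609827777'));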
                • 5. Re: Can I refer a pipelined table function over a dblink
                  Mariusz Olejnik
                  > Solution: the view can use context variables, e.g.
                  The difference between Billy's solution and mine is the method of passing the parameters. Good idea, Billy!
                  But I prefer my way - it is more like encapsulation...
                  • 6. Re: Can I refer a pipelined table function over a dblink
                    Billy~Verreynne
                    Using PL/SQL static vars requires session state in the PL/SQL engine. And as the SQL engine is used as the access method (running SQL against a view over a pipeline), I think it is more robust to use SQL state instead (i.e. a SQL context).

                    Also, it makes the usage and methods fit the whole SQL concept (FGAC & VPDB and "parameterised" queries) a lot better IMO.

                    Why re-invent the wheel (using PL/SQL static vars for keeping state) when Oracle already provides that feature in the form of an Oracle Context?