Parameterized Views
Comments
-
Hello, Sven, my old friend. You and I do like to disagree from time to time, it seems! (But at least we agree on our “up vote” decision.) Look at the SQL statement below. I don’t need to say what it does. Suffice it to say that our good friend Stew Ashton wrote it to meet the requirements I explain in my “Transforming one table to another” talk and whitepaper—and that none of my exhaustive testing has found it ever to give the wrong answer.
Notice that the text of the statement is about seventy lines. Of course, that’s nothing compared to some statements we see. But it’s still a lot. Notice too that the placeholder :Nof_Chunks is mentioned nine times. I can’t see any way to expose it as a projected column against which I could express a restriction—as you argue I should be able to do. Can you?
I wanted to test the correctness of this query by running it with lots of different actual values bound to the placeholder. Of course, I found a way, by encapsulating it as static SQL in a PL/SQL subprogram. But just think how much more natural it would have been to hide all this complexity from sight in a single view parameterized by Nof_Chunks.
with
Extents_With_Sums as (
select
Sum(Nof_Blocks) over() as Total_Blocks,
Sum(Nof_Blocks) over(order by Dobj#, File#, Block#) - Nof_Blocks as First_Extent_Block,
Sum(Nof_Blocks) over(order by Dobj#, File#, Block#) as Next_Extent_Block,
e.*
from Extents e
),
Sum_Blocks as (select Total_Blocks v from Extents_With_Sums where rownum = 1),
Assert as (
select
x.Error_On_Sql_Assert_Failure(
case
when :Nof_Chunks > 0 and :Nof_Chunks <= Sum_Blocks.v then 1
else 0
end,
'Assert failed. Sum_Blocks: '||To_Char(Sum_Blocks.v)||
'; Nof_Chunks: '||To_Char(:Nof_Chunks)) as v
from Sum_Blocks
),
Filtered_Extents as (
select * from (
select
Width_Bucket(First_Extent_Block-1, 0, Total_Blocks, :Nof_Chunks) as Prev_Chunk,
Width_Bucket(First_Extent_Block, 0, Total_Blocks, :Nof_Chunks) as First_Chunk,
Width_Bucket(Next_Extent_Block-1, 0, Total_Blocks, :Nof_Chunks) as Last_Chunk,
Width_Bucket(Next_Extent_Block, 0, Total_Blocks, :Nof_Chunks) as Next_Chunk,
e.*
from Extents_With_Sums e), Assert
where Prev_Chunk < Next_Chunk
and Assert.v = 1
),
Expanded_Extents as (
select
First_Chunk + Level - 1 as Chunk#,
Prev_Chunk, Next_Chunk,
Dobj#, File#, Block#,
Total_Blocks, First_Extent_Block
from Filtered_Extents
connect by First_Extent_Block = prior First_Extent_Block
and prior Sys_Guid() is not null
and First_Chunk + Level - 1 <= Last_Chunk
),
RowIDs as (
select Chunk#,
case when Chunk# > Prev_Chunk then
CharToRowID(Rid.Create_Min(
Dobj#,
File#,
Block# + Floor(((Chunk#-1) * Total_Blocks)/ :Nof_Chunks - 1) + 1 - First_Extent_Block))
end as Start_RowID,
case when Chunk# < Next_Chunk then
CharToRowID(Rid.Create_Max(
Dobj#,
File#,
Block# + Floor((Chunk# * Total_Blocks)/ :Nof_Chunks - 1) - First_Extent_Block))
end as End_RowID
from Expanded_Extents
)
select
'Extent_Based_Sql_Ashton_2',
Chunk#,
Min(Start_RowID) as Start_RowID,
Max(End_RowID) as End_RowID
from RowIDs
group by Chunk#
Dear Bryn,
I think we agree most of the times. But naturally the discussions start in areas where we have different points of view about something.
It doesn't necessarily mean that we disagree; often it is more of an "it depends" thing.
I personally love those discussions and even if we differ a little I always understand and respect your reasoning.
The point I was trying to make with my previous post is that I did not think "it is self-evident that formal parameters are missing in SQL." And I thought about it for quite some time.
I was trying to write down those thoughts but I'm not sure if I was able to make my point of view understandable.
You brought up an excellent example where a parameterized view seems to be a very useful concept.
Which made me think about:
- What is the general feature of this type of query that makes us feel the need for a parameterized view (PV)?
- And are there pure SQL solutions or approaches that can do the same as a parameterized view? That doesn't automatically mean that one alternative is better than the other, but they would offer an additional tool in our programming toolbox.
The answer to the second question is yes. But I need to find some time to test this and post the query here.
Your example is a case where we do not have a clear mapping of a parameter to a database table column.
This is partially relevant for the performance aspect, like pushing the predicates down through the view into binds on a lower level (e.g. a table column).
Additionally since we do not know the value of the parameter up front, we have trouble providing a view that exposes access to this parameter via a view column.
This seems to be a decisive difference from other types of queries: a parameter that needs to be used inside a "complex calculation" (CC for short), or a parameter that has no distinct value points.
It is not a trivial task to formulate a query in such a way that this parameter becomes a view column.
The reason is, imho, that part of this CC needs to be reversed, which can be difficult to nearly impossible to do.
This is especially true if the CC involves data from several rows.
I'll try to show what I mean with simplified examples. All examples will run on http://livesql.oracle.com. Where I used bind parameters, we need to put in the exact values there.
Example 1)
Let's start with an easy task. We want to see the count and summed salary for all employees in one department.
We can argue that a parameterized view would solve this problem.
select * from v_department_overview(:departmentID);
However, for such a simple requirement most of us would instead provide a grouped view and pass the parameter into the view via a WHERE clause.
create or replace view v_department_overview as
select d.*,
       count(e.employee_id)    nof_employee,
       sum(e.salary)           department_cost,
       round(avg(e.salary), 2) avg_salary,
       l.city, l.state_province, l.country_id
from hr.departments d
left join hr.employees e on e.department_id = d.department_id
join hr.locations l on l.location_id = d.location_id
group by d.department_id, d.department_name, d.manager_id, d.location_id,
         l.city, l.state_province, l.country_id;

select * from v_department_overview;
DEPARTMENT_ID  DEPARTMENT_NAME    MANAGER_ID  LOCATION_ID  NOF_EMPLOYEE  DEPARTMENT_COST  AVG_SALARY  CITY       STATE_PROVINCE  COUNTRY_ID
          270  Payroll                     -         1700             0                -           -  Seattle    Washington      US
           90  Executive                 100         1700             3            58000    19333.33  Seattle    Washington      US
           80  Sales                     145         2500            34           304500     8955.88  Oxford     Oxford          UK
          240  Government Sales            -         1700             0                -           -  Seattle    Washington      US
          220  NOC                         -         1700             0                -           -  Seattle    Washington      US
           60  IT                        103         1400             5            28800        5760  Southlake  Texas           US
          200  Operations                  -         1700             0                -           -  Seattle    Washington      US
...
select * from v_department_overview where department_id = :departmentID;
As we can see, this is a typical case where a column in a view is used exactly like a parameter.
Most of the time I would not see the need to create a parameterized view for such a case.
Although I did previously give an example where I would diverge from this general rule and prefer a PV instead, e.g. when I want to make sure the view is ALWAYS executed with such a filter.
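One way to get close to "the filter is ALWAYS applied" with today's features is an application context; below is a minimal sketch (the context dept_ctx, the package dept_ctx_pkg and the view name are made up for illustration, and the pre-query setup call is exactly the procedural step that makes this approach feel less declarative):
create context dept_ctx using dept_ctx_pkg;

create or replace package dept_ctx_pkg is
  procedure set_department(p_department_id in number);
end dept_ctx_pkg;
/
create or replace package body dept_ctx_pkg is
  procedure set_department(p_department_id in number) is
  begin
    -- only this package is allowed to write into the dept_ctx namespace
    dbms_session.set_context('dept_ctx', 'department_id', to_char(p_department_id));
  end set_department;
end dept_ctx_pkg;
/
create or replace view v_department_overview_current as
select *
from   v_department_overview
where  department_id = to_number(sys_context('dept_ctx', 'department_id'));

-- usage: set the "parameter" first, then query the always-filtered view
-- exec dept_ctx_pkg.set_department(80);
-- select * from v_department_overview_current;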
Example 2)
We now want to see all departments based upon two parameters.
- :input_average_salary
- some kind of :confidence_interval
Essentially we want to input an average salary and all departments should be returned that have an average salary that is close enough to our search salary.
This is not exactly what a confidence interval (in statistics) is, but it keeps the example simple. Using a real confidence interval would require a different select and would totally change the base view.
We can do it with a select based upon the previous view.
select *
from v_department_overview
where avg_salary between :input_average_salary * (1 - :confidence_interval)
                     and :input_average_salary * (1 + :confidence_interval);
or (to be able to run it in livesql)
with params as (select 10000 input_average_salary, 0.15 confidence_interval from dual)
select *
from params p
cross join v_department_overview v
where v.avg_salary between p.input_average_salary * (1 - p.confidence_interval)
                       and p.input_average_salary * (1 + p.confidence_interval);
INPUT_AVERAGE_SALARY  CONFIDENCE_INTERVAL  DEPARTMENT_ID  DEPARTMENT_NAME   MANAGER_ID  LOCATION_ID  NOF_EMPLOYEE  DEPARTMENT_COST  AVG_SALARY  CITY     STATE_PROVINCE  COUNTRY_ID
               10000                  .15             20  Marketing                201         1800             2            19000        9500  Toronto  Ontario         CA
               10000                  .15             70  Public Relations         204         2700             1            10000       10000  Munich   Bavaria         DE
               10000                  .15             80  Sales                    145         2500            34           304500     8955.88  Oxford   Oxford          UK
               10000                  .15            100  Finance                  108         1700             6            51608     8601.33  Seattle  Washington      US
               10000                  .15            110  Accounting               205         1700             2            20308       10154  Seattle  Washington      US
What we see here is that part of the (business) logic is written into the calling code.
A parameterized view instead would allow us to reuse this piece of code and thereby improve maintainability.
select * from v_department_overview(:input_average_salary,:confidence_interval);
The second query here also shows an approach for moving the parameters into a subquery and possibly into a table.
Exchange "params" with a real table instead of a factored subquery.
And then this whole query, including the params table, can be made into a view (without bind parameters).
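A minimal sketch of that idea for example 2 (the table and view names are made up for illustration):
-- session-private parameter holder instead of bind variables
create global temporary table department_search_params (
  input_average_salary  number,
  confidence_interval   number
) on commit preserve rows;

create or replace view v_department_search as
select v.*
from   department_search_params p
cross join v_department_overview v
where  v.avg_salary between p.input_average_salary * (1 - p.confidence_interval)
                        and p.input_average_salary * (1 + p.confidence_interval);

-- usage: "bind" the parameters by inserting a row, then query the view
-- insert into department_search_params values (10000, 0.15);
-- select * from v_department_search;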
The same can be done for Bryn's chunked query example.
The "assert" block seems like a candidate where we can input an additional table holding all the test cases.
I can try to rewrite it, but currently do not have the time to create the needed test case for it.
btw: I'm not happy with the 2nd example yet. I might try to find a better|additional way to show what I want to express.
-------
From a pure mathematical approach, I would state that ALL cases where you want a parameterized view can be expressed by using a (temp) table.
This is because a parameter (or a variable) is nothing other than data. And the natural place to put data in SQL is a table, maybe with a different degree of persistence, hence a temporary table.
The columns of this (temp) table then act similarly to a parameter declaration.
There is a set of problems where we do not need to put the parameter data into a temp table.
Instead, a usually simple transformation from a filter in the where clause to an exposed column in a view is possible.
My first example here is such a case. The one that ApexBine posted at the start of this thread is also a good example of such a situation.
Furthermore, all cases that we have discussed so far exist near the context switch from another language to SQL (I called it the bridging point earlier).
This seems typical: the more procedural thinking is part of the solution we implement, the higher the need for a parameterized view.
We have to ask ourselves: what is the best encapsulation point? Is it a view, or maybe some PL/SQL API instead?
Bryn, you describe that this SQL statement is already part of some PL/SQL logic. What are the arguments that made you choose SQL over PL/SQL when thinking about a standard interface to this logic?
Writing a PL/SQL API that in turn holds the view with the parameter is fairly easy to do.
This also solves the problem of DRY (don't repeat yourself).
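A minimal sketch of such an API for example 2, keeping the statement in one single place (the package name is made up for illustration):
create or replace package department_api is
  function department_overview(
    p_input_average_salary in number,
    p_confidence_interval  in number) return sys_refcursor;
end department_api;
/
create or replace package body department_api is
  function department_overview(
    p_input_average_salary in number,
    p_confidence_interval  in number) return sys_refcursor
  is
    c sys_refcursor;
  begin
    -- the single source of the statement; every caller goes through here (DRY)
    open c for
      select *
      from   v_department_overview
      where  avg_salary between p_input_average_salary * (1 - p_confidence_interval)
                            and p_input_average_salary * (1 + p_confidence_interval);
    return c;
  end department_overview;
end department_api;
/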
If we restrict ourselves to a pure data approach, then the need for parameters diminishes.
Having stated that, I also want to emphasize that there are situations|examples where procedural thinking is the better approach to a solution.
Regards
Sven
-- formatting work in progress --
-
As already commented, just because something is already "solvable" doesn't remove the benefit of being able to solve it more easily...
Sure, I can use SQL to work out the median with this query:
SQL> select deptno, avg(distinct sal) median
2 from
3 (select cp1.deptno, cp1.sal
4 from emp cp1, emp cp2
5 where cp1.deptno = cp2.deptno
6 group by cp1.deptno, cp1.sal
7 having sum(decode(cp1.sal, cp2.sal, 1, 0)) >=
8 abs(sum(sign(cp1.sal - cp2.sal))))
9 group by deptno;
but I generally prefer using this one :-)
SQL> select
2 deptno,
3 median(sal)
4 from emp
5 group by deptno;
So the fact that I can use pipelined functions, or context variables, or global temp tables, or just rely on the optimizer to sort things out (whilst each having their own merits) doesn't strike down the merit of parameterised views.
The biggest problem I find with using pipelined functions is the optimiser cannot (naturally) use any stats based on one. You have to tell it.
So proper parameterised views would (assumedly / hopefully) overcome this.
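For other readers, a minimal sketch of the usual "you have to tell it" workaround, assuming a hypothetical pipelined function somepackage.somefunction (the CARDINALITY hint is undocumented, but commonly used for exactly this):
-- without the hint the optimizer guesses a default row count for the table function;
-- here we tell it to assume roughly 500 rows instead
select /*+ cardinality(t 500) */ *
from   table(somepackage.somefunction(:some_parameter)) t;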
-
FatMartinR wrote:The biggest problem I find with using pipelined functions is the optimiser cannot (naturally) use any stats. based on one. You have to tell it.So proper parameterised views would (assumedly / hopefully) overcome this.
To me, "you have to tell it" means possibly implementing the ODCIStatistics interface (for other readers).
Even then, the "calculated statistics" have to be based on "static values".
Sven:
A generalized problem where Temporary Table based solution would not work is if the "parameters" are rows from another table
select *
from T, f( T.a)
(This is more for the understanding of the problem as a whole)
As I have stated before:
I am expecting "Parameterized Views" to be implemented as if they were an "SQL Macro".
The largest problem I see for Oracle:
Are Parameterized Views part of the SQL Standard?
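To make the "SQL Macro" expectation concrete, here is a minimal sketch of what a macro-style parameterized view over the earlier v_department_overview example could look like, assuming a database release that supports table SQL macros (the function name is made up for illustration):
create or replace function department_overview_pv(p_department_id in number)
  return varchar2 sql_macro(table)
is
begin
  -- the returned text is expanded into the calling query at parse time,
  -- so the optimizer sees it like an ordinary view with a bound filter
  return q'[select *
            from   v_department_overview
            where  department_id = p_department_id]';
end department_overview_pv;
/
-- usage:
-- select * from department_overview_pv(:departmentID);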
-
Sven W. wrote:From a pure mathematical approach, I would state that ALL cases where you want a parameterized view can be expressed by using a (temp) table.
Of course. However that limits the usefulness of a view as an interface, as well as going against the declarative nature of SQL, if consumers first have to add rows to a temp table before issuing a query. There are all kinds of situations where that is not practical.
-
Mike Kutz wrote:...Sven:A generalized problem where Temporary Table based solution would not work is if the "parameters" are rows from another tableselect *from T, f( T.a)(This is more for the understanding of the problem as a whole)As I have stated before:I am expecting that "Parameterized Views" to be implemented as if it was an "SQL Macro".The largest problem I see for Oracle:Are Parameterized Views part of the SQL Standard?
If the "parameters" are from another table, then simply use this other table in the select. In that case I do not see a problem at all.
I'm not quite sure if I understand the "sql macro" thing correctly. I thought I did and thought the comparison was a bit weird, but maybe I misunderstand your suggestion.
Did you check the new analytic views? They do something very similar to what you suggested. The aggregation steps are hidden in the view definition, and when you apply a filter, the appropriate aggregation level is used to provide the data.
example:
SELECT time_hier.member_name as TIME, sales, sales_prior_period
FROM sales_av HIERARCHIES (time_hier)
WHERE time_hier.level_name IN ('YEAR', 'QUARTER')
ORDER BY time_hier.hier_order;
-
Sven W. wrote:Mike Kutz wrote:...Sven:A generalized problem where Temporary Table based solution would not work is if the "parameters" are rows from another tableselect *from T, f( T.a)(This is more for the understanding of the problem as a whole)As I have stated before:I am expecting that "Parameterized Views" to be implemented as if it was an "SQL Macro".The largest problem I see for Oracle:Are Parameterized Views part of the SQL Standard?If the "parameters" are from another table, then simply use this other table in the select. In that case I do not see a probem at all.
And this is very valid. You'll just need to CREATE VIEW for every "source table" that you use. There is absolutely no problem in that.
The problems occur with maintaining the code while you are on vacation and somebody needs a minor change in the logic of the CC [Complex Code] ASAP.
Example problems that can occur:
- your "backup developer" forgot to change one or two of those views.
- Some Java/.Net developer decided to hard-code a SELECT statement based on your VIEW and didn't tell you about it.
This is where having a "single source" for the CC matters. This is the problem that a Parameterized View is intended to solve (IMHO)
MK
-
Another example I came up against recently is a schema where everything is versioned by business date range. (Also by valid as-at range, but I'll spare you the full horror.) As they are defined as ranges rather than simple "business date" columns, you have to know what date you are querying for in order to join anything. Now we have been given the task of writing a set of views for a data extraction interface. Options are:
- Expose all of the date ranges for the source tables and let users add their own filters.
- Define an application context (requiring a procedural step prior to querying).
- Populate a temp table and join to that within the views (requiring a procedural step prior to querying).
- Have the views join to an existing system table (limiting all queries to the system's "current" business date).
- Replace the views with pipelined functions. As a PL/SQL guy I like this, though users or applications would have to use queries along the lines of "select x, y, z from table(somepackage.somefunction(date '2016-05-01'))" instead of the ordinary views they may be expecting; I expect many third-party products will choke on this syntax, and users lose the ability to browse a list of available views. (I suppose third-party tools would also take a while to support a new parameterised view syntax, but I expect they would manage it eventually.) A minimal package sketch of this option follows at the end of this comment.
Possibly native SQL support for time series data is what we really need here, but that's for a whole other Idea.
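As mentioned in the list above, here is a minimal package sketch of the pipelined-function option, assuming a hypothetical source table product_versions with business_date_from/business_date_to range columns (all names are made up for illustration):
create or replace package extract_api is
  type product_row_t is record (
    product_id   number,
    product_name varchar2(100),
    list_price   number);
  type product_tab_t is table of product_row_t;

  function products_asat(p_business_date in date) return product_tab_t pipelined;
end extract_api;
/
create or replace package body extract_api is
  function products_asat(p_business_date in date) return product_tab_t pipelined
  is
    l_row product_row_t;
  begin
    -- return only the version rows whose business date range covers the requested date
    for r in (select product_id, product_name, list_price
              from   product_versions
              where  p_business_date >= business_date_from
              and    p_business_date <  business_date_to)
    loop
      l_row.product_id   := r.product_id;
      l_row.product_name := r.product_name;
      l_row.list_price   := r.list_price;
      pipe row (l_row);
    end loop;
    return;
  end products_asat;
end extract_api;
/
-- usage, instead of an ordinary view:
-- select * from table(extract_api.products_asat(date '2016-05-01'));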
Isn't that exactly what Temporal Validity is addressing?
There you can specify temporal validity in (native) SQL on the SELECT statement:
/* Show rows that are in a specified time period */
SELECT employee_id
FROM employees_temp
AS OF PERIOD FOR emp_track_time TIMESTAMP '2012-12-01 12:00:01 Europe/Paris';
Or even just set a temporal timestamp for all SQLs for a given session:
The following PL/SQL procedure sets the valid time visibility as of the given time.
SQL> EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time ('ASOF', '31-DEC-12 12.00.01 PM');
The following PL/SQL procedure sets the visibility of temporal data to currently valid data within the valid time period at the session level.
SQL> EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('CURRENT');
The following procedure sets the visibility of temporal data to the full table, which is the default temporal table visibility.
SQL> EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('ALL');
-
Gerald Venzl-Oracle wrote:Isn't that exactly what Temporal Validity is addressing?
If there is a way to define a period for a view, sure.
-
Very much in favour. Syntax like PL/SQL-cursors preferred. Main argument for me is the optimizer which often has unaccountable troubles with push_pred to where I want it in complex queries (especially with nested views). Secondary is the need to expose a column just for filtering. Pre-setup (temp-table or context) is just iffy.
-
Maybe a helpful intermediate step would be to enable the usage of (parameterized) PL/SQL cursors as SQL query_table_expressions?
create or replace package pkg is
cursor c1 (...
end;
select ... from pkg.c1(...)