Again, it is not an issue of whether to use a view or procedure. It is an issue of how to use Oracle. In essence, it is how SQL objects (like a view) need to be used - directly, or indirectly via an abstraction interface.
It is also not an issue of whether your development toolset can generate SQL for you. If anything, that is a disadvantage. SQL is not implemented the same across RDBMS products. Oracle SQL differs from MySQL SQL, which differs from Sybase SQL, and so on.
Which means your tools are using the lowest common denominator for SQL. That is a problem, as it makes them able to do only the basics and nothing more. For example, can your tool create a SQL statement to determine the shortest path from a table whose rows identify nodes in a network? Guaranteed not. Which means your approach would be pretty much a hack of pulling data from the server into the client and doing the shortest-path calculation there - which would not be a performant or scalable approach.
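For what it's worth, Oracle itself can express this kind of problem with recursive subquery factoring. A rough sketch (the table and column names are assumptions, and the cycle handling is deliberately crude):

```sql
-- Assumed EDGES(src_node, dst_node, cost) table describing the network.
-- Enumerates paths from node 1 and keeps the cheapest cost per destination.
WITH paths (dst_node, total_cost, path) AS (
  SELECT dst_node, cost, '1->' || dst_node
  FROM   edges
  WHERE  src_node = 1
  UNION ALL
  SELECT e.dst_node, p.total_cost + e.cost, p.path || '->' || e.dst_node
  FROM   paths p
         JOIN edges e ON e.src_node = p.dst_node
  WHERE  INSTR(p.path || '->', '->' || e.dst_node || '->') = 0  -- crude cycle guard
)
SELECT   dst_node, MIN(total_cost) AS shortest_cost
FROM     paths
GROUP BY dst_node;
```

No lowest-common-denominator query generator will produce something like this - which is exactly the point.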
Relying on developer tools and query wizards to generate SQL for your application development? That is just plain wrong and very silly.
SQL is an incredibly powerful and flexible language. A few lines of SQL code can do what a large program unit does in your client programming language. And it can do it faster and better than what you can do, irrespective of how good a programmer you are in that client language.
And this is part of the reason why SQL development should be done by developers that can design and write proper SQL code - and wrap that into a procedural interface for client developers to use (including themselves).
As for abstracting the client from database changes via a procedural interface: you are missing the point by referring only to changes to the SQL projection.
There are numerous reasons. Performance can be addressed by changing the execution plan via hints in the SQL. No client code changes needed.
A data model change introducing new relationships - the SQL projection stays the same but the table joins and predicates now differ. No client code changes needed.
The SQL language is also not static. New features are a fact. Such new features (e.g. using analytical SQL, XML SQL, modelling SQL or recursive SQL) can be used and the existing SQL completely rewritten to be significantly faster and more capable. Keep the SQL projection the same and no client code changes are needed.
Debugging, auditing and instrumentation. As all client app access is via a well-defined PL/SQL interface, it is easy to add instrumentation, debugging and the like to the interface. Switch it on, and client app calls to the interface can be recorded together with performance statistics, filter criteria and so on.
There is also another important factor - client programming consistency. You can have 20 different applications using the same database. Via a PL/SQL interface, all 20 will be using the SAME logic, SAME data model and SAME SQL. No issue of one application doing things differently because of client developers not understanding SQL, the data model or database. No user seeing different results for the same base question in different applications.
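As a minimal sketch of such a PL/SQL interface (all object names here are hypothetical, using the classic EMP/DEPT tables for illustration), the client-facing package might look like:

```sql
-- Hypothetical package spec: the only database object client apps see.
CREATE OR REPLACE PACKAGE emp_api AS
  -- Returns employees for one department. The projection stays stable
  -- even if the underlying tables, joins or hints later change.
  PROCEDURE get_employees(
    p_deptno  IN  number,
    p_result  OUT sys_refcursor
  );
END emp_api;
/

CREATE OR REPLACE PACKAGE BODY emp_api AS
  PROCEDURE get_employees(
    p_deptno  IN  number,
    p_result  OUT sys_refcursor
  ) AS
  BEGIN
    OPEN p_result FOR
      SELECT e.empno, e.ename, d.dname
      FROM   emp e
             JOIN dept d ON d.deptno = e.deptno
      WHERE  e.deptno = p_deptno;
  END get_employees;
END emp_api;
/
```

All 20 of those applications call emp_api.get_employees - same logic, same data model, same SQL.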
This approach has nothing to do with project size. Nothing to do with separation of DBA from developers or whatever (what does this btw mean? - the DBA does not design and write PL/SQL interfaces for client applications).
This is about the most robust, flexible and accurate interface into the database. Something that I have used and seen being used - from small one-man development teams to high-end commercial applications, in database products ranging from SQL-Server and Sybase to Oracle, as far back as the mid 90's.
Your scepticism in this regard is unwarranted - and perhaps rooted in ignorance.
I didn't really expect you to agree, but most of your reply rests on a reductio ad absurdum argument, i.e. that if I suggest that I allow a client program to write the SQL to filter or order a view that I am suggesting that there is never a need to write views using proprietary extensions to SQL or write stored procedures. I'm not.
Note that in the first place we were talking about using views rather than tables, and you can build as many of those bits of clever SQL as you like into the View.
Of your reasons for preferring an SP over a view, I see the debugging, auditing and instrumentation reason as valid.
As for your data model change: I already suggested that I find it unlikely that the same reason that forces a change in the data model doesn't also have an impact on the client interface.
Take a concrete example. Say a doctor's office has a list of appointments. There's a view so you can see doctor names and patient names. The client program shows the list, allows you to sort by time, doctor or patient name, limit by doctor, by time range, etc.
One option is: I give my program information about the columns in the view and have it create select statements with the required WHERE and ORDER BY clauses, or I could take your approach and have it call a stored procedure which returns the required subset of the view, which might look something like:
getapptsubset(thelist OUT REF CURSOR, Limittype IN number, LimitValue IN varchar2, orderspec IN number)
and in that stored procedure I then either use dynamic SQL to put together the query, or have a bunch of CASE statements to select from available statements. And that's before I have even thought about there being two possible filter variables, or worried about whether it is sensible to pass all the limits as a character string - and I've restricted the number of ways the data can be ordered. And maybe I also needed a couple of extra parameters, because everything would be limited to a particular doctor's office and a particular date.
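Purely for illustration, such a procedure might end up looking like the sketch below (the APPOINTMENTS_V view, its column names, and the numeric filter/order codes are all assumptions):

```sql
-- Hypothetical sketch of the procedure described above, using dynamic SQL.
CREATE OR REPLACE PROCEDURE getapptsubset(
  thelist    OUT sys_refcursor,
  limittype  IN  number,     -- 1 = limit by doctor, 2 = limit by patient
  limitvalue IN  varchar2,
  orderspec  IN  number      -- 1 = time, 2 = doctor, anything else = patient
) AS
  v_sql varchar2(4000) := 'SELECT * FROM appointments_v WHERE ';
BEGIN
  v_sql := v_sql
    || CASE limittype
         WHEN 1 THEN 'doctor_name = :b1'
         WHEN 2 THEN 'patient_name = :b1'
         -- any other limittype silently produces broken SQL here,
         -- which is part of what makes this approach ugly
       END
    || ' ORDER BY '
    || CASE orderspec
         WHEN 1 THEN 'appt_time'
         WHEN 2 THEN 'doctor_name'
         ELSE        'patient_name'
       END;
  OPEN thelist FOR v_sql USING limitvalue;
END getapptsubset;
/
```

Every new filter or sort option means another branch in the CASE expressions - which is the extra work and the restrictions being complained about.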
I've done a bunch of extra work, I've got an ugly stored procedure, and I've imposed some probably unwanted restrictions on how people can examine the information. I think I would have to have a reason to go that route. Maybe it would be auditing or something, but without that additional reason I can't see why I would do it.
JonWat wrote:
One option is: I give my program information about the columns in the view and have it create select statements with the required WHERE and ORDER BY clauses, or I could take your approach and have it call a stored procedure which returns the required subset of the view

Jon, you are still going to write code either way.
If you are a decent programmer, you would modularise your code (which I'm sure you do :-) ) e.g. creating a procedure/function/method in your client programming language to which you can pass parameters (e.g. for sort order) and which returns the required data result.
All I am suggesting is to move that code into a PL/SQL stored procedure (or even better, a package).
This simply moves the logic and code for that functionality from the client language to the server language.
And the reasons for wanting to do that are the ones I've already listed - from re-usability of that code by other client apps and report writing tools, to abstracting technical complexities of the database (like performance tuning) from the client code.
The approach is not complex and not difficult. In the days of Visual Basic, PowerBuilder and Delphi (prior to J2EE and .Net), doing SQL-Server and Oracle client-server development, many a VB/Delphi programmer I dealt with was also the person who wrote the T-SQL or PL/SQL procedure in the database. And I cannot ever recall a developer saying that this was not a good idea. It made their client application development a lot easier - and often support actions only needed the T-SQL or PL/SQL procedure on the server side to change. They could even debug what the user was doing via the client by having the server procedures record their actions and the parameters received from the client app.
The same reasons and advantages for this approach, are still relevant and still apply today.
Whether to write a view or a stored procedure depends on your application requirements:
Case 1: If you want to perform operations on the data fetched by the view in the front end, better go with a stored procedure and do the data operations in the db itself.
Case 2: If your application is a pure OLTP application where every millisecond counts, and the view is called several times per second, better use just the view in this case,
because if you use a stored proc there will be context switches between the PL/SQL engine and the SQL engine, which will consume a little bit of time.
Case 3: If you just want to execute the view and display the result set in the front end, whether the result set is large or small, and the view is called only a few times
per hour, you can go with either a proc or a simple view, depending on the front-end developer's comfort.
In case 2, the transit of TCP packets between the client and server processes has a greater impact than the context switch from the PL/SQL engine to the SQL engine. So I would not look at the context switch specifically as a performance issue to address.
The main performance benefits here would be for the client to use bulk binding (which also supports Oracle PL/SQL calls) and a reusable cursor handle (for the PL/SQL call). In code I wrote last year (collecting real-time network data and inserting it into Oracle), there was a negligible difference in performance due to the context switch - and using a PL/SQL procedure allowed logic to be moved into the stored proc that was a lot more expensive to run on the client (as it required using SQL cursors too, and resulted in having to ship additional data to the client for validation prior to the insert).
The TCP latency issue however remains IMO the major factor - as s/w wise, there is very little you can do in your client and server code to address that. If another process is using that same network segment to scp/sftp a large file, impacting bandwidth and increasing latency - nothing your code can do about that. And if there's one fact about networks that is always true, it is that network utilisation is always increasing as more clients and servers use that network, and as more and larger data volumes are exchanged on that network. Networks do not get faster - they get slower. Until the next major (and very expensive) network infrastructure upgrade, where, right from the start, network utilisation simply keeps on growing as before.
Even if we use a stored proc, will we still have the same transit of TCP packets between client and server processes?
If context switches are not an issue, can you please let us know why Oracle introduced bulk binding in PL/SQL?
user9113644 wrote:
Even if we use a stored proc, will we still have the same transit of TCP packets between client and server processes?

Yes. The client typically needs to pass parameter data - be that for SQL code or PL/SQL code.
Very simplistic example. Client calling a SQL insert directly using the following cursor:

insert into footab values( :1, :2, :3, :4, :5 )

The client needs to pass 5 data values to the cursor (on the server) as input data. The cursor processes this data - in this case, performing an insert.
If the insert is done via a PL/SQL proc, the client would need to call that proc:

begin FooTabInsert( :1, :2, :3, :4, :5 ); end;

Again, the client needs to pass 5 data values as input data. The cursor process is different (it will execute a PL/SQL proc) - but from the client side, there is no difference.
The client still creates a cursor with 5 bind variables. When executing that cursor, 5 data values still need to be passed as bind values to that cursor.
From a networking perspective, no difference either. 5 data values need to be transmitted via TCP to the server.
What does make a difference is that often the SQL statement is pretty large (especially complex SQL select statements). In comparison, calling a stored proc usually requires less code to be sent to the server to create the cursor.
What also makes a difference is array binding. So instead of 10 calls sending 5 values each time, a single bulk call with an array of 10x5 values. Oracle will then execute that cursor 10 times, with each execution getting the next 5 values from the array that the client passed.
If context switches are not an issue, can you please let us know why Oracle introduced bulk binding in PL/SQL?

I did not say it was not an issue. I said that in this specific case, the time for a context switch is far less than the transit time for the data from the client to the server. Which is why array binding by the client is more important than reducing context switching.
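Bulk binding inside PL/SQL addresses exactly that context-switch cost. A minimal sketch (assuming a FOOTAB table with an ID column):

```sql
-- One FORALL statement sends the whole collection to the SQL engine
-- in a single context switch, instead of one switch per row.
DECLARE
  TYPE t_ids IS TABLE OF number;
  v_ids t_ids := t_ids(1, 2, 3, 4, 5);
BEGIN
  FORALL i IN 1 .. v_ids.COUNT
    INSERT INTO footab (id) VALUES (v_ids(i));
  COMMIT;
END;
/
```

Same principle as client-side array binding - batch the data crossing an expensive boundary, whether that boundary is TCP or the PL/SQL-to-SQL engine switch.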
Also, instead of the client having to call 3 cursors to achieve the desired result, those 3 SQL statements can be incorporated into a single PL/SQL procedure. This means the client now needs to make a single call (and send data once across TCP) - with the PL/SQL proc tasked to execute those 3 required SQL statements in succession.
Again, in this case the context switching is far less expensive than dealing with multiple calls and data sends across TCP.
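To put rough numbers on the round-trip argument above, here is a toy cost model (the latency figures are pure assumptions for illustration, not measurements):

```python
# Toy cost model: assumed, illustrative numbers only.
ROUND_TRIP_MS = 0.500      # assumed client<->server network round trip
CONTEXT_SWITCH_MS = 0.005  # assumed PL/SQL <-> SQL engine context switch

def separate_calls(n_statements: int) -> float:
    """Client sends each SQL statement as its own round trip."""
    return n_statements * ROUND_TRIP_MS

def single_proc_call(n_statements: int) -> float:
    """Client makes one call to a PL/SQL proc, which runs all the
    statements, paying one context switch per embedded statement."""
    return ROUND_TRIP_MS + n_statements * CONTEXT_SWITCH_MS

print(f"3 separate calls: {separate_calls(3):.3f} ms")   # 1.500 ms
print(f"1 proc call:      {single_proc_call(3):.3f} ms") # 0.515 ms
```

The point is not the specific numbers - it is that the fixed per-call network cost dominates the per-statement context-switch cost.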
Remember that this is not an issue inside PL/SQL itself. A PL/SQL fetch and bind ships data across memory boundaries inside the same server process. There is no network latency issue.
In typical client-server, a fetch or bind requires the client to ship the data across process boundaries and machine boundaries. Context switching on the server side is far less of a performance factor in this case as the bottlenecks exist elsewhere. (not to say do not reduce context switching in this case - but do not expect the type of performance gains achieved where TCP is not a factor)
Thanks for your reply.
So if it is a pure OLTP application, then we need to consider the option of going for a simple view, as the data transfer process between client and server is the same for both a procedure and a view.
There are a few basic reasons for using a view.
You can take a very big and complex SQL and turn it into a view. So instead of the client sending a big massive SQL statement to the server, it sends a much smaller statement using the view. So a view can be used to improve network performance by decreasing the size of the client SQL sent to the server to execute.
A view can hide data model complexities. It can do the tricky joins and even some data transformation in the SQL projection. So a view can decrease complexity by providing a level of abstraction.
A view can be used to modularise SQL code. So instead of repeating the same large chunk of SQL logic in different SQL statements, the chunk of logic can be turned into a view and the view used instead.
Views can be used to implement a security layer. So that when you select (as employee) from the EMP table view, you cannot see salaries. Or only employees in your branch or department. But when you select from the same view as a manager, you can see salaries and all employees.
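A single view can demonstrate several of these points at once. A hypothetical sketch (the EMP/DEPT tables are the classic demo schema; the APP_CTX application context and its ROLE attribute are assumptions):

```sql
-- Abstraction: hides the join. Security: hides salaries from non-managers.
CREATE OR REPLACE VIEW emp_v AS
SELECT e.empno,
       e.ename,
       d.dname,
       CASE
         WHEN SYS_CONTEXT('APP_CTX', 'ROLE') = 'MANAGER' THEN e.sal
         ELSE NULL   -- employees see no salary data
       END AS sal
FROM   emp e
       JOIN dept d ON d.deptno = e.deptno;
```

A client selecting from emp_v never sees the join, and sees salaries only when the session context says it may.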
So there are a number of different reasons for using a view. However, a view has very little to do with "passing data between a client and server". It does not make the data transfer less. A view SQL select with 5 bind variables is not better than a table SQL select with the same 5 bind variables. The bind process is the same. The bind process will be equally fast or equally slow.
The only "data transfer" impact that may exist is that the view SQL select will usually be smaller than the table SQL select. And this impact will only be significant in terms of performance when the client runs many SQLs per minute and transfers a lot of SQL select statement text across the network to the db server.
A view can hide data model complexities. It can do the tricky joins and even some data transformation in the SQL projection. So a view can decrease complexity by providing a level of abstraction

Our applications provide data in-house to a service bus, mostly via views. Some are performance-optimized for returning multiple rows, others for a single row (don't call them without the proper where clause - you will not be able to await the first row to return, and will most likely get an out-of-memory error).
We (the db developers) can trust the Java developers to use the views only for the appointed purpose, to read data only from the views (no other permissions granted), and not to join views. It's like a contract: they tell us what they need and they get high-performance SQL. The only freedom is that they can choose the ordering; even the allowed where clauses are agreed upon. They know that we get very angry if they mess things up and bring down the production database ;-)
Of course this can be achieved with procedures too, though it would be more work on our side to provide flexibility, e.g. in ordering the results.
Nevertheless this is not the way to go if you provide a public API to your application. You don't know who will access the views, maybe even with an OR-Mapper generating horrible SQLs on top of your carefully designed views. For a public API I would always prefer procedures.
It depends on your requirement.
A view is enough if you just want to display the data as it is from multiple tables.
Stored procedures have a lot of features, like the below:
1) you can use control statements like IF, WHILE, FOR, etc.
2) exception handlers (runtime error handling)
3) passing parameters
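A small sketch showing all three features in one procedure (FOOTAB and its columns are hypothetical):

```sql
CREATE OR REPLACE PROCEDURE load_foo(
  p_id   IN number,           -- 3) passing parameters
  p_name IN varchar2
) AS
BEGIN
  IF p_name IS NULL THEN      -- 1) control statements
    RAISE_APPLICATION_ERROR(-20001, 'name is required');
  END IF;

  FOR i IN 1 .. 3 LOOP        -- 1) looping
    INSERT INTO footab (id, name, attempt)
    VALUES (p_id, p_name, i);
  END LOOP;
EXCEPTION                     -- 2) exception handler
  WHEN DUP_VAL_ON_INDEX THEN
    ROLLBACK;
    RAISE;
END load_foo;
/
```

None of this logic is possible in a plain view, which is exactly the trade-off being discussed.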