There are a variety of options depending on your specific architecture.
- Your application can use Continuous Query Notification to be alerted when the results of a query change.
- You can use Oracle Advanced Queuing plus triggers on the table to queue messages that your application then processes.
- You can use Oracle Streams to automatically capture and propagate changes to your application. This is much like AQ without the trigger, but the message payload is often a bit more involved, and/or you need to add capture or propagation rules to configure what sorts of messages you want.
- You could also build a solution on top of Oracle Change Data Capture, but that would seem like a poor choice for this specific use case.
- Depending on the frequency, a trigger that causes a job to run which then calls a shell script might be plausible, though that would typically be rather inelegant. I'm guessing there are quite a few incidents in New York every day, so I'd be pretty hesitant about that sort of path, but you'd need to do the analysis.
If you're not limited to just features of the database but can leverage technologies that are part of Oracle's middleware stack, you have even more options.
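The Continuous Query Notification option mentioned above registers interest in a query so the database calls you back when its result set changes. A minimal sketch, assuming a table `incidents` with columns `incident_id` and `incident_close_datetime` and a server-side callback procedure `incident_change_cb` that you would have to write yourself (all names here are assumptions, not from the thread):

```
-- Sketch only: register a change notification on the "open incidents" query.
-- Requires the CHANGE NOTIFICATION privilege; QOS_QUERY (query-result-change
-- granularity) needs 11g; on 10.2 you get object-change granularity instead.
DECLARE
  l_reg    CQ_NOTIFICATION$_REG_INFO;
  l_regid  NUMBER;
  l_cursor SYS_REFCURSOR;
BEGIN
  l_reg := CQ_NOTIFICATION$_REG_INFO(
             'incident_change_cb',            -- PL/SQL callback to invoke
             DBMS_CQ_NOTIFICATION.QOS_QUERY,  -- notify on result-set change
             0,                               -- no registration timeout
             0,                               -- all DML operations
             0);                              -- no transaction lag
  l_regid := DBMS_CQ_NOTIFICATION.NEW_REG_START(l_reg);

  -- Any query opened between NEW_REG_START and REG_END is registered.
  OPEN l_cursor FOR
    SELECT incident_id
      FROM incidents
     WHERE incident_close_datetime IS NULL;
  CLOSE l_cursor;

  DBMS_CQ_NOTIFICATION.REG_END;
END;
/
```

The callback fires after commit, so (unlike a plain trigger) rolled-back changes never generate a notification.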
Advanced Queuing comes to mind
Another way (without AQ) is to have the database write something to an OS file (an external file) and have a script or process poll for the file's existence; when it appears, notify the vendor that something happened.
I don't know that I would want this in a trigger. The trigger will fire before the commit. Should the vendor be alerted if you roll back the change?
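One way around the rollback concern raised above is that an AQ enqueue performed inside a trigger joins the triggering transaction, so a ROLLBACK discards the message along with the row change. A sketch, assuming a JMS-text queue named `incident_evt_q` already exists (queue name and table/column names are assumptions):

```
-- Sketch only: enqueue a message as part of the inserting transaction.
-- If the insert is rolled back, the message is rolled back with it,
-- so the vendor is never alerted about uncommitted changes.
CREATE OR REPLACE TRIGGER incidents_air
AFTER INSERT ON incidents
FOR EACH ROW
DECLARE
  l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'incident_evt_q',
    enqueue_options    => l_opts,
    message_properties => l_props,
    payload            => SYS.AQ$_JMS_TEXT_MESSAGE.CONSTRUCT(
                            'NEW:' || :NEW.incident_id),
    msgid              => l_msgid);
END;
/
```

The consumer only sees the message after the transaction commits, which is exactly the behavior you want here.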
I am not sure AQ is the best option for this type of requirement. It might depend on how the vendor accesses your system.
Via a table row-level trigger you can
send email via UTL_SMTP (the email body could contain the data to be processed)
raise a DBMS_ALERT signal upon which a batch program (daemon) is waiting
call DBMS_SCHEDULER to execute a shell script that processes the target information (advantage of being on-demand, no daemon)
write a file via UTL_FILE that an OS shell script watches for
HTH -- Mark D Powell --
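The DBMS_ALERT option in Mark's list is probably the simplest to prototype. A sketch of both sides, assuming an alert named `NEW_INCIDENT` and the same hypothetical `incidents` table (all names are assumptions); note that DBMS_ALERT signals are transactional, so the waiting session is only woken after the insert commits:

```
-- Sketch only, trigger side: raise the alert (delivered on COMMIT).
CREATE OR REPLACE TRIGGER incidents_alert_air
AFTER INSERT ON incidents
FOR EACH ROW
BEGIN
  DBMS_ALERT.SIGNAL('NEW_INCIDENT', TO_CHAR(:NEW.incident_id));
END;
/

-- Sketch only, daemon side: a session that blocks until the alert fires.
DECLARE
  l_message VARCHAR2(1800);
  l_status  INTEGER;   -- 0 = alert received, 1 = timed out
BEGIN
  DBMS_ALERT.REGISTER('NEW_INCIDENT');
  DBMS_ALERT.WAITONE('NEW_INCIDENT', l_message, l_status, timeout => 300);
  IF l_status = 0 THEN
    DBMS_OUTPUT.PUT_LINE('Incident ' || l_message || ' opened');
  END IF;
  DBMS_ALERT.REMOVE('NEW_INCIDENT');
END;
/
```

The trade-off versus AQ is that alerts are not persistent: if the daemon is down when the signal is raised, the event is lost.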
Mark provided a good list of options. I do have one additional option...
Create a User Defined Metric in Enterprise Manager. Have the customer receive email notifications based on the alert. How you do this depends on your EM version.
I see a number of interesting solutions here. I have sent a note to the vendor to research DBMS_Alert and DBMS_Pipe for now. I personally will read up on Advanced Queueing and Continuous Query Notifications. I cannot select the "Correct Answer" until we have done some research and selected a solution.
Thank you everybody!
You might consider having an index on a column that goes from not null when open to null when closed. That way, it could be a very small index for them to traverse for open incidents. On the other hand, I have no idea if this would be counterproductive on Exadata.
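One common way to get that "small index of only open rows" without adding a maintained column is a function-based index, exploiting the fact that Oracle B-tree indexes exclude rows whose every indexed expression is NULL. A sketch (table and column names assumed from the thread; whether this helps on Exadata, as noted, is a separate question):

```
-- Sketch only: the index contains entries only for open incidents,
-- because the CASE expression is NULL for closed ones.
CREATE INDEX incidents_open_ix
  ON incidents (CASE WHEN incident_close_datetime IS NULL
                     THEN incident_id END);

-- The dashboard query must use the identical expression to use the index:
SELECT CASE WHEN incident_close_datetime IS NULL
            THEN incident_id END AS open_incident_id
  FROM incidents
 WHERE CASE WHEN incident_close_datetime IS NULL
            THEN incident_id END IS NOT NULL;
```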
I don't think anyone mentioned materialized views, which may be similar to what Exadata does.
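A materialized view of just the open incidents could give the vendor a small, pre-filtered object to query. A sketch, assuming the usual fast-refresh restrictions are met (table, column, and MV names are assumptions):

```
-- Sketch only: an MV log with rowid is required for fast refresh
-- of a single-table MV with a WHERE clause.
CREATE MATERIALIZED VIEW LOG ON incidents
  WITH ROWID (incident_id, incident_close_datetime)
  INCLUDING NEW VALUES;

-- Refreshed automatically as part of each committing transaction.
CREATE MATERIALIZED VIEW open_incidents_mv
  REFRESH FAST ON COMMIT
  AS SELECT ROWID AS row_id, incident_id, incident_close_datetime
       FROM incidents
      WHERE incident_close_datetime IS NULL;
```

Note that REFRESH FAST ON COMMIT adds work to every committing transaction on `incidents`, so it should be load-tested against the incident insert rate.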
When you say "Vendor" is that the person writing the dashboard?
I have a different take on what you ask about than the others: I don't believe you have provided (or may not know yet) the complete requirements.
We are attempting to create a dashboard of open incidents.
Ok - but that doesn't tell us ANYTHING about:
1. what the requirements of that dashboard are
2. who will use that dashboard
3. what will they use it for
4. what actions need to be taken based on the information provided.
We want the vendor to query the table of incidents stored in our Oracle instance (currently hosted on a Windows box but soon migrating to exadata).
Sounds simple enough so far!
They can run their query at some regular interval, but the most efficient method is for them to be informed when a new incident has been inserted into the incident table, or when an incident is closed (Incident_close_datetime transitions from "null" to "not null").
And NOW we come to the part I have a problem with. Just how do you know 'the most efficient method'? If you really know that 'the most efficient method' is to PUSH that info out then you must be basing that on information that you haven't provided to us.
You started off talking about a dashboard - why does information need to be 'pushed' to the dashboard? A user queries the dashboard and sees the info that was current at the time of the query. They click on a 'refresh' button if they want the info refreshed.
Why does that user need to be notified immediately as soon as certain new information arrives? If this was an application to dispatch units to a new fire then it makes sense that they need to be notified by the system (PUSH the info) rather than relying on the user to click 'refresh' (PULL the info).
Post info about the 4 issues I itemized above so we have a better idea just what is expected.
In this case, the department has engaged a "vendor" to provide a dashboard solution as part of a larger project. Management has tasked us to supply the necessary information from our dispatch system so their product is useful.
Thank you for asking for this detail. I will take your questions and prepare a thorough answer, then post it.
Just keep in mind that we post questions/issues like that for you to consider; not necessarily for you to answer in a public forum.
I was suggesting that any project plan needs to thoroughly document the actual requirements at that level of detail so that you can measure any solution against those requirements.