
I assert that OPA content (excluding the input controls) frequently changes independently of the rules. 

 

OPA project architects, please bear with me on this long blog post.  Follow along for a bit; it will be worth your time.

 

Let's look realistically at an example project included with OPA: the example Warranty project.

 

The example Warranty project screens would need to be enhanced with much more information content prior to production.  Examine all the Warranty project screens to see what I mean:

 

 

My point in going through all the above screens is that in many (most?) real-world OPA applications, questions and answers (input controls) are accompanied with content. If the application above were to be published and maintained, the project team would need to anticipate frequent changes to the content independent of the rules.

 

Consider the following guidance:

Intent

Separate the architectural concerns of policy and content, putting policy management in OPA and content management in a CMS.  Policy is realized in rules and attributes collected with input controls.  Content is realized in html paragraphs, images, videos, documents and such. 

Problem

Content is often changed separately from rules, and content may have a life-cycle separate from the rules. If content changes require republishing OPA policy rulesets, that introduces additional life-cycle management issues.  Further, content added directly inside interviews tends to "clutter" interview screens. Content placed directly in interview screens is not easily shared, managed, or searched outside of those screens.

Discussion

An example follows.

 

The problem is most often seen when important notes, textual descriptions, and graphical media types are coded directly into OPA interview screens.  There are even interviews where the interview goal is to direct the interviewee through content.   The content itself frequently changes, but the rules to get to the content do not change. 

 

By using a CMS and having the CMS content show up in-line in the interview, the content can follow a different lifecycle than the rules.  When a CMS is used, the only requirements of the OPA interview screens are the interview control settings to display the content.  Generally, an OPA control that retrieves CMS content via a REST API is utilized.

 

The interview control to retrieve CMS content is written once, and used everywhere in the interview that content is needed.  The interview control can be displayed or not displayed based on OPA visibility rules.

 

As an added benefit, any content media type available in the CMS becomes available in OPA. Additional OPA controls do not need to be written for OPA to handle video or other media types.

 

What's more, the content can be formatted in a WYSIWYG editor in the CMS.  That means no more need to put html into labels...

 

Structure

In demonstrations, we tackled CMS integration with an OPA label extension that displays content retrieved from a CMS REST API.  The interview control properties are set to either search, filter, or directly access content from the CMS.

 

The effect is that in the interview, labels are placeholders for content and take up very little interview design real estate.  That allows the interview focus to be on the various questions (inputs) and on the flow in general.  This has been seen to greatly simplify OPA interview screen development where a lot of varying content is involved.

 

Example

I put up a WordPress site (modern WordPress is a CMS, for those architects living under a log) with information regarding Warranty.  The business can change this information on the WordPress site with a WYSIWYG editor, independently of OPA.  Changes appear in real time in OPA.

 

For OPA, I added a control extension that uses the WordPress API to retrieve content.  Content can be directly retrieved, retrieved via search, or retrieved via filters (keywords).
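For illustration, a minimal version of such a label extension might look like the sketch below.  The property names (wpBase, wpPostId, wpSearch) and the WPContent type are invented for this sketch, not the scheme from my actual control; the endpoints are the standard WordPress /wp-json/wp/v2/posts routes.

```javascript
// Sketch only: a label extension that pulls content from the WordPress
// REST API.  Property names here are invented for illustration.
function buildWpUrl(base, props) {
  if (props.wpPostId) {        // direct retrieval by post id
    return base + "/wp-json/wp/v2/posts/" + props.wpPostId;
  }
  if (props.wpSearch) {        // retrieval via search
    return base + "/wp-json/wp/v2/posts?search=" + encodeURIComponent(props.wpSearch);
  }
  return base + "/wp-json/wp/v2/posts";   // fall back to latest posts
}

// Guarded so the sketch can also be loaded outside an interview.
if (typeof OraclePolicyAutomation !== "undefined") {
  OraclePolicyAutomation.AddExtension({
    customLabel: function(control, interview) {
      if (control.getProperty("type") == "WPContent") {
        return {
          mount: function(el) {
            var url = buildWpUrl(control.getProperty("wpBase"), {
              wpPostId: control.getProperty("wpPostId"),
              wpSearch: control.getProperty("wpSearch")
            });
            fetch(url).then(function(r) { return r.json(); })
              .then(function(data) {
                // search returns an array; direct retrieval returns one post
                var post = Array.isArray(data) ? data[0] : data;
                if (post) { el.innerHTML = post.content.rendered; }
              });
          }
        };
      }
    }
  });
}
```

Filtering by keyword works the same way via the WordPress tags/categories query parameters; the point is that the interview only carries a few properties, and the content lives in the CMS.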

 

In OPA, the new interview Welcome screen looks as follows (with label properties used for each item of WordPress content):

 

This produces the following interview screen after publishing on the web:

Notice that the red alert, the image, and everything retrieved from the CMS is independent of the OPA rules and OPA interview screen development.  The business has freedom to change the content separately from the rules and input controls.

 

Also notice that the actual OPA interview screen is nice, condensed, and easily read for interview flow.

 

See the huge difference a quick CMS integration provided, compared to the original OPA interview screen shown again here:

This post is short.

 

Based on something I saw today, I would like to point out that containers can be put to an "extra" use in OPA that is only obvious once explained.

 

We all know containers provide "layout" for vertical and horizontal orientation. See the following: Oracle Policy Automation Documentation Library

 

Did you realize?  Containers provide AND and OR visibility logic to allow multiple "show-if" conditions to apply...

 

If you have 2 attributes (the attribute a, the attribute b) and you only want to display controls depending on values from both, you don't need to write a combining rule like the following example:

Containers within containers are AND logic if each container has "show if" or "hide if" for a different attribute.

 

Containers beside containers are OR logic if each container has "show if" or "hide if" for a different attribute.

That technique can be combined for more complex visibility logic, and I would note that this generally also helps organize similar content together.  All content dependent on the first and second attributes now has reason to be co-located in the interview.

 

There is no need to create yet another top-level attribute with AND or OR logic just for visibility.  Those attributes clutter up your data model, in any case.

 

Less is more.  Hope this helps!

Paul Fowler

OPA Popups / Tooltips...

Posted by Paul Fowler Oct 28, 2018

Again, see my disclaimers in the last post.

 

I want to share something from a personal project of mine.  I have a similar need for this type of functionality at work, so I had reason to prototype with a project at home.  It takes a while to figure these tricks out, and I wish someone had posted this OPA code on the Internet.

 

Background: I play D&D with my family, and I have a need to convert D&D 4 encounters to D&D 5e encounters for a campaign that our family has been playing since 2007...

 

Monster/encounter conversion may seem simple, but it isn't, because D&D 5e comes with its own set of rules for monsters and encounters.  Some of the 5e rules are very involved and thus are not automated anywhere I can find for free on the Internet.  This is the reason manual conversions are so dang difficult.  (Really, I automated about 15 pages of 5e rules.  5e, btw, is considered simpler than 4...)

 

So, I wrote my own encounter ruleset for my family campaign, although I won't share it for fear of legal issues with Wizards of the Coast.  This project is just for our family use anyway...

 

BUT, one challenge that I had was reminding myself of all the implications of setting various attributes such as "Hit Die".  I didn't want to clutter up my interview screen with D&D notes...  What to do?

 

Popups:

 

I succeeded with popups in several ways, but I will show a simple method that allows the content of the popup to include attribute values...

 

First, the top of the interview screen is shown here.  Everything in blue has a popup that reminds me of what is possible and impacted per the 5e rules.  Further, the popups have OPA attributes embedded in them.

 

Now a demonstration screenshot of one of the Popups.  (Btw, this popup uses rules to compute the proper hit die per 5e rules when given creature size, constitution, and hp.  The "8d10 + 24" was created with OPA rules and is in an attribute.)

Kind of cool???  Click the blue text and get a popup!  Click the popup and it goes away.

 

In my development tab, it looks as follows:

Notice how I use properties on a label.  Also, note the label/popup has an attribute in it that can change depending on fields in the interview.  That means the text in the popup has to be dynamic... 

 

The javascript:

// Let's make a popup function... Here are the necessary properties:
//
// type : Popup
// title : <<Text such as "click me...">>
// id : <<an id for this popup>>
OraclePolicyAutomation.AddExtension({
   customLabel: function(control, interview) {
     if (control.getProperty("type") == "Popup" ) {
       var x;
       var t;
       return {
         mount : function(el){
           x = document.createElement("DIV");
           x.className = "popup";
           x.onclick = function() {
             var popup = document.getElementById(control.getProperty("id"));
             popup.classList.toggle("show");
           };
           x.innerHTML = control.getProperty("title");

           t = document.createElement("SPAN");
           t.className = "popuptext";
           t.id = control.getProperty("id");
           t.innerHTML = control.getCaption();

           x.appendChild(t);
           el.appendChild(x);
         },
         update : function(el){
           t.innerHTML = control.getCaption();
         }
       };
     }
   }
});

 

The CSS:

/* Popup container */
.popup {
   position: relative;
   display: inline-block;
   margin-left: 10px;
   margin-right: 10px;
   color: blue;
   cursor: pointer;
}

/* The actual popup (appears on top) */
.popup .popuptext {
   visibility: hidden;
   width: 600px;
   background-color: #555;
   color: #fff;
   text-align: left;
   border-radius: 6px;
   padding: 8px 0;
   position: absolute;
   z-index: 1;
   bottom: 125%;
   left: 50%;
   margin-left: -80px;
}

/* Popup arrow */
.popup .popuptext::after {
   content: "";
   position: absolute;
   top: 100%;
   left: 10%;
   margin-left: -5px;
   border-width: 5px;
   border-style: solid;
   border-color: #555 transparent transparent transparent;
}

/* Toggle this class when clicking on the popup container (hide and show the popup) */
.popup .show {
   visibility: visible;
   -webkit-animation: fadeIn 1s;
   animation: fadeIn 1s;
}

/* Add animation (fade in the popup) */
@-webkit-keyframes fadeIn {
  from {opacity: 0;}
  to {opacity: 1;}
}

@keyframes fadeIn {
  from {opacity: 0;}
  to {opacity: 1;}
}

 

Done.

 

PS:

 

Finally, I couldn't resist using BI Publisher...  (Notice the Hit Die from above populated next to the Hit Points...)  I haven't had time to write rules for special qualities or actions, so ignore those areas.

 

For anyone who wonders, all the coding for D&D 5e creature rules could be done in only two weekends. Try doing that with just Excel...

 

I hope you can make use of this how-to code for popups/tooltips in OPA.   Please let me know in the comments if you take any of this code and modify it for your own projects.

Disclaimers and Background

Yes, there are several examples of OPA controls in javascript in this post.  But my pure "dev days" have been replaced by policy discussions.

 

Disclaimer #1: Forgive my javascript coding if it isn't up to par.

Disclaimer #2: I expect to upgrade this blog post content in the future.

Disclaimer #3: As always, I am not an Oracle employee, so use advice in this blog at your own risk.

 

As an architect, I am well aware that for the end user, the interface is the system.

 

To have a good interface, you must control styling and control the controls themselves.  However, OPA is about policy, not styling.  Lucky for us, OPA interviews are so dang easy to create!  For a "human interface" channel, OPA interview screens are extremely quick and good for the average person.

 

However, I have seen OPA interviews moved to PHP (instead of running in OPA) just due to styling.  Really I have, and I am not just saying that.

 

I have noticed that the question every project asks on day 1 is "How do we apply our organization's custom CSS to those menus and labels?"

 

Historically, what hasn't been great for OPA presentation UX is the following process:

 

In the old days, OPA modelers presented some finished OPA interview screens in a web browser.  Then the presentation designers ran the interview, parsed the HTML using browser developer tools, and created the following (not-OPA-recommended) type of jQuery:

 

$(document).on("click", function(event){
    if(!($(event.target).is("div.arrow_box") || $(event.target).is(".list-input") || $(event.target).is(".ui-button-icon-primary") || $(event.target).is(".owd-input")))
        $('.arrow_box').addClass("my-control-hidden");
});

 

After that, the reverse-engineered CSS was applied.  OPA would be upgraded a few months or years later, and styling things would break.

 

So, what is the OPA recommended UX styling (as of 2018) and how do we give designers more control?  How do designers learn to design to it???

 

One of my proposed solutions: give your presentation team an OPA style playground project at the beginning of development.

 

May I suggest starting out with a playground OPA project for testing UI controls and styling?  I attached an example style playground.  Download the example and play with it; then hopefully your presentation team can avoid the whole "let's use PHP" discussion.

 

Here are the steps I followed to create my presentation style playground project.

 

Step 1:  Read (and share) the documentation

Bookmark the following after reading them and share them for reference:

 

1) the Web Interview Development Documentation for CSS/Javascript integration: Oracle Policy Automation Documentation Library

2) the Design Interviews Documentation for the in-program styling: Oracle Policy Automation Documentation Library

 

Note, I am on 18c.  In my experience, it isn't enough to just point designers at documentation; they need to be shown OPA.  Sit down with them for a few hours.

 

Step 2: Add the "custom files" directory to the project

Let OPA know you are going to use custom files.  Go to Interview->styles and click the "Custom Files" button.

 

Agree to the warning...

The directory will appear like magic.

 

Step 3: Add jQuery and other .js script libraries, etc.

(Be aware, there are limitations.  Don't expect a full framework to be able to be included.)

jQuery is fine. I like to put my own jQuery, popper, and tooltip .js files into the project, although I think OPA already comes with some of this. 

 

 

Step 4: Get the organization's color palette from your design team and apply it to the "styles"

If your design team is not picky, and you are not a design guy, go to a site like Color Safe - accessible web color combinations or like https://flatuicolors.com/ and pick colors that look good together.  Make a "best guess" and give yourself a basic screen.  Really, I just changed the navigation bar and called it a day.

 

Note: picking colors that are WCAG compliant is really frustrating, but think of how frustrating it is to have trouble seeing...  Keep at it and never let the colors be anything but WCAG compliant. Hopefully you can get a palette from your design team that already covers WCAG.
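If you want to sanity-check a palette yourself, the math is straightforward.  Here is a small stand-alone helper (my own sketch, nothing OPA-specific) based on the WCAG 2.0 relative-luminance formula; AA requires at least a 4.5:1 ratio for normal text:

```javascript
// Relative luminance of an sRGB color (components 0-255), per WCAG 2.0.
function luminance(r, g, b) {
  var a = [r, g, b].map(function(c) {
    c = c / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * a[0] + 0.7152 * a[1] + 0.0722 * a[2];
}

// Contrast ratio between two colors; 21:1 is the maximum (black on white).
function contrastRatio(rgb1, rgb2) {
  var l1 = luminance(rgb1[0], rgb1[1], rgb1[2]);
  var l2 = luminance(rgb2[0], rgb2[1], rgb2[2]);
  var lighter = Math.max(l1, l2), darker = Math.min(l1, l2);
  return (lighter + 0.05) / (darker + 0.05);
}
```

Run candidate foreground/background pairs through contrastRatio before committing them to the styles; it saves a lot of back-and-forth with the design team.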

 

Step 5: Figure out a "strategy" to style individual OPA controls, add your own .js and ask presentation guys for a .css file.

 

Strategy??? One possible "strategy" is to use control "properties" to identify a className for any "special control."  See the Oracle Policy Automation Documentation Library.  Once a className is identified, the presentation gurus have a field day with CSS!

 

The presentation team and the policy team agree on properties that the controls should have for certain behaviors.  For example, here are custom properties on a label that give it a className of interview if my javascript is included.

 

Here is an example javascript to add classNames to my controls for CSS usage:  Note that I use a broad brush and make it possible to add a class to basically any control.  I do wonder if there is a more concise way to write this in javascript, but this works.

 

Eventually the javascript and CSS files should be turned over to the proper teams after they understand what is happening.  Policy folk shouldn't worry about javascript / CSS in my opinion.

 

//
// First, I make it possible to use CSS styling on any basic type of control
// Just add 2 properties to the control in the OPA interview:
// type : CSS
// className : <<class name to use in the CSS>>
//
OraclePolicyAutomation.AddExtension({
   style: {
        question: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        label: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        textInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        textAreaInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        calendarInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        dropDownInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        filterDropDownInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        listInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        radioInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        checkboxInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        captchaInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        },
        signatureInput: function(control, interview){
             if (control.getProperty("type") == "CSS" ) {
                  return { className: control.getProperty("className") }
             }
        }
   }
});
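Since I wondered above whether there is a more concise way to write this: one option (a sketch, not tested against every OPA version) is to build the style object in a loop from a single shared handler, so there is only one copy of the CSS check:

```javascript
// One shared handler instead of twelve copies.  The list of control
// type names matches the handlers in the block above.
var cssHandler = function(control, interview) {
  if (control.getProperty("type") == "CSS") {
    return { className: control.getProperty("className") };
  }
};

var controlTypes = ["question", "label", "textInput", "textAreaInput",
  "calendarInput", "dropDownInput", "filterDropDownInput", "listInput",
  "radioInput", "checkboxInput", "captchaInput", "signatureInput"];

var style = {};
controlTypes.forEach(function(type) { style[type] = cssHandler; });

// Guarded so the sketch can also be loaded outside an interview.
if (typeof OraclePolicyAutomation !== "undefined") {
  OraclePolicyAutomation.AddExtension({ style: style });
}
```

The behavior should be identical; the longhand version above is just easier to tweak per control type if one of them ever needs different handling.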

 

 

Step 6: Add any known necessary custom controls to a project.

I "cheat".  I add the ability to turn a label into an html object and vice-versa early in my projects, even if it doesn't get used.  That provides flexibility.  Oracle, please don't crucify me...  As such, I can display videos inline, display pdf files, get around the html restrictions, etc...

 

After our designers / developers get a look at my code, they can then modify it accordingly.

 

These are all examples.  I will let you guys mull over the examples and change them as you see fit.

 

My first custom control example is a label that, when clicked, hides other labels of a specific class on the interview screen.  Why not use OPA hide/show functionality?  You could, and I recommend OPA logic as a default.  In this case, we have screen "instruction text" that is not really part of the interview but is presented in one or more labels.  The help text is not used for any other interview channel, and it contains html formatting, etc.  In other words, the help text is not part of the interview proper, and we just want to show/hide labels with the click of a label for instructional purposes on the screen.  tooltip.js is also good for this.

 

//
// Second, I need some way to show/hide sections...
// Just add 3 properties to a label in the OPA interview:
// type : Reveal
// target : <<class name of whatever we want to show/hide>>
// className : <<class name to use in the CSS>>
//
OraclePolicyAutomation.AddExtension({
   customLabel: function(control, interview) {
     if (control.getProperty("type") == "Reveal" ) {
       return {
         mount : function(el){
           var x = document.createElement("H2");
           var t = document.createTextNode(control.getCaption());
           x.className = control.getProperty("className");
           x.onclick = function() {
             $( "." + control.getProperty("target") ).toggle("slow");
           };
           x.appendChild(t);
           $(function(){
             el.appendChild(x);
             $("." + control.getProperty("className")).parent().parent().next('.opa-container-vertical').addClass(control.getProperty("target"));
           });
         }
       }
     }
  }
});


//
// Third, I need some way to show pdfs, word docs and video on screen...
// Just add 4-5 properties to a label in the OPA interview:
// type : Object
// data : <<url of the media>>
// className : <<class name to use in the CSS>>
// localResource : true | false
// mediaType : <<IANA Media Type>>
//
OraclePolicyAutomation.AddExtension({
   customLabel: function(control, interview) {
     if (control.getProperty("type") == "Object" ) {
       return {
         mount : function(el){
           var x = document.createElement("OBJECT");
           var tx = document.createTextNode("alt : ");
           var a = document.createElement("A");
           var ta = document.createTextNode(control.getCaption());
           x.className = control.getProperty("className");
           if (control.getProperty("localResource") == "true") {
             x.data = "${resources-root}" + control.getProperty("data");
           } else {
             x.data = control.getProperty("data");
           }
           a.href = control.getProperty("data");
           if (control.getProperty("mediaType")) {
             x.type = control.getProperty("mediaType");
           }
           a.appendChild(ta);
           x.appendChild(a);
           x.appendChild(tx);
           el.appendChild(x);
         }
       }
     }
  }
});


//
// Fourth, I need some way to manage images, funny enough...
// Just add 3-5 properties to a label in the OPA interview: (height and width both optional)
// type : imgCSS
// src : <<the source link>>
// localResource : true | false
// alt : <<alt text for image>>
// className : <<class name to use in the CSS>>
// width : <<optional pixels>>
// height : <<optional pixels>>
//
OraclePolicyAutomation.AddExtension({
   customLabel: function(control, interview) {
     if (control.getProperty("type") == "imgCSS" ) {
       return {
         mount : function(el){
           var x = document.createElement("IMG");
           x.className = control.getProperty("className");
           if (control.getProperty("width")) {
             x.width = control.getProperty("width");
           }
           if (control.getProperty("height")) {
             x.height = control.getProperty("height");
           }
           x.alt = control.getProperty("alt");
           if (control.getProperty("localResource") == "true") {
             x.src = "${resources-root}" + control.getProperty("src");
           } else {
             x.src = control.getProperty("src");
           }
           el.appendChild(x);
         }
       }
     }
  }
});


//
// Fifth, I need some way to show html that has been embedded inside of an OPA attribute...
// Oracle will not like this.  It gets around their general html restrictions...
// Just add 1-2 properties to a label in the OPA interview:
// type : Attribute
// className : <<optional class name to use in the CSS>>
//
OraclePolicyAutomation.AddExtension({
   customLabel: function(control, interview) {
     if (control.getProperty("type") == "Attribute" ) {
       return {
         mount : function(el){
           var x = document.createElement("SPAN");
           let fragmentFromString = function (strHTML) {
             return document.createRange().createContextualFragment(strHTML);
           };
           let s = fragmentFromString(htmlDecode(control.getCaption()));
           if (control.getProperty("className")) {
             x.className = control.getProperty("className");
           }
           x.appendChild(s);
           el.appendChild(x);
         }
       }
     }
  }
});


// Helper functions to decode and encode html as provided by OPA
//
function htmlDecode(value) {
   return $("<textarea/>").html(value).text();
}

function htmlEncode(value) {
   return $('<textarea/>').text(value).html();
}

 

 

Step 7: Add every control / option to a project and let the designers mull it out and test the results.  The designers are involved from Day 1 on any project I work with.

 

This may confuse some of you, but the idea is actually simple.  I put every combination of control onto interview screens, so we could see how the styling "worked".

The menus can be examined, and the entities, relating entities, and every type of input and output are on interview screens for UX examination and alteration.

 

The net result is that someone should go through every screen of the playground interview and approve of the styling.  If that is done, 90% of a production project based on (or including) this project for styling should style correctly.

 

Finally, after the styling is done, you can include this project as a base project for others.  You can delete all the screens, rules, and attributes, and keep the rest.  Feel free to ask questions or give suggestions in comments.

 

Again, I may update this blog post with a newer / better version in the future.

 

Enough talk... the sample style playground project is included.

Attached is an updated map.  See the OPA Architecture Map below for more information.

 

OPA Map 1.1.jpg

Paul Fowler

OPA on Docker

Posted by Paul Fowler Mar 19, 2018

I want to give a shout-out to Brandon Belcher who posted the following in OPA Forums:

 

OPA on Docker

 

From Brandon:

 

I was reviewing some work I started a while back containerizing OPA, and decided it was doing little good sitting in a dusty corner of my hard drive.  Hopefully someone finds these repos useful!

 

Docker OPA Hub

Weblogic Configuration

This runs two containers from official images. It installs OPA after the containers have started. It is useful for testing the OPA installer on Oracle supported deployments (Oracle JRE, Weblogic, and MySQL).

 

Tomcat Configuration

This is a completely unsupported configuration (OpenJDK, Tomcat, and MySQL) but is much faster since it builds streamlined images with OPA pre-installed. This can be useful for developers who want a local Hub install but with a smaller footprint than a VM.

 

Vagrant OPA Hub

And finally if you want a quick and easy OPA Hub but prefer virtual machines, here's a minimal-effort script for creating an OPA Hub on VirtualBox.  This, like the Tomcat Docker config, is not supported, but is quick and easy for dev use.

 

Feedback welcome.  Enjoy!

 

-Brandon

Disclaimers: The following is my opinion and I am open to revision.

 

Why not always use AI by itself?

I get asked this question on occasion.  Some bright person will ask, "Why not have AI learn the legacy system, learn the rules, and take it from there?"

 

  1. Primarily - AI decisions are not easily auditable. (There are ways, but it isn’t a sweet spot.)
  2. AI provides a probabilistic capability not always in conformance to law or intent.
    • AI, for instance, may unknowingly violate discrimination laws.
      • AI doesn’t care that federal law may protect from discrimination based on age, race/color, national origin, creed, sex, disability, and predisposing genetic characteristics.
      • AI doesn’t care that state law may protect from discrimination based on sexual orientation, military status, pregnancy-related condition, predisposing genetic characteristics, familial status, marital status, domestic violence victim status, prior arrest record, previous conviction record, and gender identity.
    • If base data shows a bias, AI may continue and may even promote that bias going forward.
  3. AI isn’t programmed, it is “trained”, making exact modifications more challenging.  (There are ways, but it isn’t the sweet spot of AI.)

 

Why not always use OPA by itself?

  1. OPA may not adapt quickly to real-time changes in environment that may impact a determination.  (There are ways, but it isn’t a sweet spot.)
  2. OPA doesn’t have in-built capabilities for real-time policy impact analysis across large, ever-changing datasets.
  3. OPA’s what-if capability is based on having the rules already defined and making changes to existing rules.  Sometimes the rules are not known up-front.

 

AI and OPA together?

  1. Together they can be a good fit for events and big data problems combined with policy.
  2. Both are trying to make determinations.
    • OPA has traditionally had weaknesses with real-time events and probabilistic inputs to its inferencing - which AI is good at.
    • Other rules engines, such as Drools, have historically been a better fit for "event driven" rules - AI can fill this gap for OPA.
    • OPA and chatbots don't easily integrate out of the box - AI and chatbots are made for each other.
    • OPA won't analyze photographs, voice, video, files, etc... - AI will analyze binary evidence itself.
    • OPA doesn't mine data for rules and correlations - AI can mine data and apps.
    • AI has issues with inferring results from policy and/or human intent - OPA is great at inference from policy and human intent.
    • AI by itself doesn't always put a "human in the loop" for assistance and is not easily understood by business users - OPA is easily understood by business users.
    • AI conversations are not always meaningful - OPA conversations are meaningful.
  3. There appears to be a symbiotic relationship, where each product’s weaknesses are offset by the other product’s strengths.

 

Neither product takes long to set up.  This isn’t 3+ months of infrastructure deployment if the cloud is utilized…  I assert that the hardest part is getting access to the big data and source policy, not integrating or using the tools.

 

Some currently viable OPA / AI Scenarios

Case 1 – Legacy Rule Discovery

Case 2 – Anomaly detection as part of an interview

Case 3 – Decisions Augmented by Real-Time Big Data Event Analysis

  •   Classifications and Predictions
  •   Feature Extraction
  •   Diagnosis and Troubleshooting

 

(I was going to add a 4th case on AI chatbot with OPA to provide human in the loop, but the architecture is evolving too quickly right now.  I may come back to that in the future.)

 

Case 1 – Legacy Rule Discovery

Assertions:

  • Optimal business determinations take input from at least two groups:
    • Policy / Legislative Analysts
    • Data Scientists
  • OPA / AI provides an analysis intersection for these two groups.
  • “Legacy” can mean migration from a historical system OR a legacy can be a running system where rules need optimization…
  • It is important to understand that data can create its own rules in a running system and these rules may need to be exploited by the business for gain.

Case 1 process:

 

Case 1 is predicated on having legacy data or a legacy system and some knowledge of the existing system’s inputs / outputs…

  • First pass uses OPA to implement rules as “expected”.
  • A process using AI follows to refine the results:
    • OPA what-if analysis is used to determine OPA result accuracy from existing data.
    • AI finds correlations and rules to the mismatched results from the existing data.
      • AI determines base data that impacted the outcomes
      • AI determines relationships of base data to outcomes
    • OPA rules are revised with new findings from AI as verified by the business.
  • Process is repeated until OPA rule accuracy is at an acceptable level.
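The refinement step above can be sketched in a few lines. This is a toy illustration only: the attribute names and the crude "lift" measure are my own inventions, and a real project would use proper ML tooling rather than this.

```python
# Toy sketch of the Case 1 refinement loop (hypothetical data and names).
# "expected" = outcome from the first-pass OPA rules; "actual" = outcome
# recorded in the legacy data.  We look for the base attribute that best
# explains the mismatches; the business then revises the OPA rules with it.

def mismatch_correlation(records):
    """Rank attributes by how strongly they co-occur with mismatched outcomes."""
    scores = {}
    mismatched = [r for r in records if r["expected"] != r["actual"]]
    if not mismatched:
        return []
    for attr in records[0]:
        if attr in ("expected", "actual"):
            continue
        # fraction of mismatched records where the attribute is set,
        # minus the same fraction over all records (a crude lift measure)
        in_mismatch = sum(1 for r in mismatched if r[attr]) / len(mismatched)
        overall = sum(1 for r in records if r[attr]) / len(records)
        scores[attr] = in_mismatch - overall
    return sorted(scores, key=scores.get, reverse=True)

records = [
    {"resident": True,  "has_warrant": False, "expected": True,  "actual": True},
    {"resident": False, "has_warrant": True,  "expected": True,  "actual": False},
    {"resident": True,  "has_warrant": False, "expected": False, "actual": False},
    {"resident": True,  "has_warrant": True,  "expected": True,  "actual": False},
]
print(mismatch_correlation(records))  # ['has_warrant', 'resident']
```

Here "has_warrant" surfaces first, suggesting to the analysts which base data to re-examine in the OPA rules.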

 

Note: If the legacy system is available, AI can be utilized to run the legacy system repeatedly and more accurately assist with base data impact and correlation.

 

Final note on Case 1.  In my experience, usually the issue needing both OPA / AI arises when a legacy system is being decommissioned by a different organization than the organization now responsible for the business application.  Replacement of 30+ year old legacy systems may also rise to this level of need.

 

Case 2 – Anomaly detection as part of an interview

Assertions:

  • Optimal business determinations will detect anomalies early.
  • AI provides a near real-time anomaly determination based on examining the data.
  • Dealing with anomalies is a policy question and therefore a good fit for OPA.

 

The OPA / AI intersection is in providing score data from analytics to OPA.

 

Case 2 is predicated on having OPA persist attributes to a big data store utilized by AI.

  1. OPA provides new attributes to AI system that looks for anomalies.
  2. If AI finds anomalies, such as irregular payments, irregular household compositions, etc., OPA is notified via the setting of a base attribute from the AI system.
  3. OPA can either notify the interviewee for verification, notify the interviewing system for follow-up, or simply record this result in an audit field for later analysis.  The rules on what to do with anomalies are OPA human-driven rules.
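As a minimal sketch of step 2 (the attribute names are made up, and a simple z-score test stands in for whatever the real AI system would do):

```python
# Toy sketch of Case 2: OPA persists "the payment amount" to a shared store;
# an AI job flags anomalous values; the flag is written back as a base
# attribute such as "the payment is anomalous" for OPA rules to act on.
from statistics import mean, stdev

def flag_anomalies(history, new_value, threshold=3.0):
    """True when new_value is more than `threshold` standard deviations
    from the historical mean (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

history = [100, 102, 98, 101, 99, 103, 97, 100]
print(flag_anomalies(history, 101))   # False: a regular payment
print(flag_anomalies(history, 500))   # True: irregular payment, OPA is notified
```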

 

Case 3 – Decisions Augmented by Real-Time Big Data Event Analysis

Assertions:

  • AI can classify / predict probable outcomes when OPA collects incomplete data and feed that back to OPA.  This includes predicting additional programs the client may be interested in.
  • AI can extract features to enhance interviews. For instance AI can evaluate images and large text provided in an interview.
  • AI can diagnose data provided in an interview. AI can help troubleshoot. (Possibly with chatbots.)

 

This is a simpler process than it looks.

  1. AI is put into a real-time feedback loop with OPA.
  2. AI draws from a big-data store to make determinations about current OPA data.
  3. After OPA is done, the OPA data is added to the big data store.
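The classification / prediction assertion can be sketched with a toy majority vote over similar completed cases. The attribute names and the store contents are hypothetical; a real system would use an actual classifier over the big data store.

```python
# Toy sketch of Case 3: when OPA has incomplete data, an AI step predicts
# the probable value of a missing attribute from similar completed
# interviews in the big-data store and feeds it back to OPA as a default.

def predict_missing(store, partial, target):
    """Majority vote of `target` over stored cases matching the known fields."""
    known = {k: v for k, v in partial.items() if v is not None and k != target}
    matches = [case[target] for case in store
               if all(case.get(k) == v for k, v in known.items())]
    if not matches:
        return None
    return max(set(matches), key=matches.count)

store = [  # completed interviews previously written back to the store
    {"state": "NY", "employed": True,  "interested_in_program_b": True},
    {"state": "NY", "employed": True,  "interested_in_program_b": True},
    {"state": "NY", "employed": False, "interested_in_program_b": False},
]
partial = {"state": "NY", "employed": True, "interested_in_program_b": None}
print(predict_missing(store, partial, "interested_in_program_b"))  # True
```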

 

What are the needed OPA / AI integrations for the above cases?

Currently, the best / strongest methods I have found are machine learning tools that read database data and write their results back to the database.

It is then OPA's responsibility to provide input to AI through a database, or to query the database for the results.

This type of integration isn't very hard, but it also isn't "business user friendly".  The integrations need to be more obvious during business and data analysis.
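The hand-off itself is small. Here is a sketch using an in-memory SQLite table with hypothetical table / column names; in practice the shared store would be the big data platform, and the hard-coded score would come from a real model.

```python
# Minimal sketch of the database-mediated OPA / AI hand-off described above.
# OPA writes collected attributes to a shared table; the ML job reads them
# and writes a score back; OPA then queries the score as a base attribute.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE opa_attributes (
    case_id TEXT PRIMARY KEY, payload TEXT, ai_score REAL)""")

# 1. OPA (via a connector) persists interview attributes.
conn.execute("INSERT INTO opa_attributes (case_id, payload) VALUES (?, ?)",
             ("case-42", '{"income": 18000, "age": 27}'))

# 2. The AI job reads unscored attributes and writes its score back.
def ai_job(db):
    rows = db.execute("SELECT case_id, payload FROM opa_attributes "
                      "WHERE ai_score IS NULL").fetchall()
    for case_id, payload in rows:
        score = 0.87  # stand-in for a real model's prediction
        db.execute("UPDATE opa_attributes SET ai_score = ? WHERE case_id = ?",
                   (score, case_id))

ai_job(conn)

# 3. OPA queries the result like any other base attribute.
row = conn.execute("SELECT ai_score FROM opa_attributes WHERE case_id = ?",
                   ("case-42",)).fetchone()
print(row[0])  # 0.87
```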

 

One of the easiest integrations of AI and OPA is in big data and statistics / probabilities.

I put a related OPA probabilities blog post here:  Logic Puzzle - OPA Determination with Probability

The statistics in the probability blog post example were typed into Excel.  But, a much better source for the statistics would be an AI engine.  That would allow real-time statistics to be used with OPA policy to adjust business outcomes.

 

In the future, we will know that OPA / AI has been well integrated when I can zip up an OPA file that demonstrates both together.  At the moment, I can't zip up the AI material.  (Another reason AI is not prime without OPA.)

 

I hope for more research into the following:

  • Stronger Service Cloud / Siebel integration with AI.
  • Generic OPA connectors to databases
    • Assertion: AI tools are built to retrieve data from and provide data to databases.
    • Assertion: Shared data is currently the best mechanism to interact with AI.
    • Examples (my current preferences):
      • A solution such as Mantis should be utilized or built into OPA by Oracle
      • OPA should have a solution to dynamically create a data store for attributes tagged by modeler.
  • AI integration during policy modeling by the analysts.

 

My final hope is that the above blog post creates some discussion / thinking in the community.  I assert that AI and OPA are an excellent fit to combine machine and human determinations.

 

Comment if you have additional thoughts.

A probability problem that needs frequent solving comes up in policy:

 

Do we immediately send police to a residence of suspected activity based on probability the situation will escalate?  Will a person likely skip bail? How likely is a benefit to be applicable after only a preliminary screening?

 

The challenge is that sometimes probability is involved.  We can't immediately send police to every event all the time.  We must sometimes be selective, and our policy needs to allow for probability in some parts of our determinations.

 

Puzzle #5

 

The local department of criminal justice has provided the following statistics in determining a person’s probability of missing court (these statistics are made-up).  The overall probability of missing a court appointment is 45%.  Of those who miss court, 40% are in the local community, 80% had an outstanding warrant, and 10% had a job.

 

Allowing for uncertainty and collecting 1) whether a person is a member of the community, 2) has an outstanding warrant, and/or 3) has a job, write an OPA policy that implements the following:

As an FYI, my collection screen looks like this:

 

 

The solution to this problem is attached with a more "generic" structure for solving problems of "likelihood".

 

The solution is simple in that a one page word document sets up the math.  A spreadsheet contains the list of question attributes used in screens to refine the final probability.  Questions which refine the probability can be added into the spreadsheet.  You can have 2-3 questions such as in this puzzle, or you can have several hundred.
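The math in that one-page document is Bayes' theorem in odds form. As a sketch of the same update in Python: the puzzle only gives the likelihoods for people who miss court, so the likelihoods for people who appear are my own made-up assumptions, labeled as such below.

```python
# Naive-Bayes odds update for the bail puzzle.  Prior P(miss) = 0.45.
# Likelihoods among those who MISS court come from the post (40% local,
# 80% warrant, 10% job); the likelihoods among those who APPEAR are
# ASSUMED here purely for illustration.
p_miss = 0.45
likelihoods = {            # (P(feature | miss), P(feature | appear))
    "local":   (0.40, 0.70),   # second value in each pair is an assumption
    "warrant": (0.80, 0.20),
    "job":     (0.10, 0.60),
}

def p_miss_given(observed):
    """Posterior probability of missing court given observed features."""
    odds = p_miss / (1 - p_miss)
    for feature, present in observed.items():
        p_m, p_a = likelihoods[feature]
        if present:
            odds *= p_m / p_a
        else:
            odds *= (1 - p_m) / (1 - p_a)
    return odds / (1 + odds)

print(round(p_miss_given({"local": False, "warrant": True, "job": False}), 3))
# 0.936: a non-local, unemployed person with a warrant is very likely to miss
```

Each question attribute multiplies the odds by a likelihood ratio, which is exactly why the spreadsheet can hold 2-3 questions or several hundred.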

 

A theoretical introduction to the solution can be found here: https://www.askiitians.com/iit-jee-algebra/probability/bayes-theorem.aspx

There are also many good YouTube videos on Bayes' theorem: https://www.google.com/search?q=youtube+bayes+theorem&oq=youtube+bayes+theorem&aqs=chrome..69i57j69i64.3865j0j7&sourceid…

 

I will have a future blog post where I start to discuss intersection with AI and specifically with Machine Learning.

Paul Fowler

OPA Architecture Map 1.0

Posted by Paul Fowler Jan 10, 2018

Since this is an architecture blog, I should post some architecture assistance, huh?

 

I attached a PDF with embedded links.  This OPA architecture map is meant to only be a starting point for architects.

 

Every element in the attached documents has URL links to documentation so that the elements can be studied.

 

I figure an hour with this map, and most experienced IT architects will have the logical OPA technical architecture concepts understood.

 

Disclaimers:

  • I don't work for Oracle and this is not authoritative.
  • I recognize the architecture view is not a standard architectural view by any stretch.
  • Components and Interactions shown are not necessarily everything available.
  • This map was created in a morning, so feel free to add comments to this blog about mistakes I made.

 

OPA Map 1.0.jpg

Today's post is a bit unique and should leave some OPA developers at Oracle scratching their heads and asking "why?"  My answer - "why not?"

 

Have you ever wanted to select questions at random to ask during an OPA interview, in order to test knowledge?  Can you think of any other reason to have OPA generate Pseudo-Random numbers?  I did - I quickly wrote a game using OPA.

 

Sure, you could use RuleScript, an API, or JavaScript to get random numbers, but I chose to implement the common Linear Congruential Generator for random numbers using OPA itself.  I use the same routine and parameters as the Java libraries, POSIX, etc...  So, I think this is a pretty good random number generator.
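For the curious, the core recurrence is tiny. Here is the same generator sketched in Python: the multiplier 0x5DEECE66D, increment 11, and modulus 2^48 are the java.util.Random / POSIX rand48 parameters, while the seed value and the modulo range mapping are just example choices (the modulo introduces a small bias, which is fine for a game).

```python
# Linear congruential generator with the java.util.Random / rand48
# parameters: state' = (state * 0x5DEECE66D + 0xB) mod 2^48.
MULT, INC, MASK = 0x5DEECE66D, 0xB, (1 << 48) - 1

def lcg(seed, count, low, high):
    """Return `count` pseudo-random integers in [low, high]."""
    state = (seed ^ MULT) & MASK        # java.util.Random scrambles the seed
    values = []
    for _ in range(count):
        state = (state * MULT + INC) & MASK
        # take the high bits (the low bits of an LCG are weak), map to range
        values.append(low + (state >> 16) % (high - low + 1))
    return values

values = lcg(seed=20180110, count=100, low=1, high=10000)
print(len(values), min(values) >= 1, max(values) <= 10000)  # 100 True True
```

The same seed always reproduces the same sequence, which is handy for regression-testing the OPA rules against this reference.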

 

My "Random Numbers" project is attached, along with the classic game of "Bagels" from the 1978 issue of ATARI magazine.   I implemented the game using OPA and the random number generator so that example usage of the random number generator can be explored.  Plus, it's a game.

 

The random numbers project from this blog is easily included in other projects as an "inclusion". 

 

There is a "Random Number Initialization.docx" rule file that has the following parameter settings for the random number generator.  These example settings generate 100 random values between 1 and 10,000 inclusive, putting the values into "the random number value" in the random number entity.  You are expected to change these settings for your project.  The other rule file contains my random number generator logic and shouldn't be touched unless you know what you are doing. 

 

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Random number settings.  Separate from the rules, in case I want to change the random number engine…

 

These next 4 values can be overridden to provide more / less random numbers and range… the rndm seed defaults to a value from the date / time.

the seed = the rndm time seed

the rndm number count = 100

the rndm range low = 1

the rndm range high = 10000

 

if the following statement is "the random values are unique", then the rndm number count must be less than the random number range.  This generates non-repeating numbers.

 

This next statement must be either:

  • the random values are unique
  • the random values are not unique

the random values are unique

Disclaimer: This next puzzle wasn't put up in the Oracle Forums, because it doesn't really show off OPA.  Indeed, the puzzle takes away strong OPA capabilities and then tries to see if people can still use OPA to solve the problem.   On the bright side, this is still a puzzle for members of the OPA community that like to solve puzzles in their free time. (Yes, some of us enjoy keeping up our skills that way.)

 

The broken keyboard puzzle. I started to invent a story, but the story was too juvenile.  So instead, here is the puzzle:

 

attr d = (attr a + attr b + 5) * attr c

 

That is a very simple rule, as long as you write it with numbers and symbols such as 5, =, (, +, *, etc…

 

To solve this puzzle, write the equivalent of that very simple rule in OPA without using any number or symbol characters such as 0-9,=,+,*,(,),[,],-,”,’, etc…  You can’t use “5”, can’t use “=”, can’t use “(“ or “)”, can’t use “+” or “*”… and so on…

 

Inputs attr a, attr b, and attr c are guaranteed to all be integers greater than or equal to 1 and less than or equal to 100.

 

Trust me, that last guarantee simplifies the puzzle and adds a few more solution options…  However, this still isn’t an obvious puzzle to solve. 

 

A solution project is attached - it has some OPA tricks in it.

This puzzle is an intermediate to advanced puzzle.  It isn't long, but it takes a lot of thought.  Stephen French was the first to solve it in the Oracle Community.  Kudos.

 

 

Puzzle solving requirement:  OPA must solve the puzzle immediately when debug is pressed.  When Debug is hit, using rules, OPA must decide the correct solution.  In other words, OPA must be fed conditions to come up with the correct solution, OPA cannot be directly fed a single correct answer, except of course feeding OPA data and rules based on the puzzle itself.  [We know the names of the three contestants, so it is o.k. to provide OPA those names.]

 

How would you write rules to have OPA solve the following puzzle immediately when Debug is pressed:

 

Isaac and Albert were excitedly describing the result of the Third Annual International Science Fair Extravaganza in Sweden. There were three contestants, Louis, Rene, and Johannes. Isaac reported that Louis won the fair, while Rene came in second. Albert, on the other hand, reported that Johannes won the fair, while Louis came in second.

 

In fact, neither Isaac nor Albert had given a correct report of the results of the science fair. Each of them had given one correct statement and one false statement. What was the actual placing of the three contestants?

 

Care to try using rules in OPA to determine the correct placing of the three contestants?

 

A project with the solution is attached.
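If you want to check your OPA rules against something, here is the same puzzle brute-forced in Python (not OPA, obviously): enumerate the six possible placings and keep the one where each reporter made exactly one correct statement.

```python
# Brute-force check of the science fair puzzle: Isaac said (Louis 1st,
# Rene 2nd); Albert said (Johannes 1st, Louis 2nd); each reporter made
# exactly one true and one false statement.
from itertools import permutations

def solve():
    for first, second, third in permutations(["Louis", "Rene", "Johannes"]):
        isaac = [first == "Louis", second == "Rene"]
        albert = [first == "Johannes", second == "Louis"]
        if sum(isaac) == 1 and sum(albert) == 1:
            return first, second, third

print(solve())  # ('Johannes', 'Rene', 'Louis')
```

The OPA solution does the equivalent enumeration declaratively: the rules express the "exactly one true statement per reporter" constraints, and the debugger infers the only consistent placing.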

 

The OPA debug screen should look similar to this for a solution:

Guidelines have been updated 12/18/2017 based on a lot of feedback.

 

All the disclaimers still apply.  These are sample guidelines used by a real organization.  In most instances, they are meant to serve as a template to be modified.  See the attached document.

 

Appendix F: Checklist for Compliance

The following checklist is used to assess compliance with this guideline.  The only requirement for compliance is that all the mandatory requirements be met.  Scoring is utilized as a quality assessment to measure the maturity of an OPA implementation.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check (Analysis score in parentheses):

  • OPA is being used for rule lifecycle management (T / F)
  • Overarching policy outcomes are defined in OPA (T / F)
  • Substantive OPA policy rules are reviewed by a lawyer and/or agency policy analyst (T / F)
  • OPA is being used to assist in mining rules from source policy and/or legislation (T / F)
  • The OPA project has separate roles for lawyer / policy analyst, OPA policy modeler, OPA interview designer, and OPA technical integrator / developer (0 / 1 / 2)
  • OPA is being used for rule discovery and rule verification via analysis of existing data (0 / 1 / 2)
  • OPA is being used to discover attributes needed by an application in determining outcomes (0 / 1 / 2)
  • OPA is being used for impact analysis of rules (0 / 1 / 2)
  • OPA substantive rules are primarily used to determine outcomes defined by agency policy and/or legislation (T / F)
  • OPA substantive rules need visibility by the business (T / F)
  • OPA provides decision reports (T / F)
  • OPA rules provide "temporal reasoning" (0 / 1 / 2)
  • Substantive, procedural, and visibility rules are not combined (T / F)
  • Traceability is provided from all substantive rules to source material (T / F)
  • Substantive rules are in natural language (T / F)
  • Rules are written to be read by non-OPA analysts (0 / 1 / 2)
  • Production rules documents only contain operational rules (T / F)
  • All OPA rulesets have a design document (T / F)
  • OPA rules within a document are "on topic" (0 / 1 / 2)
  • OPA only receives data originating from the rule consumer (0 / 1 / 2)
  • OPA determines outcomes for "I don't know" inferences (0 / 1 / 2)
  • All Microsoft Word rule documents have a TOC (Table of Contents) (T / F)
  • Boolean attributes are never conclusions in Word tables (T / F)
  • Rules do not go deeper than level 3 (0 / 1 / 2)
  • Excel is used when source material is in a table, to implement rate tables, or when there are multiple conclusions from the same conditions (0 / 1 / 2)
  • All attributes are properly parsable and parsed by OPA (T / F)
  • Production projects can be debugged via the OPA debugger (T / F)
  • Projects redefine "the current date" (T / F)
  • All substantive policy conclusions have unit test cases (T / F)
  • Projects have regression test suites (T / F)
  • Projects plan OPA upgrades once per quarter (T / F)
  • List items are turned into boolean attributes before using them as conditions (0 / 1 / 2)
  • An ability to regression test with production data has been implemented (0 / 1 / 2)
  • An OPA quality checklist is utilized (T / F)
  • Public names are created for integration / mapping with other applications (0 / 1 / 2)
  • Public names follow a naming guideline (T / F)
  • Entities' identifying attributes are provided (T / F)
  • Entities and relationships are only created when the rules require them (0 / 1 / 2)
  • Rule text follows Oracle guidelines for entities, relationships, and attributes (0 / 1 / 2)
  • Design and rule documents contain descriptions of relevant entities and relationships (0 / 1 / 2)
  • Data saved from OPA can be re-loaded into OPA (0 / 1 / 2)
  • Only the initial rules to determine an outcome avoid effective dates via temporal logic (0 / 1 / 2)
  • Rate tables are temporal in Excel (0 / 1 / 2)
  • Rules are not deleted after they are used in production (0 / 1 / 2)
  • Interviews are created with an accessibility warning level of WCAG 2.0 AA or better (T / F)
  • Interviews have goals that support relevance of collected attributes (T / F)
  • All policy determinations are available as web services (T / F)
  • OPA "relevancy" is used for all screens and attribute collection (0 / 1 / 2)
  • Policy determination rules are developed prior to developing interview screens (0 / 1 / 2)
  • All entities, personal attributes, headings, and labels have name substitution (0 / 1 / 2)
  • Attribute text is not changed on screens (0 / 1 / 2)

The little feedback I received indicates that people did not realize there are Word docs, and sometimes projects, attached to these posts...

 

For instance, here is the content from the Shortest Interview Guidelines.  Comments are especially desired to improve Shortest Interview Guideline 7.

 

I would post the content from the quality guidelines, but it is really too long...

 

Intermediate Oracle Policy Automation – OPA Shortest Interview Guidelines

 

As an aid to quality, what I call the “OPA Shortest Interview Guidelines” help ensure that the shortest possible interview still successfully obtains the primary determination.

 

Intent

  • Minimize the number of interview questions to get to a determination.
  • Minimize content that must be read to get to a determination.

Problem

You need to minimize the number of questions asked of a client. The result must be consistent across channels.

Interview questions have different levels of relevancy depending on the client.

Discussion

OPA should handle the complexity of determining question relevancy, and this is a first condition toward developing the shortest interview.

OPA's definition of relevancy is as follows:

1. Rule 1: An attribute's value is relevant if changing it could cause the conclusion of the rule to change.

2. Rule 2: Where a set of values are not relevant individually (through Rule 1) but are equally responsible for the value of the conclusion, then all values in the set are considered relevant.

3. Rule 3: All values that could be relevant if unknown values became known are considered relevant.
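Rule 1 can be illustrated with a toy sketch. This is not how OPA computes relevancy internally; the rule and attribute names are hypothetical, and only boolean attributes are handled.

```python
# Toy illustration of relevancy Rule 1: an attribute is relevant if
# changing its value, holding the others fixed, could change the conclusion.

def relevant(rule, facts, attr):
    """True if flipping boolean `attr` changes the rule's conclusion."""
    flipped = dict(facts, **{attr: not facts[attr]})
    return rule(facts) != rule(flipped)

# "the person is eligible if the person is a resident and the person is over 65"
rule = lambda f: f["resident"] and f["over_65"]

facts = {"resident": False, "over_65": True}
print(relevant(rule, facts, "resident"))  # True: flipping it changes the result
print(relevant(rule, facts, "over_65"))   # False: moot while resident is False
```

In an interview, OPA would therefore not need to ask the "over 65" question of a known non-resident, which is exactly the behavior the shortest-interview guidelines exploit.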

 

Although elimination of irrelevant interview questions is critical to brevity, relevancy is not always true or false. For example, the use of limits in conjunction with greater than or less than comparisons can cause OPA to consider that an irrelevant attribute is relevant.


Some questions are often answered the same by all populations. If 95% of all interviewees are in-state, then asking for state residence up front may be less expeditious than asking about income.


Another technique for abbreviation is to combine questions to shorten an interview. Instead of asking three questions, an interview might combine them: "Are you over the age of 65, disabled, or blind?".


The shortest interview requires that Oracle’s rule principles be followed:

1. Each conclusion must be stated only once.

2. Each rule must have a comprehensive statement of conditions.

3. Each component of the rule must be clearly identifiable.

4. Each condition must itself be logically complete to determine the value of the condition.

5. Every rule must be knowable.

6. The order in which information is presented should not change the outcome of the rules.

 

Because questions can have different impacts based on the user, the following guidelines have emerged as an aid to arrange base attributes and questions for the shortest interview.

General Guideline

OPA interviews should follow Oracle’s whitepaper "Oracle Policy Automation Best Practice Guide for Policy Modelers", which provides guidelines on interview clarity which shortens the time to get to a primary determination.


The next guidelines augment the general guideline. They are specific to achieving the shortest average interview to get to a primary determination. These next guidelines may not be appropriate for other goals.


Shortest Interview Guideline 1

There should be a single top-level interview goal for the primary determination. No other goals should be defined in the interview. Having more than one top-level goal may cause OPA relevancy to ask additional questions that are not relevant to a primary determination.

Negative Example:

Suppose "the guideline 1 first goal is met" provides the primary determination…

In the above example, "the guideline 1 third condition is met" is an additional input asked that is not relevant to the primary determination. It will be asked and lengthen the interview… In this simple case the interview has been lengthened by 50%.

  

Shortest Interview Guideline 2

Until the primary determination is made, all attributes collected in the interview should be conditions relevant to the single top-level interview goal or should provide many default values to conditions for the single top-level goal.

- An interview asking questions that do not determine the primary goal will probably not create the shortest interview.

Negative Example:

Suppose "the guideline 2 goal is met" provides the primary determination…

If the collection screen is as follows, then Name and Address are not required. Interviewees may not want to give this information until they know whether they are "eligible" based on some determination.

 

Shortest Interview Guideline 3

Attributes where base data is not going to be kept or otherwise queried should be combined.

Turning 2 or 3 questions into 1 shorter question usually shortens an interview.

Example:

In the example rule above, notice the conclusion can be inferred by asking only one question instead of three questions.

 

Shortest Interview Guideline 4

Screens, booleans, and containers should show if "control collects relevant information".

This allows OPA to determine relevancy. We try to keep visibility rules (extra rule writing) to a minimum. If OPA can determine what to ask on its own, it saves both work and shortens interviews. [Note, as of November 2017, If two attributes can be linked by a shortcut rule, then they should generally not be collected on the same screen.]

Example:

For each question, set "Show if…"

Then, the questions will only show when needed, shortening the interview as such… In this case, answering yes to the first question removes the need for the second question.

Shortest Interview Guideline 5

Create any possible shortcut rules.

By definition, these shorten interviews. See "Capture implicit logic in rules" in the OPA help.

Example:

Note: Shortcut rules can be replaced with "DefaultWithUnknown()" rules in the latest versions of OPA. The primary reason for using a shortcut rule would appear to be to maintain natural language syntax. The verdict is still out whether shortcut rules, interview default values, or default functions are better; due to the warning on DefaultWithUnknown(): This function should be used with caution, since additional data can cause decisions to change. Default functions may provide more default consistency across channels and earlier determinations.

Shortest Interview Guideline 6

Every question should default to the most likely value or provide hint text.

- A question already answered provides for a shorter interview.

No example required.

Note: When creating defaults, several options are available in OPA, each with trade-offs. As noted under Shortest Interview Guideline 5, the verdict is still out on whether shortcut rules, interview default values, or default functions are better, given the warning on DefaultWithUnknown(). Also note that a dynamic default can be updated live by evaluating a rule that uses data on the same screen.

 

Shortest Interview Guideline 7

The base attributes most responsible for the value of the top-level interview goal should be collected first.

The sooner the top-level goal is known, the fewer questions need to be asked, so the shorter the interview. This includes asking questions that help default future answers. It is sometimes necessary to ask a question whose sole purpose is to provide defaults for many other base attributes.

 

Questions most responsible for the value of the top-level goal can generally be identified as follows:

1.) The base attribute is the "fewest" levels deep.

2.) Other base attributes' default values or visibility depend upon the question.

3.) If the base attribute conjunction is "OR", the answer distribution is expected to be most commonly answered in the positive.

4.) If the base attribute conjunction is "AND", the answer distribution is expected to be most commonly answered in the negative.

5.) The question is mandatory (cannot be left unknown or uncertain).

6.) The question is a boolean.

 

Shortest Interview Guideline 8

Screens / questions of interest to the business but not the determination should be put after the questions for the primary determination.

As a rule, the business may have further information to collect depending on the determination made. For shortest interviews, this information is best put in screens after the determination. Contact information such as phone numbers and mailing addresses are examples that are asked last to shorten an interview. These attributes are generally free-form text.

 

Example:

Register new users and collect their billing / mailing addresses after they have been vetted by OPA.

 

Exception:

There is an obvious exception to this guideline. Collecting entity identifiers and sex to aid in asking unambiguous questions may shorten an interview per advice from the General Guideline.

 

Shortest Interview Guideline 9

Projects should pay additional attention to the time spent per screen and interview duration charts available on the OPA hub after the August 2017 release of OPA. Use this data periodically to revise attribute collection.

 

A reasonable approach may be to monitor actual interviews, analyse results and keep trying changes that might lower the averages (perhaps within a time/cost limit). This process could be accelerated by using past data as tests against new interview tweaks. However, that approach won't guarantee an improvement unless the next set of data happens to be identical to the analysed set of data. In short, the best that can be reasonably achieved is to test certain assumptions and measure actual experience. As Matt Sevin, from Oracle says: “Past performance does not guarantee future performance, nor do the assumptions that seem to improve one interview necessarily imply similar results in another policy model.”

 

Example:

 

 

Shortest Interview Guideline 10

Maximize use of "hide" for all controls.

 

The less text that a user must read, the shorter the interview.

 

Negative examples:

Check that all parent controls do not have these settings:

Shortest Interview Guideline 11

Restrict answers (avoid non-granular answers). Rearrange attributes so that earlier attributes can restrict later attributes.

 

Reduce the granularity and abundance of answers and specifically avoid free-form answers. In general, have shorter interviews by asking fewer questions and minimizing the quantity of possible answers. While this may appear obvious, many business users forget that while interesting, detail is not always necessary. This is a primary reason why booleans are preferred. In many cases, numeric attributes can be converted to boolean by gaining knowledge from prior attributes.

 

Examples:

Assume a determination is dependent upon whether a client lives in NY. Instead of asking for State of residence and then checking if the State is NY, ask only whether the client resides in NY (true/false).

Assume financial aid is available for students over the age of 25 who make less than 20,000 a year. Don't ask for the student's specific salary or age; ask if the student is over 25 (true/false), then ask if the student makes less than 20,000 a year (true/false).

 

Shortest Interview Guideline 12

Provide dynamic default number of entity instances.

 

Use initial attributes to dynamically determine the number of required entity instances. Entity instances require more time and thought by the end user. Avoid having the end user specifically think about creation / deletion of entities.

 

Shortest Interview Checklist

Use the following checklist for guideline compliance. Scoring is utilized as a quality assessment to measure maturity of an OPA implementation for shortest interview.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check (Analysis score in parentheses):

  • OPA interviews follow Oracle's whitepaper "Oracle Policy Automation Best Practice Guide for Policy Modelers" (0 / 1 / 2)
  • There is a single top-level interview goal for the primary determination, and no other goals are defined in the interview (0 / 1 / 2)
  • Until the primary determination is made, all attributes collected in the interview are conditions relevant to the single top-level interview goal or provide many default values to conditions for that goal (0 / 1 / 2)
  • Attributes where base data is not going to be kept or otherwise queried are combined (0 / 1 / 2)
  • Screens, booleans, and containers show if "control collects relevant information" (0 / 1 / 2)
  • Any possible shortcut rules have been created (0 / 1 / 2)
  • Every question defaults to the most likely value or provides hint text (0 / 1 / 2)
  • The base attributes most responsible for the value of the top-level interview goal are collected first (0 / 1 / 2)
  • Screens / questions of interest to the business but not the determination are put after the questions for the primary determination (0 / 1 / 2)
  • Projects pay additional attention to the time-spent-per-screen and interview-duration charts available on the OPA Hub (August 2017 release of OPA or later), and this data is used periodically to revise attribute collection (0 / 1 / 2)
  • Use of "hide" is maximized for all controls (0 / 1 / 2)
  • Answers are restricted to small sets and rearranged so that earlier attributes can restrict later attributes (0 / 1 / 2)
  • The default number of entity instances is provided dynamically (0 / 1 / 2)

 

 


Please see all the "disclaimers" from the prior post on Quality Guidelines.  I hate repeating myself.

 

This set of shortest interview guidelines is just a draft start.  It requires community feedback.  It has been created because I noticed a pattern where clients ask how to use OPA to shorten interviews (especially screening for eligibility.) 

 

It needs work, but is a start of guidance specific to that problem.  See the attached word document.  Provide constructive critique.  Take it or leave it. 

 

It would be nice to post a better document in another month or two that contained positive community feedback.

 
