
Disclaimer: This next puzzle wasn't put up in the Oracle Forums, because it doesn't really show off OPA.  Indeed, the puzzle takes away strong OPA capabilities and then tries to see if people can still use OPA to solve the problem.  On the bright side, this is still a puzzle for members of the OPA community who like to solve puzzles in their free time. (Yes, some of us enjoy keeping up our skills that way.)

 

The broken keyboard puzzle. I started to invent a story, but the story was too juvenile.  So instead, here is the puzzle:

 

attr d = (attr a + attr b + 5) * attr c

 

That is a very simple rule, as long as you write it with numbers and symbols such as 5, =, (, +, *, etc…

 

To solve this puzzle, write the equivalent of that very simple rule in OPA without using any number or symbol characters such as 0-9, =, +, *, (, ), [, ], -, ", ', etc…  You can't use "5", can't use "=", can't use "(" or ")", can't use "+" or "*"… and so on…

 

Inputs attr a, attr b, and attr c are guaranteed to all be integers greater than or equal to 1 and less than or equal to 100.

 

Trust me, that last guarantee simplifies the puzzle and adds a few more solution options…  However, this still isn’t an obvious puzzle to solve. 
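The attached OPA project has its own tricks, but one core reformulation can be sketched outside OPA. The following is a hypothetical Python illustration, not the attached solution: replace the literal 5 with the size of a five-member collection, and replace multiplication with repeated addition, which is valid only because c is guaranteed to be a whole number of at least 1.

```python
# Hypothetical sketch of the reformulation trick, not the attached OPA solution.

def d_with_symbols(a, b, c):
    """The original rule, written the easy way."""
    return (a + b + 5) * c

def d_without_literals(a, b, c):
    """The same value, with 5 and * expressed as counting and repeated addition."""
    five = len(["one", "two", "three", "four", "five"])  # a count stands in for the literal 5
    total = sum([a, b, five])                            # a sum stands in for +
    return sum(total for _ in range(c))                  # repeated addition stands in for *

# The guarantee 1 <= a, b, c <= 100 matters: repeated addition only works
# because c is a positive whole number.
for a in range(1, 101, 7):
    for b in range(1, 101, 11):
        for c in range(1, 101, 13):
            assert d_with_symbols(a, b, c) == d_without_literals(a, b, c)
```

Of course, Python itself still needs symbols such as parentheses; the sketch only illustrates the arithmetic reformulation. An OPA solution might lean on word-named constructs such as counts and sums over entity instances to achieve the same effect.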

 

A solution project is attached - it has some OPA tricks in it.

This puzzle is an intermediate to advanced puzzle.  It isn't long, but it takes a lot of thought.  Stephen French was the first to solve it in the Oracle Community.  Kudos.

 

 

Puzzle solving requirement:  OPA must solve the puzzle immediately when Debug is pressed.  Using rules alone, OPA must determine the correct solution.  In other words, OPA must be fed conditions from which it infers the correct solution; OPA cannot be directly fed the single correct answer, except of course feeding OPA data and rules based on the puzzle itself.  [We know the names of the three contestants, so it is o.k. to provide OPA those names.]

 

How would you write rules to have OPA solve the following puzzle immediately when Debug is pressed:

 

Isaac and Albert were excitedly describing the result of the Third Annual International Science Fair Extravaganza in Sweden. There were three contestants, Louis, Rene, and Johannes. Isaac reported that Louis won the fair, while Rene came in second. Albert, on the other hand, reported that Johannes won the fair, while Louis came in second.

 

In fact, neither Isaac nor Albert had given a correct report of the results of the science fair. Each of them had given one correct statement and one false statement. What was the actual placing of the three contestants?
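For readers who want to check the logic before opening the attached project, the puzzle can be brute-forced outside OPA. This is a Python sketch of the conditions, not the attached OPA rules: exactly one of each reporter's two statements must be true.

```python
# Brute-force check of the science fair puzzle (not the attached OPA project).
from itertools import permutations

contestants = ["Louis", "Rene", "Johannes"]
solutions = []
for first, second, third in permutations(contestants):
    isaac = [first == "Louis", second == "Rene"]       # Isaac: Louis won, Rene second
    albert = [first == "Johannes", second == "Louis"]  # Albert: Johannes won, Louis second
    # Each reporter made exactly one correct and one false statement.
    if sum(isaac) == 1 and sum(albert) == 1:
        solutions.append((first, second, third))

print(solutions)  # [('Johannes', 'Rene', 'Louis')] -- the only consistent placing
```

The search confirms a unique answer: Johannes first, Rene second, Louis third. The OPA rules must encode the same "exactly one correct statement" condition.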

 

Care to try using rules in OPA to determine the correct placing of the three contestants?

 

A project with the solution is attached.

 

The OPA debug screen should look similar to this for a solution:

Guidelines have been updated 12/18/2017 based on a lot of feedback.

 

All the disclaimers still apply.  These are sample guidelines used by a real organization.  In most instances, these are meant to serve as a template to be modified.  See the attached document.

 

Appendix F: Checklist for Compliance

The following checklist is used for compliance with this guideline.  The only requirement for compliance is that all the mandatory requirements be met.  Scoring is utilized as a quality assessment to measure the maturity of an OPA implementation.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check

Analysis

OPA is being used for rule lifecycle management

T / F

Overarching policy outcomes are defined in OPA

T / F

Substantive OPA Policy Rules are reviewed by a lawyer and/or agency policy analyst

T / F

OPA is being used to assist in mining rules from source policy and/or legislation

T / F

The OPA project has separate roles for lawyer / policy analyst, OPA policy modeler, OPA interview designer, and OPA technical integrator / developer

0 / 1 / 2

OPA is being used for rule discovery and rule verification via analysis of existing data

0 / 1 / 2

OPA is being used to discover attributes needed by an application in determining outcomes

0 / 1 / 2

OPA is being used for impact analysis of rules

0 / 1 / 2

OPA substantive rules are primarily used to determine outcomes defined by Agency policy and/or legislation

T / F

OPA substantive rules need visibility by the business

T / F

OPA provides decision reports

T / F

OPA rules provide "temporal reasoning"

0 / 1 / 2

Substantive, procedural, and visibility rules are not combined

T / F

Traceability is provided from all substantive rules to source material

T / F

Substantive rules are in Natural Language

T / F

Rules are written to be read by non-OPA analysts

0 / 1 / 2

Production rules documents only contain operational rules

T / F

All OPA rulesets have a design document

T / F

OPA rules within a document are "on topic"

0 / 1 / 2

OPA only receives data originating from the rule consumer

0 / 1 / 2

OPA should determine outcomes for "I don't know" inferences

0 / 1 / 2

All Microsoft Word rule documents must have a TOC (Table of Contents)

T / F

Boolean attributes are never conclusions in Word tables

T / F

Rules should not go deeper than level 3

0 / 1 / 2

Excel is used when source material is in a table, to implement rate tables, or there are multiple conclusions from the same conditions

0 / 1 / 2

All attributes must be properly parsable and parsed by OPA

T / F

Production projects can be debugged via the OPA debugger

T / F

Projects redefine "the current date"

T / F

All substantive policy conclusions have unit test cases

T / F

Projects have regression test suites

T / F

Projects plan OPA upgrades once per quarter

T / F

List Items are turned into boolean attributes before using them as conditions

0 / 1 / 2

An ability to regression test with production data has been implemented

0 / 1 / 2

An OPA quality checklist is utilized

T / F

Public names are created for integration / mapping with other applications

0 / 1 / 2

Public names follow a naming guideline

T / F

Entities' identifying attributes are provided

T / F

Entities and relationships are only created when the rules require them

0 / 1 / 2

Rule text should follow Oracle guidelines for entities, relationships, and attributes

0 / 1 / 2

Design and rule documents should contain description of relevant entities and relationships

0 / 1 / 2

Data saved from OPA can be re-loaded into OPA

0 / 1 / 2

Only the initial rules to determine an outcome should avoid effective dates via temporal logic

0 / 1 / 2

Rate tables should be temporal in Excel

0 / 1 / 2

Rules should not be deleted after they are used in production

0 / 1 / 2

Interviews are created with Accessibility warning level of WCAG 2.0 AA or better

T / F

Interviews have goals that support relevance of collected attributes

T / F

All policy determinations are available as web services

T / F

OPA "Relevancy" is used for all screens and attribute collection

0 / 1 / 2

Policy determination rules are developed prior to developing interview screens

0 / 1 / 2

All entities, personal attributes, headings, and labels have name substitution

0 / 1 / 2

Attribute text should not be changed on screens

0 / 1 / 2

The small feedback I received indicates that people did not realize there are word docs and sometimes projects attached to these posts...

 

For instance, here is the content from the Shortest Interview Guidelines.  Comments are especially desired to improve Shortest Interview Guideline 7.

 

I would post the content from the quality guidelines, but it is really too long...

 

Intermediate Oracle Policy Automation – OPA Shortest Interview Guidelines

 

As an aid to quality, what I call the "OPA Shortest Interview Guidelines" help ensure that an interview of the shortest length still successfully obtains a primary determination.

 

Intent

  • Minimize the number of interview questions to get to a determination.
  • Minimize content that must be read to get to a determination.

Problem

You need to minimize the number of questions asked of a client. The result must be consistent across channels.

Interview questions have different levels of relevancy depending on the client.

Discussion

OPA should handle the complexity of determining question relevancy; this is the first precondition for developing the shortest interview.

OPA's definition of relevancy is as follows:

1. Rule 1: An attribute's value is relevant if changing it could cause the conclusion of the rule to change.

2. Rule 2: Where a set of values are not relevant individually (through Rule 1) but are equally responsible for the value of the conclusion, then all values in the set are considered relevant.

3. Rule 3: All values that could be relevant if unknown values became known, are considered relevant.
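Rule 1 can be illustrated with a toy relevancy check. The following is a hypothetical Python sketch, not OPA's actual relevancy engine (which also handles Rules 2 and 3): an attribute is relevant if changing its value could change the conclusion.

```python
# Hypothetical sketch of Rule 1 relevancy, not OPA's actual engine.

def conclusion(attrs):
    # Toy rule: eligible if over 65 OR (disabled AND a resident).
    return attrs["over 65"] or (attrs["disabled"] and attrs["resident"])

def is_relevant(attrs, name):
    """Relevant if flipping this attribute's value changes the conclusion."""
    flipped = dict(attrs)
    flipped[name] = not flipped[name]
    return conclusion(attrs) != conclusion(flipped)

known = {"over 65": True, "disabled": False, "resident": True}
# With "over 65" true, the OR is already satisfied, so the remaining
# attributes cannot change the conclusion and need not be asked.
print(is_relevant(known, "over 65"))   # True
print(is_relevant(known, "disabled"))  # False
```

This is why eliminating irrelevant questions is mechanical for OPA once the rules are written: relevancy falls out of the rule structure and the values already collected.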

 

Although elimination of irrelevant interview questions is critical to brevity, relevancy is not always true or false. For example, the use of limits in conjunction with greater than or less than comparisons can cause OPA to consider that an irrelevant attribute is relevant.


Some questions are often answered the same by all populations. If 95% of all interviewees are in-state, then asking for state residence up front may be less expeditious than asking about income.


Another technique is to combine questions to shorten an interview. Instead of asking three questions, an interview might combine them: "Are you over the age of 65, disabled, or blind?".


Shortest interview requires that Oracle's rule principles be followed:

1. Each conclusion must be stated only once.

2. Each rule must have a comprehensive statement of conditions.

3. Each component of the rule must be clearly identifiable.

4. Each condition must itself be logically complete to determine the value of the condition.

5. Every rule must be knowable.

6. The order in which information is presented should not change the outcome of the rules.

 

Because questions can have different impacts based on the user, the following guidelines have emerged as an aid to arrange base attributes and questions for the shortest interview.

General Guideline

OPA interviews should follow Oracle's whitepaper "Oracle Policy Automation Best Practice Guide for Policy Modelers", which provides guidelines on interview clarity that shorten the time to get to a primary determination.


The next guidelines augment the general guideline. They are specific to achieving the shortest average interview to get to a primary determination. These next guidelines may not be appropriate for other goals.


Shortest Interview Guideline 1

There should be a single top-level interview goal for the primary determination. No other goals should be defined in the interview. Having more than one top-level goal may cause OPA relevancy to ask additional questions that are not relevant to a primary determination.

Negative Example:

Suppose "the guideline 1 first goal is met" provides the primary determination…

In the above example, "the guideline 1 third condition is met" is an additional input asked that is not relevant to the primary determination. It will be asked and lengthen the interview… In this simple case the interview has been lengthened by 50%.

  

Shortest Interview Guideline 2

Until the primary determination is made, all attributes collected in the interview should be conditions relevant to the single top-level interview goal or should provide many default values to conditions for the single top-level goal.

- An interview asking questions that do not determine the primary goal will probably not create the shortest interview.

Negative Example:

Suppose "the guideline 2 goal is met" provides the primary determination…

If the collection screen is as follows, then Name and Address are not required. Interviewees may not want to give this information until they know whether they are "eligible" based on some determination.

 

Shortest Interview Guideline 3

Attributes where base data is not going to be kept or otherwise queried should be combined.

Turning 2 or 3 questions into 1 shorter question usually shortens an interview.

Example:

In the example rule above, notice the conclusion can be inferred by asking only one question instead of three questions.

 

Shortest Interview Guideline 4

Screens, booleans, and containers should show if "control collects relevant information".

This allows OPA to determine relevancy. We try to keep visibility rules (extra rule writing) to a minimum. If OPA can determine what to ask on its own, it both saves work and shortens interviews. [Note, as of November 2017: if two attributes can be linked by a shortcut rule, then they should generally not be collected on the same screen.]

Example:

For each question, set "Show if…"

Then, the questions will only show when needed, shortening the interview as such… In this case, answering yes to the first question removes the need for the second question.

Shortest Interview Guideline 5

Create any possible shortcut rules.

By definition, these shorten interviews. See "Capture implicit logic in rules" in the OPA help.

Example:

Note: Shortcut rules can be replaced with "DefaultWithUnknown()" rules in the latest versions of OPA. The primary reason to still use a shortcut rule appears to be maintaining natural language syntax. The verdict is still out on whether shortcut rules, interview default values, or default functions are better, given the warning on DefaultWithUnknown(): this function should be used with caution, since additional data can cause decisions to change. Default functions may provide more default consistency across channels and earlier determinations.

Shortest Interview Guideline 6

Every question should default to the most likely value or provide hint text.

- A question already answered provides for a shorter interview.

No example required.

Note: When creating defaults, several options are available in OPA, each with trade-offs; see the note under Guideline 5 regarding shortcut rules, interview default values, and default functions. Also note that a dynamic default can be updated live by evaluating a rule that uses data on the same screen.

 

Shortest Interview Guideline 7

The base attributes most responsible for the value of the top-level interview goal should be collected first.

The sooner the top-level goal is known, the fewer questions need to be asked, so the shorter the interview. This includes asking questions that help default future answers. It is sometimes necessary to ask a question whose sole purpose is to provide defaults for many other base attributes.

 

Questions most responsible for the value of the top-level goal can generally be identified as follows:

1.) The base attribute is the "fewest" levels deep.

2.) Other base attributes' default values or visibility depend upon the question

3.) If the base attribute conjunction is "OR", the answer distribution is expected to be most commonly answered in the positive.

4.) If the base attribute conjunction is "AND", the answer distribution is expected to be most commonly answered in the negative.

5.) The question is mandatory (cannot be left unknown or uncertain).

6.) The question is a boolean.
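As a rough illustration, these heuristics could be combined into a scoring function used to order questions. This is a hypothetical sketch: the attribute fields and weights below are invented for illustration and are not part of OPA.

```python
# Hypothetical ordering sketch: score base attributes against the six
# heuristics above and ask the highest-scoring questions first.
# Field names and weights are invented for illustration.

def question_score(attr):
    score = 0
    score += 10 - attr["depth"]            # 1) fewest levels deep
    score += 3 * attr["defaults_others"]   # 2) drives other defaults / visibility
    if attr["conjunction"] == "OR" and attr["usually_true"]:
        score += 2                         # 3) OR branch likely answered positively
    if attr["conjunction"] == "AND" and not attr["usually_true"]:
        score += 2                         # 4) AND branch likely answered negatively
    if attr["mandatory"]:
        score += 1                         # 5) cannot be left unknown
    if attr["boolean"]:
        score += 1                         # 6) quick true/false answer
    return score

attrs = [
    {"name": "income band", "depth": 3, "defaults_others": 0,
     "conjunction": "AND", "usually_true": True, "mandatory": False, "boolean": False},
    {"name": "is a state resident", "depth": 1, "defaults_others": 2,
     "conjunction": "AND", "usually_true": False, "mandatory": True, "boolean": True},
]
ordered = sorted(attrs, key=question_score, reverse=True)
print([a["name"] for a in ordered])  # resident question first
```

In practice the weights would be tuned against the interview-duration data discussed under Guideline 9, not fixed up front.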

 

Shortest Interview Guideline 8

Screens / questions of interest to the business but not the determination should be put after the questions for the primary determination.

As a rule, the business may have further information to collect depending on the determination made. For shortest interviews, this information is best put in screens after the determination. Contact information such as phone numbers and mailing addresses are examples that are asked last to shorten an interview. These attributes are generally free-form text.

 

Example:

Register new users and collect their billing / mailing addresses after they have been vetted by OPA.

 

Exception:

There is an obvious exception to this guideline. Collecting entity identifiers and sex to aid in asking unambiguous questions may shorten an interview per advice from the General Guideline.

 

Shortest Interview Guideline 9

Projects should pay additional attention to the time-spent-per-screen and interview-duration charts available on the OPA Hub since the August 2017 release of OPA. Use this data periodically to revise attribute collection.

 

A reasonable approach may be to monitor actual interviews, analyse results and keep trying changes that might lower the averages (perhaps within a time/cost limit). This process could be accelerated by using past data as tests against new interview tweaks. However, that approach won't guarantee an improvement unless the next set of data happens to be identical to the analysed set of data. In short, the best that can be reasonably achieved is to test certain assumptions and measure actual experience. As Matt Sevin, from Oracle says: “Past performance does not guarantee future performance, nor do the assumptions that seem to improve one interview necessarily imply similar results in another policy model.”

 

Example:

 

 

Shortest Interview Guideline 10

Maximize use of "hide" for all controls.

 

The less text that a user must read, the shorter the interview.

 

Negative examples:

Check that all parent controls do not have these settings:

Shortest Interview Guideline 11

Restrict answers (avoid non-granular answers). Rearrange attributes so that earlier attributes can restrict later attributes.

 

Reduce the granularity and abundance of answers and specifically avoid free-form answers. In general, have shorter interviews by asking fewer questions and minimizing the quantity of possible answers. While this may appear obvious, many business users forget that while interesting, detail is not always necessary. This is a primary reason why booleans are preferred. In many cases, numeric attributes can be converted to boolean by gaining knowledge from prior attributes.

 

Examples:

Assume a determination is dependent upon whether a client lives in NY. Instead of asking for State of residence and then checking if the State is NY, ask only whether the client resides in NY (true/false).

Assume financial aid is available for students over the age of 25 who make less than 20,000 a year. Don't ask for the student's specific salary or age; ask if the student is over 25 (true/false), then ask if the student makes less than 20,000 a year (true/false).
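The financial aid example amounts to two boolean questions feeding one conclusion. The following is a hypothetical sketch of that determination, not the guideline's OPA rules:

```python
# Hypothetical sketch of the financial-aid example: two true/false answers
# replace two free-form numeric questions.

def aid_available(over_25: bool, makes_under_20000: bool) -> bool:
    # Both conditions must hold for the determination.
    return over_25 and makes_under_20000

# The exact salary and age are never collected, yet the
# determination is fully made from the two boolean answers.
print(aid_available(True, True))   # True
print(aid_available(True, False))  # False
```

The same pattern generalizes: any numeric threshold test in the rules can usually be asked directly as a boolean, provided the base numeric value is not needed elsewhere.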

 

Shortest Interview Guideline 12

Provide dynamic default number of entity instances.

 

Use initial attributes to dynamically determine the number of required entity instances. Entity instances require more time and thought by the end user. Avoid having the end user specifically think about creation / deletion of entities.

 

Shortest Interview Checklist

Use the following checklist for guideline compliance. Scoring is utilized as a quality assessment to measure maturity of an OPA implementation for shortest interview.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check

Analysis

OPA interviews follow the whitepaper "Oracle Policy Automation Best Practice Guide for Policy Modelers" provided by Oracle.

0 / 1 / 2

There is a single top-level interview goal for the primary determination. No other goals are defined in the interview.

0 / 1 / 2

Until the primary determination is made, all attributes collected in the interview are conditions relevant to the single top-level interview goal or provide many default values to conditions for the single top-level goal.

0 / 1 / 2

Attributes where base data is not going to be kept or otherwise queried are combined.

0 / 1 / 2

Screens, booleans, and containers should show if "control collects relevant information".

0 / 1 / 2

Any possible shortcut rules have been created.

0 / 1 / 2

Every question defaults to the most likely value or provides hint text.

0 / 1 / 2

The base attributes most responsible for the value of the top-level interview goals are collected first.

0 / 1 / 2

Screens / questions of interest to the business but not the determination are put after the questions for the primary determination.

0 / 1 / 2

Projects pay additional attention to the time-spent-per-screen and interview-duration charts available on the OPA Hub since the August 2017 release of OPA. This data is used periodically to revise attribute collection.

0 / 1 / 2

Use of "hide" is maximized for all controls.

0 / 1 / 2

Answers are restricted to small sets and rearranged so that earlier attributes can restrict later attributes.

0 / 1 / 2

The default number of entity instances is provided dynamically.

0 / 1 / 2

 

 


Please see all the "disclaimers" from the prior post on Quality Guidelines.  I hate repeating myself.

 

This set of shortest interview guidelines is just a draft start.  It requires community feedback.  It has been created because I noticed a pattern where clients ask how to use OPA to shorten interviews (especially screening for eligibility.) 

 

It needs work, but is a start of guidance specific to that problem.  See the attached word document.  Provide constructive critique.  Take it or leave it. 

 

It would be nice to post a better document in another month or two that contained positive community feedback.

 


WARNING: This post has been retired and is retained here only for historical purposes.

Instead, use the more recent post: Oracle Policy Automation - Shared Policy Quality Guidelines

Unfortunately, we have had some past issues with OPA policy quality.  We have had rulesets that are almost unreadable and unmaintainable (looking like poorly written Java code), in spite of OPA being natural language.  Now we try to set expectations ahead of time, especially with vendors that do not specialize in OPA.  This is leading to more strict guidance on quality.

 

It can only be assumed that other Oracle Policy Automation projects around the world are having this problem.  In the spirit of providing one possible modifiable template for measuring quality, here are our draft quality guidelines and checklist for shared policy.  The attached draft Word document elaborates on this checklist.

 

A few disclaimers:

 

  • This checklist and attached guidelines are in draft form.
  • This checklist (or parts of it) isn't for everyone.
  • This checklist is not approved, connected, endorsed, etc, by Oracle.
  • This checklist has not been finalized by NY and is currently opinionated.
  • The purpose of the checklist was to provide a very high bar for shared OPA policy in a shared library, as opposed to general OPA usage.
  • The checklist and measure is considered "good enough" and "better than nothing" right now, as opposed to perfect. (Perfect is the enemy of good enough.)
  • Attempts have been made to harmonize this checklist with Oracle guidance, but if differences are found, it is best to follow Oracle guidance.  Checklists get outdated.

 

We are open to suggestions that improve OPA quality.  Please feel free to provide constructive comments below.

 

Checklist for Compliance

The following checklist is used for compliance with this guideline.  The only requirement for compliance is that all the mandatory requirements be met.  Scoring is utilized as a quality assessment to measure the maturity of an OPA implementation.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check

Analysis

OPA is being used for rule lifecycle management

T / F

Overarching policy outcomes are defined in OPA

T / F

Substantive OPA Policy Rules are reviewed by a lawyer and/or agency policy analyst

T / F

OPA is being used to assist in mining rules from source policy and/or legislation

T / F

OPA is being used for rule discovery and rule verification via analysis of existing data

0 / 1 / 2

OPA is being used to document attributes needed by an application in determining outcomes

0 / 1 / 2

OPA is being used for impact analysis of rules

0 / 1 / 2

OPA production rules are primarily used to determine outcomes defined by Agency policy and/or legislation

T / F

OPA production rules need visibility by the business

T / F

OPA production usage provides decision reports

T / F

OPA production rules provide "temporal reasoning"

0 / 1 / 2

Substantive, procedural, and visibility rules are properly separated

T / F

Traceability is provided from all substantive rules to source material

T / F

Substantive rules are in Natural Language

T / F

Rules are written to be read by non-OPA analysts

0 / 1 / 2

Production rules documents only contain operational rules

T / F

All OPA rulesets have a design document

T / F

OPA rules within a document are "on topic"

0 / 1 / 2

OPA only receives data originating from the rule consumer

0 / 1 / 2

OPA should determine outcomes for "I don't know" inferences

0 / 1 / 2

Each ruleset is translated into a language other than English

0 / 1 / 2

All Microsoft Word rule documents must have a TOC (Table of Contents)

T / F

Boolean attributes are never conclusions in Word tables

T / F

Rules should not go deeper than level 3

0 / 1 / 2

Declarations are put in the first Excel worksheet and rules in subsequent sheets

T / F

Excel is used when source material is in a table, to implement rate tables, or there are multiple conclusions from the same conditions

0 / 1 / 2

All attributes must be properly parsable and parsed by OPA

T / F

Production projects can be debugged via the OPA debugger

T / F

Projects redefine "the current date"

T / F

There is no sequencing among policy rules

T / F

All substantive policy conclusions have unit test cases

T / F

Projects plan OPA upgrades once per quarter

T / F

List Items are turned into boolean attributes before using them as conditions

0 / 1 / 2

An ability to regression test with production data has been implemented

0 / 1 / 2

An OPA quality checklist is utilized

0 / 1 / 2

Public names are created where possible

T / F

Public names follow a naming guideline

T / F

Entities' identifying attributes are provided

T / F

Entities and relationships are only created when the rules require them for clarity in dealing with repeating attributes

0 / 1 / 2

Rule text should follow Oracle guidelines for entities, relationships, and attributes

0 / 1 / 2

Design and rule documents should contain description of relevant entities and relationships

0 / 1 / 2

Data saved from OPA can be re-loaded into OPA

0 / 1 / 2

Only the initial rules to determine an outcome should avoid effective dates via temporal logic

0 / 1 / 2

Rate tables should be temporal in Excel

0 / 1 / 2

Rules should not be deleted after they are used in production

0 / 1 / 2

Interviews are created with Accessibility warning level of WCAG 2.0 AA or better

T / F

Interviews have goals that support relevance of collected attributes

T / F

All determinations (including those with interview screens) are available as web services

T / F

OPA "Relevancy" is used for all screens and attribute collection

0 / 1 / 2

Policy determination rules are developed prior to developing interview screens

0 / 1 / 2

All entities, personal attributes, headings, and labels have name substitution

0 / 1 / 2

Attribute text should not be changed on screens

0 / 1 / 2