Paul Fowler

OPA Integrations Map Updated

Posted: Paul Fowler 23.07.2018

Attached is an updated map.  See the OPA Architecture Map below for more information.

 

OPA Map 1.1.jpg

Paul Fowler

OPA on Docker

Posted: Paul Fowler 19.03.2018

I want to give a shout-out to Brandon Belcher who posted the following in OPA Forums:

 

OPA on Docker

 

From Brandon:

 

I was reviewing some work I started a while back containerizing OPA, and decided it was doing little good sitting in a dusty corner of my hard drive.  Hopefully someone finds these repos useful!

 

Docker OPA Hub

Weblogic Configuration

This runs two containers from official images. It installs OPA after the containers have started. It is useful for testing the OPA installer on Oracle supported deployments (Oracle JRE, Weblogic, and MySQL).

 

Tomcat Configuration

This is a completely unsupported configuration (OpenJDK, Tomcat, and MySQL) but is much faster since it builds streamlined images with OPA pre-installed. This can be useful for developers who want a local Hub install but with a smaller footprint than a VM.
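As an illustrative sketch only (image tags, ports, and names below are my placeholders, not the contents of Brandon's actual repos), a two-container Tomcat / MySQL composition might look like:

```yaml
# Hypothetical docker-compose sketch; both images are official Docker Hub images.
version: "3"
services:
  db:
    image: mysql:5.7              # official MySQL image
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: opahub      # placeholder schema name
  hub:
    image: tomcat:9               # official Tomcat image (OpenJDK-based)
    ports:
      - "8080:8080"               # OPA Hub web interface
    depends_on:
      - db
```

In a pre-installed-image variant like Brandon describes, the `hub` service would instead build from a Dockerfile that layers OPA onto the Tomcat base image.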

 

Vagrant OPA Hub

And finally if you want a quick and easy OPA Hub but prefer virtual machines, here's a minimal-effort script for creating an OPA Hub on VirtualBox.  This, like the Tomcat Docker config, is not supported, but is quick and easy for dev use.

 

Feedback welcome.  Enjoy!

 

-Brandon

Disclaimer: The following is my opinion, and I am open to revision.

 

Why not always use AI by itself?

I get asked this question on occasion.  Some bright person will ask, "why not have AI learn the legacy system, learn the rules, and take it from there?"

 

  1. Primarily - AI decisions are not easily auditable. (There are ways, but it isn’t a sweet spot.)
  2. AI provides a probabilistic capability that is not always in conformance with law or intent.
    • AI, for instance, may unknowingly violate discrimination laws.
      • AI doesn’t care that federal law may protect from discrimination based on age, race/color, national origin, creed, sex, disability, and predisposing genetic characteristics.
      • AI doesn’t care that state law may protect from discrimination based on sexual orientation, military status, pregnancy-related condition, predisposing genetic characteristics, familial status, marital status, domestic violence victim status, prior arrest record, previous conviction record, and gender identity.
    • If base data shows a bias, AI may continue and may even promote that bias going forward.
  3. AI isn’t programmed, it is “trained,” which makes exact modifications more challenging.  (There are ways, but it isn’t the sweet spot of AI.)

 

Why not always use OPA by itself?

  1. OPA may not adapt quickly to real-time changes in the environment that may impact a determination.  (There are ways, but it isn’t a sweet spot.)
  2. OPA doesn’t have built-in capabilities for real-time policy impact analysis through large, ever-changing datasets.
  3. OPA’s what-if capability is based on having the rules already defined and making changes to existing rules.  Sometimes the rules are not known up-front.

 

AI and OPA together?

  1. Together they can be a good fit for events and big data problems combined with policy.
  2. Both are trying to make determinations.
    • OPA has traditionally had weaknesses with real-time event and probabilistic inputs to its inferencing - which AI is good at.
    • Other rules engines, such as Drools, have historically been a better fit for "event driven" rules - AI can fill this gap for OPA.
    • OPA and chatbots don't easily integrate out of the box - AI and chatbots are made for each other.
    • OPA won't analyze photographs, voice, video, files, etc...  - AI will analyze binary evidence itself.
    • OPA doesn't mine data for rules and correlation - AI can mine data and apps.
    • AI has issues with inferring results from policy and/or human intent - OPA is great at inference from policy and human intent.
    • AI by itself doesn't always put a "human in the loop" for assistance and is not easily understood by business users - OPA is easily understood by business users.
    • AI conversations are not always meaningful - OPA conversations are meaningful.
  3. There appears to be a symbiotic relationship, where each product's weaknesses are offset by the other product's strengths.

 

Neither product takes long to set up.  This isn’t 3+ months of infrastructure deployment if the cloud is utilized…  I assert that the hardest part is getting access to the big data and source policy, not integrating or using the tools.

 

Some currently viable OPA / AI Scenarios

Case 1 – Legacy Rule Discovery

Case 2 – Anomaly detection as part of an interview

Case 3 – Decisions Augmented by Real-Time Big Data Event Analysis

  •   Classifications and Predictions
  •   Feature Extraction
  •   Diagnosis and Troubleshooting

 

(I was going to add a 4th case on AI chatbot with OPA to provide human in the loop, but the architecture is evolving too quickly right now.  I may come back to that in the future.)

 

Case 1 – Legacy Rule Discovery

Assertions:

  • Optimal business determinations take input from at least two groups:
    • Policy / Legislative Analysts
    • Data Scientists
  • OPA / AI provides an analysis intersection for these two groups.
  • “Legacy” can mean migration from a historical system OR a legacy can be a running system where rules need optimization…
  • It is important to understand that data can create its own rules in a running system and these rules may need to be exploited by the business for gain.

Case 1 process:

 

Case 1 is predicated on having legacy data or a legacy system and some knowledge of the existing system’s inputs / outputs…

  • First pass uses OPA to implement rules as “expected”.
  • A process using AI follows to refine the results:
    • OPA what-if analysis is used to determine OPA result accuracy from existing data.
    • AI finds correlations and rules to the mismatched results from the existing data.
      • AI determines base data that impacted the outcomes
      • AI determines relationships of base data to outcomes
    • OPA rules are revised with new findings from AI as verified by the business.
  • The process is repeated until OPA rule accuracy reaches acceptable levels.
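The mismatch-analysis step in the loop above can be sketched outside OPA. This is a minimal, hypothetical illustration (the rule, attribute names, and records are mine, not an actual OPA what-if API): compare the first-pass rule's outcomes to recorded legacy outcomes, then rank which base attributes most often co-occur with mismatches as a starting point for the AI correlation step.

```python
from collections import Counter

def rank_mismatch_attributes(records, opa_rule):
    """Count mismatches between the rule and legacy outcomes, and rank
    boolean base attributes by how often they are True in mismatched records."""
    counts, mismatches = Counter(), 0
    for attrs, legacy_outcome in records:
        if opa_rule(attrs) != legacy_outcome:
            mismatches += 1
            for name, value in attrs.items():
                if value:
                    counts[name] += 1
    return mismatches, counts.most_common()

# Hypothetical first-pass rule: eligible if resident and low income.
rule = lambda a: a["resident"] and a["low_income"]

# Hypothetical legacy records; the third disagrees with the rule
# (legacy granted eligibility to a veteran who is not low income).
records = [
    ({"resident": True, "low_income": True, "veteran": False}, True),
    ({"resident": False, "low_income": True, "veteran": False}, False),
    ({"resident": True, "low_income": False, "veteran": True}, True),
]
misses, ranked = rank_mismatch_attributes(records, rule)
```

Here the ranking surfaces "veteran" as a candidate condition for the business to verify and fold back into the OPA rules.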

 

Note: If the legacy system is available, AI can be utilized to run the legacy system repeatedly and more accurately assist with base data impact and correlation.

 

Final note on Case 1: In my experience, the need for both OPA and AI usually arises when a legacy system is being decommissioned by a different organization than the one now responsible for the business application.  Replacement of 30+ year old legacy systems may also rise to this level of need.

 

Case 2 – Anomaly detection as part of an interview

Assertions:

  • Optimal business determinations will detect anomalies early.
  • AI provides a near real-time anomaly determination based on examining the data.
  • Dealing with anomalies is a policy question and therefore a good fit for OPA.

 

The OPA / AI intersection is in providing score data from analytics to OPA.

 

Case 2 is predicated on having OPA persist attributes to a big data store utilized by AI.

  1. OPA provides new attributes to an AI system that looks for anomalies.
  2. If AI finds anomalies, such as irregular payments, irregular household compositions, etc., OPA is notified via the setting of a base attribute from the AI system.
  3. OPA can either notify the interviewee for verification, notify the interviewing system for follow-up, or simply record this result in an audit field for later analysis.  The rules on what to do with anomalies are OPA human-driven rules.
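As a hypothetical sketch of step 2 (the function name, data, and threshold are mine, not an Oracle or OPA API): flag a payment as irregular when it falls more than three standard deviations from the historical mean, and expose the result as a boolean base attribute for OPA's anomaly-handling rules.

```python
from statistics import mean, stdev

def payment_is_irregular(history, new_payment, threshold=3.0):
    """Return True when new_payment is more than `threshold`
    standard deviations away from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_payment != mu
    return abs(new_payment - mu) / sigma > threshold

history = [200.0, 205.0, 198.0, 202.0, 201.0]
# The AI side would set an OPA base attribute such as
# "the payment is anomalous" from this result.
flag = payment_is_irregular(history, 950.0)
```

A real anomaly detector would be trained on far richer features; the point is only that the handoff to OPA is a single boolean attribute.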

 

Case 3 – Decisions Augmented by Real-Time Big Data Event Analysis

Assertions:

  • AI can classify / predict probable outcomes when OPA collects incomplete data and feed that back to OPA.  This includes predicting additional programs the client may be interested in.
  • AI can extract features to enhance interviews. For instance AI can evaluate images and large text provided in an interview.
  • AI can diagnose data provided in an interview. AI can help troubleshoot. (Possibly with chatbots.)

 

This is a simpler process than it looks.

  1. AI is put into a real-time feedback loop with OPA.
  2. AI draws from a big-data store to make determinations about current OPA data.
  3. After OPA is done, the OPA data is added to the big data store.

 

What are the needed OPA / AI integrations for the above cases?

Currently, the best / strongest methods I have found are machine learning tools that read database data and write that data back to the database.

It is then OPA's responsibility to provide input to AI through a database, or to query the database for the results.

This type of integration isn't very hard, but it also isn't "business user friendly".  The integrations need to be more obvious during business and data analysis.
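The shared-database handshake described above can be sketched with SQLite (the table and column names are hypothetical): the machine learning side writes its score to a shared table, and the OPA side later queries that row to load it as an input attribute.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ai_scores (
    case_id TEXT PRIMARY KEY,
    anomaly_score REAL)""")

# ML side: write its determination back to the shared database.
conn.execute("INSERT INTO ai_scores VALUES (?, ?)", ("case-42", 0.93))
conn.commit()

# OPA side: query the result (e.g., via a connector or data mapping)
# and treat it as an input attribute to the policy model.
row = conn.execute(
    "SELECT anomaly_score FROM ai_scores WHERE case_id = ?",
    ("case-42",)).fetchone()
score = row[0] if row else None
```

This is exactly the kind of plumbing that works but is not "business user friendly" - the mapping is invisible during business and data analysis.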

 

One of the easiest integrations of AI and OPA is in big data and statistics / probabilities.

I put a related OPA probabilities blog post here:  Logic Puzzle - OPA Determination with Probability

The statistics in the probability blog post example were typed into Excel.  But, a much better source for the statistics would be an AI engine.  That would allow real-time statistics to be used with OPA policy to adjust business outcomes.

 

In the future, we will know that OPA / AI has been well integrated when I can zip up an OPA file that demonstrates both together.  At the moment, I can't zip up the AI material.  (Another reason AI is not ready for prime time without OPA.)

 

I hope for more research into the following:

  • Stronger Service Cloud / Siebel integration with AI.
  • Generic OPA connectors to databases
    • Assertion: AI tools are built to retrieve data from, and provide data to, databases.
    • Assertion: Shared data is currently the best mechanism to interact with AI.
    • Examples (my current preferences):
      • A solution such as Mantis should be utilized or built into OPA by Oracle.
      • OPA should have a solution to dynamically create a data store for attributes tagged by the modeler.
  • AI integration during policy modeling by the analysts.

 

My final hope is that the above blog post creates some discussion / thinking in the community.  I assert that AI and OPA are an excellent fit to combine machine and human determinations.

 

Comment if you have additional thoughts.

A probability problem that needs frequent solving comes up in policy:

 

Do we immediately send police to a residence of suspected activity based on the probability that the situation will escalate?  Will a person likely skip bail?  How likely is a benefit to be applicable after only a preliminary screening?

 

The challenge is that sometimes probability is involved.  We can't immediately send police to every event all the time.  We must sometimes be selective, and our policy needs to allow for probability in some parts of our determinations.

 

Puzzle #5

 

The local department of criminal justice has provided the following statistics for determining a person’s probability of missing court (these statistics are made-up).  The overall probability of missing a court appointment is 45%.  Of those who miss court, 40% are in the local community, 80% had an outstanding warrant, and 10% had a job.

 

Allowing for uncertainty and collecting 1) whether a person is a member of the community, 2) has an outstanding warrant, and/or 3) has a job, write an OPA policy that implements the following:

As an FYI, my collection screen looks like this:

 

 

The solution to this problem is attached with a more "generic" structure for solving problems of "likelihood".

 

The solution is simple in that a one-page Word document sets up the math.  A spreadsheet contains the list of question attributes used in screens to refine the final probability.  Questions that refine the probability can be added to the spreadsheet.  You can have 2-3 questions, as in this puzzle, or you can have several hundred.
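The math the Word document sets up is Bayes' theorem; a minimal sketch in odds form follows. Note the puzzle only states likelihoods for people who miss court, so the "given shows up" rates below are purely hypothetical numbers I invented for illustration.

```python
def bayes_posterior(prior, evidence):
    """Update a prior probability with independent evidence items
    (naive Bayes in odds form: multiply prior odds by likelihood ratios)."""
    odds = prior / (1 - prior)
    for p_given_miss, p_given_show, observed in evidence:
        if observed:
            odds *= p_given_miss / p_given_show
        else:
            odds *= (1 - p_given_miss) / (1 - p_given_show)
    return odds / (1 + odds)

# From the puzzle: P(miss) = 0.45, and among those who miss:
# 40% are in the local community, 80% had a warrant, 10% had a job.
# The second number in each tuple (rate given the person shows up)
# is invented for this sketch.
evidence = [
    (0.40, 0.60, True),   # is in the local community
    (0.80, 0.20, True),   # has an outstanding warrant
    (0.10, 0.50, False),  # does not have a job
]
p_miss = bayes_posterior(0.45, evidence)  # roughly 0.80 for this person
```

Each question added to the spreadsheet corresponds to one more tuple in the evidence list, which is why the approach scales from three questions to several hundred.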

 

A theoretical introduction to the solution can be found here: https://www.askiitians.com/iit-jee-algebra/probability/bayes-theorem.aspx

There are also many good YouTube videos on Bayes reasoning: https://www.google.com/search?q=youtube+bayes+theorem&oq=youtube+bayes+theorem&aqs=chrome..69i57j69i64.3865j0j7&sourceid…

 

I will have a future blog post where I start to discuss intersection with AI and specifically with Machine Learning.

Paul Fowler

OPA Architecture Map 1.0

Posted: Paul Fowler 10.01.2018

Since this is an architecture blog, I should post some architecture assistance, huh?

 

I attached a PDF with embedded links.  This OPA architecture map is meant to only be a starting point for architects.

 

Every element in the attached documents has URL links to documentation so that the elements can be studied.

 

I figure that after an hour with this map, most experienced IT architects will understand the logical OPA technical architecture concepts.

 

Disclaimers:

  • I don't work for Oracle and this is not authoritative.
  • I recognize the architecture view is not a standard architectural view by any stretch.
  • Components and Interactions shown are not necessarily everything available.
  • This map was created in a morning, so feel free to add comments to this blog about mistakes I made.

 

OPA Map 1.0.jpg

Paul Fowler

Selecting Pseudo-Random in OPA + an OPA game...

Posted: Paul Fowler 06.01.2018

Today's post is a bit unique and should leave some OPA developers at Oracle scratching their heads and asking "why?"  My answer - "why not?"

 

Have you ever wanted to select questions at random to ask during an OPA interview, in order to test knowledge?  Can you think of any other reason to have OPA generate Pseudo-Random numbers?  I did - I quickly wrote a game using OPA.

 

Sure, you could use RuleScript, an API, or JavaScript to get random numbers, but I chose to implement the common Linear Congruential Generator for random numbers using OPA itself.  I use the same routine and parameters as the Java libraries, POSIX, etc...  So, I think this is a pretty good random number generator.

 

My "Random Numbers" project is attached, along with the classic game of "Bagels" from the 1978 issue of ATARI magazine.  I implemented the game using OPA and the random number generator so that example random number generator usage can be explored.  Plus, it's a game.

 

The random numbers project from this blog is easily included in other projects as an "inclusion". 

 

There is a "Random Number Initialization.docx" rule file that has the following parameter settings for the random number generator.  These example settings would generate 100 random values between 1-10,000 inclusive.  It puts the values into "the random number value" in the random number entity.  You are expected to change these settings for your project.  The other rule file contains my random number generator logic and shouldn't be touched unless you know what you are doing. 

 

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Random number settings.  Separate from the rules, in case I want to change the random number engine…

 

These next 4 values can be overridden to provide more / fewer random numbers and a different range… The rndm seed defaults to a value from the date / time.

the seed = the rndm time seed

the rndm number count = 100

the rndm range low = 1

the rndm range high = 10000

 

If the following statement is "the random values are unique", then the rndm number count must be less than the random number range.  This generates non-repeating numbers.

 

This next statement must be either:

  • the random values are unique
  • the random values are not unique

the random values are unique
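For reference, the generator these settings drive can be sketched outside OPA. This is a hypothetical Python re-implementation using the same LCG parameters as Java's java.util.Random (multiplier 0x5DEECE66D, increment 0xB, modulus 2^48), configured like the defaults above: 100 unique values between 1 and 10,000.

```python
M = 1 << 48       # modulus 2^48, as in java.util.Random
A = 0x5DEECE66D   # multiplier
C = 0xB           # increment

def lcg(seed):
    """Yield pseudo-random values from the linear congruential
    generator, using the high bits of the 48-bit state."""
    state = seed % M
    while True:
        state = (A * state + C) % M
        yield state >> 16

def unique_randoms(seed, count, low, high):
    """Generate `count` non-repeating values in [low, high],
    mirroring the "the random values are unique" setting."""
    if count > high - low + 1:
        raise ValueError("count must not exceed the size of the range")
    values = []
    for raw in lcg(seed):
        value = low + raw % (high - low + 1)
        if value not in values:   # discard repeats until count is reached
            values.append(value)
            if len(values) == count:
                return values

values = unique_randoms(seed=12345, count=100, low=1, high=10000)
```

The sequence is fully determined by the seed, which is exactly why the OPA version defaults the seed from the date / time.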

Disclaimer: This next puzzle wasn't put up in the Oracle Forums, because it doesn't really show off OPA.  Indeed, the puzzle takes away strong OPA capabilities and then tries to see if people can still use OPA to solve the problem.  On the bright side, this is still a puzzle for members of the OPA community who like to solve puzzles in their free time.  (Yes, some of us enjoy keeping up our skills that way.)

 

The broken keyboard puzzle. I started to invent a story, but the story was too juvenile.  So instead, here is the puzzle:

 

attr d = (attr a + attr b + 5) * attr c

 

That is a very simple rule, as long as you write it with numbers and symbols such as 5, =, (, +, *, etc…

 

To solve this puzzle, write the equivalent of that very simple rule in OPA without using any number or symbol characters such as 0-9,=,+,*,(,),[,],-,”,’, etc…  You can’t use “5”, can’t use “=”, can’t use “(“ or “)”, can’t use “+” or “*”… and so on…

 

Inputs attr a, attr b, and attr c are guaranteed to all be integers greater than or equal to 1 and less than or equal to 100.

 

Trust me, that last guarantee simplifies the puzzle and adds a few more solution options…  However, this still isn’t an obvious puzzle to solve. 

 

A solution project is attached - it has some OPA tricks in it.

This puzzle is an intermediate to advanced puzzle.  It isn't long, but it takes a lot of thought.  Stephen French was the first to solve it in the Oracle Community.  Kudos.

 

 

Puzzle solving requirement:  OPA must solve the puzzle immediately when Debug is pressed.  When Debug is hit, OPA must decide the correct solution using rules.  In other words, OPA must be fed conditions to come up with the correct solution; OPA cannot be directly fed a single correct answer, except of course feeding OPA data and rules based on the puzzle itself.  [We know the names of the three contestants, so it is OK to provide OPA those names.]

 

How would you write rules to have OPA solve the following puzzle immediately when Debug is pressed:

 

Isaac and Albert were excitedly describing the result of the Third Annual International Science Fair Extravaganza in Sweden. There were three contestants, Louis, Rene, and Johannes. Isaac reported that Louis won the fair, while Rene came in second. Albert, on the other hand, reported that Johannes won the fair, while Louis came in second.

 

In fact, neither Isaac nor Albert had given a correct report of the results of the science fair. Each of them had given one correct statement and one false statement. What was the actual placing of the three contestants?

 

Care to try using rules in OPA to determine the correct placing of the three contestants?
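Outside OPA, the same exhaustive reasoning can be sketched in a few lines of Python: enumerate every possible placing and keep the one where each reporter made exactly one correct statement.

```python
from itertools import permutations

contestants = ["Louis", "Rene", "Johannes"]

solutions = []
for first, second, third in permutations(contestants):
    # Isaac reported: "Louis won" and "Rene came in second".
    isaac_correct = (first == "Louis") + (second == "Rene")
    # Albert reported: "Johannes won" and "Louis came in second".
    albert_correct = (first == "Johannes") + (second == "Louis")
    # Each reporter gave exactly one correct and one false statement.
    if isaac_correct == 1 and albert_correct == 1:
        solutions.append((first, second, third))
```

Only one placing survives the constraints: Johannes first, Rene second, Louis third. OPA rules can encode the same "exactly one correct statement" conditions declaratively.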

 

A project with the solution is attached.

 

The OPA debug screen should look similar to this for a solution:

Guidelines have been updated 12/18/2017 based on a lot of feedback.

 

All the disclaimers still apply.  These are sample guidelines used by a real organization.  In most instances, these are meant to serve as a template to be modified.  See the attached document.

 

Appendix F: Checklist for Compliance

The following checklist is used for assessing compliance with this guideline.  The only requirement for compliance is that all the mandatory requirements be met.  Scoring is utilized as a quality assessment to measure the maturity of an OPA implementation.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check - Analysis

  • OPA is being used for rule lifecycle management - T / F
  • Overarching policy outcomes are defined in OPA - T / F
  • Substantive OPA Policy Rules are reviewed by a lawyer and/or agency policy analyst - T / F
  • OPA is being used to assist in mining rules from source policy and/or legislation - T / F
  • The OPA project has separate roles for lawyer / policy analyst, OPA policy modeler, OPA interview designer, and OPA technical integrator / developer - 0 / 1 / 2
  • OPA is being used for rule discovery and rule verification via analysis of existing data - 0 / 1 / 2
  • OPA is being used to discover attributes needed by an application in determining outcomes - 0 / 1 / 2
  • OPA is being used for impact analysis of rules - 0 / 1 / 2
  • OPA substantive rules are primarily used to determine outcomes defined by Agency policy and/or legislation - T / F
  • OPA substantive rules need visibility by the business - T / F
  • OPA provides decision reports - T / F
  • OPA rules provide "temporal reasoning" - 0 / 1 / 2
  • Substantive, procedural, and visibility rules are not combined - T / F
  • Traceability is provided from all substantive rules to source material - T / F
  • Substantive rules are in Natural Language - T / F
  • Rules are written to be read by non-OPA analysts - 0 / 1 / 2
  • Production rules documents only contain operational rules - T / F
  • All OPA rulesets have a design document - T / F
  • OPA rules within a document are "on topic" - 0 / 1 / 2
  • OPA only receives data originating from the rule consumer - 0 / 1 / 2
  • OPA should determine outcomes for "I don't know" inferences - 0 / 1 / 2
  • All Microsoft Word rule documents must have a TOC (Table of Contents) - T / F
  • Boolean attributes are never conclusions in Word tables - T / F
  • Rules should not go deeper than level 3 - 0 / 1 / 2
  • Excel is used when source material is in a table, to implement rate tables, or when there are multiple conclusions from the same conditions - 0 / 1 / 2
  • All attributes must be properly parsable and parsed by OPA - T / F
  • Production projects can be debugged via the OPA debugger - T / F
  • Projects redefine "the current date" - T / F
  • All substantive policy conclusions have unit test cases - T / F
  • Projects have regression test suites - T / F
  • Projects plan OPA upgrades once per quarter - T / F
  • List items are turned into boolean attributes before using them as conditions - 0 / 1 / 2
  • An ability to regression test with production data has been implemented - 0 / 1 / 2
  • An OPA quality checklist is utilized - T / F
  • Public names are created for integration / mapping with other applications - 0 / 1 / 2
  • Public names follow a naming guideline - T / F
  • Entities' identifying attributes are provided - T / F
  • Entities and relationships are only created when the rules require them - 0 / 1 / 2
  • Rule text should follow Oracle guidelines for entities, relationships, and attributes - 0 / 1 / 2
  • Design and rule documents should contain descriptions of relevant entities and relationships - 0 / 1 / 2
  • Data saved from OPA can be re-loaded into OPA - 0 / 1 / 2
  • Only the initial rules to determine an outcome should avoid effective dates via temporal logic - 0 / 1 / 2
  • Rate tables should be temporal in Excel - 0 / 1 / 2
  • Rules should not be deleted after they are used in production - 0 / 1 / 2
  • Interviews are created with an Accessibility warning level of WCAG 2.0 AA or better - T / F
  • Interviews have goals that support relevance of collected attributes - T / F
  • All policy determinations are available as web services - T / F
  • OPA "Relevancy" is used for all screens and attribute collection - 0 / 1 / 2
  • Policy determination rules are developed prior to developing interview screens - 0 / 1 / 2
  • All entities, personal attributes, headings, and labels have name substitution - 0 / 1 / 2
  • Attribute text should not be changed on screens - 0 / 1 / 2

What little feedback I received indicates that people did not realize there are Word docs and sometimes projects attached to these posts...

 

For instance, here is the content from the Shortest Interview Guidelines.  Comments are especially desired to improve Shortest Interview Guideline 7.

 

I would post the content from the quality guidelines, but it is really too long...

 

Intermediate Oracle Policy Automation – OPA Shortest Interview Guidelines

 

As an aid to quality, what I call the “OPA Shortest Interview Guidelines” help ensure that the shortest length of interview successfully obtains a primary determination.

 

Intent

  • Minimize the number of interview questions to get to a determination.
  • Minimize content that must be read to get to a determination.

Problem

You need to minimize the number of questions asked of a client. The result must be consistent across channels.

Interview questions have different levels of relevancy depending on the client.

Discussion

OPA should handle the complexity of determining question relevancy, and this is a first condition toward developing the shortest interview.

OPA's definition of relevancy is as follows:

1. Rule 1: An attribute's value is relevant if changing it could cause the conclusion of the rule to change.

2. Rule 2: Where a set of values are not relevant individually (through Rule 1) but are equally responsible for the value of the conclusion, then all values in the set are considered relevant.

3. Rule 3: All values that could be relevant if unknown values became known, are considered relevant.
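Rule 1 can be illustrated with a tiny sketch outside OPA (the rule and attribute names below are hypothetical): a boolean input is relevant when flipping it, with everything else held fixed, could change the conclusion.

```python
def is_relevant(rule, values, name):
    """Rule 1 sketch: a boolean input is relevant if flipping it
    (holding everything else fixed) changes the rule's conclusion."""
    flipped = dict(values, **{name: not values[name]})
    return rule(values) != rule(flipped)

# Hypothetical eligibility rule: over 65 OR disabled.
rule = lambda v: v["over_65"] or v["disabled"]

# Once "over 65" is known to be true, the disability answer can no
# longer change the conclusion, so that question need not be asked.
answers = {"over_65": True, "disabled": False}
```

This is exactly the mechanism that lets OPA drop questions from an interview once earlier answers have settled the outcome.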

 

Although elimination of irrelevant interview questions is critical to brevity, relevancy is not always true or false. For example, the use of limits in conjunction with greater than or less than comparisons can cause OPA to consider that an irrelevant attribute is relevant.


Some questions are often answered the same by all populations. If 95% of all interviewees are in-state, then asking for state residence up front may be less expeditious than asking about income.


Another technique for abbreviation is to combine questions to shorten an interview. Instead of asking three questions, an interview might combine them: "Are you over the age of 65, disabled, or blind?".


Shortest interview requires that Oracle’s rule principles be followed:

1. Each conclusion must be stated only once.

2. Each rule must have a comprehensive statement of conditions.

3. Each component of the rule must be clearly identifiable.

4. Each condition must itself be logically complete to determine the value of the condition.

5. Every rule must be knowable.

6. The order in which information is presented should not change the outcome of the rules.

 

Because questions can have different impacts based on the user, the following guidelines have emerged as an aid to arrange base attributes and questions for the shortest interview.

General Guideline

OPA interviews should follow Oracle’s whitepaper "Oracle Policy Automation Best Practice Guide for Policy Modelers", which provides guidelines on interview clarity that shorten the time to get to a primary determination.


The next guidelines augment the general guideline. They are specific to achieving the shortest average interview to get to a primary determination. These next guidelines may not be appropriate for other goals.


Shortest Interview Guideline 1

There should be a single top-level interview goal for the primary determination. No other goals should be defined in the interview. Having more than one top-level goal may cause OPA relevancy to ask additional questions that are not relevant to a primary determination.

Negative Example:

Suppose "the guideline 1 first goal is met" provides the primary determination…

In the above example, "the guideline 1 third condition is met" is an additional input asked that is not relevant to the primary determination. It will be asked and lengthen the interview… In this simple case the interview has been lengthened by 50%.

  

Shortest Interview Guideline 2

Until the primary determination is made, all attributes collected in the interview should be conditions relevant to the single top-level interview goal or should provide many default values to conditions for the single top-level goal.

- An interview asking questions that do not determine the primary goal will probably not create the shortest interview.

Negative Example:

Suppose "the guideline 2 goal is met" provides the primary determination…

If the collection screen is as follows, then Name and Address are not required. Interviewees may not want to give this information until they know whether they are "eligible" based on some determination.

 

Shortest Interview Guideline 3

Attributes where base data is not going to be kept or otherwise queried should be combined.

Turning 2 or 3 questions into 1 shorter question usually shortens an interview.

Example:

In the example rule above, notice the conclusion can be inferred by asking only one question instead of three questions.

 

Shortest Interview Guideline 4

Screens, booleans, and containers should show if "control collects relevant information".

This allows OPA to determine relevancy. We try to keep visibility rules (extra rule writing) to a minimum. If OPA can determine what to ask on its own, it saves work and shortens interviews. [Note: as of November 2017, if two attributes can be linked by a shortcut rule, then they should generally not be collected on the same screen.]

Example:

For each question, set "Show if…"

Then, the questions will only show when needed, shortening the interview as such… In this case, answering yes to the first question removes the need for the second question.

Shortest Interview Guideline 5

Create any possible shortcut rules.

By definition, these shorten interviews. See "Capture implicit logic in rules" in the OPA help.

Example:

Note: Shortcut rules can be replaced with "DefaultWithUnknown()" rules in the latest versions of OPA. The primary reason for using a shortcut rule would appear to be to maintain natural language syntax. The verdict is still out whether shortcut rules, interview default values, or default functions are better; due to the warning on DefaultWithUnknown(): This function should be used with caution, since additional data can cause decisions to change. Default functions may provide more default consistency across channels and earlier determinations.

Shortest Interview Guideline 6

Every question should default to the most likely value or provide hint text.

- A question already answered provides for a shorter interview.

No example required.

Note: When creating defaults, several options are available in OPA (with trade-offs); see the discussion of shortcut rules, interview default values, and default functions under Guideline 5. Also note that a dynamic default can be updated live by evaluating a rule that uses data on the same screen.

 

Shortest Interview Guideline 7

The base attributes most responsible for the value of the top-level interview goal should be collected first.

The sooner the top-level goal is known, the fewer questions need to be asked, so the shorter the interview. This includes asking questions that help default future answers. It is sometimes necessary to ask a question whose sole purpose is to provide defaults for many other base attributes.

 

Questions most responsible for the value of the top-level goal can generally be identified as follows:

1.) The base attribute is the "fewest" levels deep.

2.) Other base attribute default values or their visibility depends upon the question

3.) If the base attribute conjunction is "OR", the answer distribution is expected to be most commonly answered in the positive.

4.) If the base attribute conjunction is "AND", the answer distribution is expected to be most commonly answered in the negative.

5.) The question is mandatory (cannot be left unknown or uncertain).

6.) The question is a boolean.

 

Shortest Interview Guideline 8

Screens / questions of interest to the business but not the determination should be put after the questions for the primary determination.

As a rule, the business may have further information to collect depending on the determination made. For shortest interviews, this information is best put in screens after the determination. Contact information such as phone numbers and mailing addresses are examples that are asked last to shorten an interview. These attributes are generally free-form text.

 

Example:

Register new users and collect their billing / mailing addresses after they have been vetted by OPA.

 

Exception:

There is an obvious exception to this guideline. Collecting entity identifiers and sex to aid in asking unambiguous questions may shorten an interview per advice from the General Guideline.

 

Shortest Interview Guideline 9

Projects should pay additional attention to the time-spent-per-screen and interview-duration charts available on the OPA Hub since the August 2017 release of OPA. Use this data periodically to revise attribute collection.

 

A reasonable approach may be to monitor actual interviews, analyse results, and keep trying changes that might lower the averages (perhaps within a time/cost limit). This process could be accelerated by using past data as tests against new interview tweaks. However, that approach won't guarantee an improvement unless the next set of data happens to be identical to the analysed set of data. In short, the best that can be reasonably achieved is to test certain assumptions and measure actual experience. As Matt Sevin of Oracle says: “Past performance does not guarantee future performance, nor do the assumptions that seem to improve one interview necessarily imply similar results in another policy model.”

 


 

 

Shortest Interview Guideline 10

Maximize use of "hide" for all controls.

 

The less text that a user must read, the shorter the interview.

 

Negative example: check that parent controls are not configured to remain visible when the information they collect is irrelevant; anything that cannot be hidden adds reading time.

Shortest Interview Guideline 11

Restrict answers (avoid non-granular answers). Rearrange attributes so that earlier attributes can restrict later attributes.

 

Reduce the granularity and abundance of answers, and specifically avoid free-form answers. In general, shorten interviews by asking fewer questions and minimizing the number of possible answers. While this may appear obvious, many business users forget that detail, while interesting, is not always necessary. This is a primary reason why booleans are preferred. In many cases, numeric attributes can be converted to booleans by using knowledge gained from prior attributes.

 

Examples:

Assume a determination is dependent upon whether a client lives in NY. Instead of asking for State of residence and then checking if the State is NY, ask only whether the client resides in NY (true/false).

Assume financial aid is available for students over the age of 25 who make less than 20,000 a year. Don't ask for the student's specific salary or age; ask whether the student is over 25 (true/false), then whether the student makes less than 20,000 a year (true/false).
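The financial aid example could be modeled with two boolean base attributes feeding the goal directly; this is only a sketch with invented attribute text:

```text
the student qualifies for financial aid if
    the student is over the age of 25 and
    the student makes less than 20,000 a year
```

Both conditions are collected as true/false questions; no exact age or salary is ever asked, which also means nothing is stored that the determination does not need.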

 

Shortest Interview Guideline 12

Provide a dynamic default number of entity instances.

 

Use initial attributes to dynamically determine the number of required entity instances. Entity instances require more time and thought from the end user; avoid making the end user think explicitly about creating and deleting entities.

 

Shortest Interview Checklist

Use the following checklist for guideline compliance. Scoring is utilized as a quality assessment to measure maturity of an OPA implementation for shortest interview.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check | Analysis
OPA interviews follow the whitepaper "Oracle Policy Automation Best Practice Guide for Policy Modelers" provided by Oracle. | 0 / 1 / 2
There is a single top-level interview goal for the primary determination. No other goals are defined in the interview. | 0 / 1 / 2
Until the primary determination is made, all attributes collected in the interview are conditions relevant to the single top-level interview goal or provide default values for those conditions. | 0 / 1 / 2
Attributes whose base data is not going to be kept or otherwise queried are combined. | 0 / 1 / 2
Screens, booleans, and containers show only if the control collects relevant information. | 0 / 1 / 2
Any possible shortcut rules have been created. | 0 / 1 / 2
Every question defaults to the most likely value or provides hint text. | 0 / 1 / 2
The base attributes most responsible for the value of the top-level interview goal are collected first. | 0 / 1 / 2
Screens / questions of interest to the business but not the determination are put after the questions for the primary determination. | 0 / 1 / 2
Projects pay additional attention to the time-spent-per-screen and interview-duration charts available on the OPA Hub since the August 2017 release of OPA. This data is used periodically to revise attribute collection. | 0 / 1 / 2
Use of "hide" is maximized for all controls. | 0 / 1 / 2
Answers are restricted to small sets and rearranged so that earlier attributes can restrict later attributes. | 0 / 1 / 2
The default number of entity instances is provided dynamically. | 0 / 1 / 2

 

 


Please see all the "disclaimers" from the prior post on Quality Guidelines.  I hate repeating myself.

 

This set of shortest interview guidelines is just a draft start.  It requires community feedback.  It has been created because I noticed a pattern where clients ask how to use OPA to shorten interviews (especially screening for eligibility).

 

It needs work, but it is a start of guidance specific to that problem.  See the attached Word document.  Provide constructive critique.  Take it or leave it.

 

It would be nice to post a better document in another month or two that contained positive community feedback.

 


WARNING: This post has been retired and is retained here only for historical purposes.

Instead, use the more recent post: Oracle Policy Automation - Shared Policy Quality Guidelines

Unfortunately, we have had some past issues with OPA policy quality.  We have had rulesets that are almost unreadable and unmaintainable (looking like poorly written Java code), in spite of OPA being natural language.  Now, we try to set expectations ahead of time, especially with vendors that do not specialize in OPA.  This is leading to stricter guidance on quality.

 

It can only be assumed that other Oracle Policy Automation projects around the world are having this problem.  In the spirit of providing one possible, modifiable template for measuring quality, here are our draft quality guidelines and checklist for shared policy.  The attached draft Word document elaborates on this checklist.

 

A few disclaimers:

 

  • This checklist and attached guidelines are in draft form.
  • This checklist (or parts of it) isn't for everyone.
  • This checklist is not approved, connected, endorsed, etc, by Oracle.
  • This checklist has not been finalized by NY and is currently opinionated.
  • The purpose of the checklist was to provide a very high bar for shared OPA policy in a shared library, as opposed to general OPA usage.
  • The checklist and measure is considered "good enough" and "better than nothing" right now, as opposed to perfect. (Perfect is the enemy of good enough.)
  • Attempts have been made to harmonize this checklist with Oracle guidance, but if differences are found, it is best to follow Oracle guidance.  Checklists get outdated.

 

We are open to suggestions that improve OPA quality.  Please feel free to provide constructive comments below.

 

Checklist for Compliance

The following checklist is used for compliance to this guideline.  The only requirement for compliance is that all the mandatory requirements be met.  Scoring is utilized as a quality assessment to measure maturity of an OPA implementation.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check | Analysis
OPA is being used for rule lifecycle management | T / F
Overarching policy outcomes are defined in OPA | T / F
Substantive OPA policy rules are reviewed by a lawyer and/or agency policy analyst | T / F
OPA is being used to assist in mining rules from source policy and/or legislation | T / F
OPA is being used for rule discovery and rule verification via analysis of existing data | 0 / 1 / 2
OPA is being used to document attributes needed by an application in determining outcomes | 0 / 1 / 2
OPA is being used for impact analysis of rules | 0 / 1 / 2
OPA production rules are primarily used to determine outcomes defined by agency policy and/or legislation | T / F
OPA production rules have visibility by the business | T / F
OPA production usage provides decision reports | T / F
OPA production rules provide "temporal reasoning" | 0 / 1 / 2
Substantive, procedural, and visibility rules are properly separated | T / F
Traceability is provided from all substantive rules to source material | T / F
Substantive rules are in natural language | T / F
Rules are written to be read by non-OPA analysts | 0 / 1 / 2
Production rule documents only contain operational rules | T / F
All OPA rulesets have a design document | T / F
OPA rules within a document are "on topic" | 0 / 1 / 2
OPA only receives data originating from the rule consumer | 0 / 1 / 2
OPA determines outcomes for "I don't know" inferences | 0 / 1 / 2
Each ruleset is translated into a language other than English | 0 / 1 / 2
All Microsoft Word rule documents have a TOC (Table of Contents) | T / F
Boolean attributes are never conclusions in Word tables | T / F
Rules do not go deeper than level 3 | 0 / 1 / 2
Declarations are put in the first Excel worksheet and rules in subsequent sheets | T / F
Excel is used when source material is in a table, to implement rate tables, or when there are multiple conclusions from the same conditions | 0 / 1 / 2
All attributes are properly parsable and parsed by OPA | T / F
Production projects can be debugged via the OPA debugger | T / F
Projects redefine "the current date" | T / F
There is no sequencing among policy rules | T / F
All substantive policy conclusions have unit test cases | T / F
Projects plan OPA upgrades once per quarter | T / F
List items are turned into boolean attributes before using them as conditions | 0 / 1 / 2
An ability to regression test with production data has been implemented | 0 / 1 / 2
An OPA quality checklist is utilized | 0 / 1 / 2
Public names are created where possible | T / F
Public names follow a naming guideline | T / F
Entities' identifying attributes are provided | T / F
Entities and relationships are only created when the rules require them for clarity in dealing with repeating attributes | 0 / 1 / 2
Rule text follows Oracle guidelines for entities, relationships, and attributes | 0 / 1 / 2
Design and rule documents contain descriptions of relevant entities and relationships | 0 / 1 / 2
Data saved from OPA can be re-loaded into OPA | 0 / 1 / 2
Only the initial rules to determine an outcome avoid effective dates via temporal logic | 0 / 1 / 2
Rate tables are temporal in Excel | 0 / 1 / 2
Rules are not deleted after they are used in production | 0 / 1 / 2
Interviews are created with an accessibility warning level of WCAG 2.0 AA or better | T / F
Interviews have goals that support relevance of collected attributes | T / F
All determinations (including those with interview screens) are available as web services | T / F
OPA "relevancy" is used for all screens and attribute collection | 0 / 1 / 2
Policy determination rules are developed prior to developing interview screens | 0 / 1 / 2
All entities, personal attributes, headings, and labels have name substitution | 0 / 1 / 2
Attribute text is not changed on screens | 0 / 1 / 2

Paul Fowler

Oracle Policy Automation - OPA Strengths

Опубликовано: Paul Fowler 24.08.2017

I have an approach to OPA policy models.  I follow the guidance from Oracle on OTN.  I create design documents, etc...

 

But, then I do something more.  After my first draft design, I have a list of OPA Strengths that I check against my design.  I ask myself, has the design optimized for all of these strengths of OPA? 

 

Up until now, this has been a private list I check against my design and approach.  This is my way to check my own quality of deliverable.  I make sure the client gets as much on this list as possible.  I would challenge all the blog readers to do the same.

 

As you can see, OPA HAS A LOT OF STRENGTHS!  But, I go through them all, every single time.  Many of these strengths are not totally Out of the Box (OOTB) but require that they be accounted for.  They are easy to achieve, but only if the proper OPA patterns are followed.

 

   

Stakeholder | Strength | Elaboration
Auditors | Complete Audit Reports | While workers / clients can get custom decision reports, auditors can get more complete audit reports saved in the system.
Auditors | Historical decision recreation | It is possible to deploy OPA rules in a fashion that allows auditors to go back in time and re-run rules from a previous point in time.
Auditors | Matured Product | OPA has been developed as a product for over 20 years.
Auditors | Transparency and Traceability across all decisions | Audit reports and personalized advice can be generated (saved) for all decisions.
Author | Abstract Policy Logic from Application Code | Policy logic is developed and maintained outside of application code.
Author | Alternate conclusions are automatic | There is no need to code an else statement on a true/false determination in OPA.
Author | Attribute discovery | OPA can be used to document variables needed by an application team in support of a determination.
Author | Clear delineation of goals | OPA helps orient rules towards achieving actual "goals" (as opposed to rules interspersed in an app where the final goal of the rule is "fuzzy" at best).
Author | Collaborative Modeling and Versioning | The policy hub handles rule versioning and collaboration among rule authors.  A collaborative development lifecycle is provided.
Author | Conflict / Redundancy Identification | OPA will notify authors of conflicting / redundant rules, as opposed to traditional coding where the rules will compile and conflicts may move to production.
Author | Consistent Terminology | The use of OPA encourages consistent terminology as a mechanism for linking rules together.  This is a good thing as it removes ambiguity in a project.
Author | Easy Forms, Letters, and Summary documents | BI Publisher integration allows forms and other documents to be generated during interviews using Word templates.  The output can be Word, PDF, etc.
Author | Easy GUI Interview Designer | OPA will create interviews without intervention, but organizing questions together, changing headings, etc., is all graphical and easily managed.
Author | Extremely Agile Software Development | The primary development artifacts are the Word and Excel documents.  Rules can be created or changed and deployed in minutes.  Hot deploys are an easy option.
Author | Granular deployment model | Subsets of policy can be changed independently, allowing many more teams to work independently on policy.
Author | Model rules in many languages (25+ languages) | Rules may be modeled in the language of the policy.
Author | OPA documentation | With over 20 years of advancement, OPA provides well-documented help for every feature.
Author | Over 20 Sample Projects | Sample projects from licensing, to healthy eating, to insurance selection are provided.
Author | Questions built from source material | Questions and the positive, negative, uncertain, and unknown forms of attributes are determined from the rule as written in OPA.  These can be overridden if desired.  If you have not yet seen this, it is very cool.
Author | Removal of need to track rule linkages | OPA links rules together (as opposed to code techniques where, after one rule is fired, code must determine a process of next steps, data reuse, etc.).
Author | Rule Assistant | OPA provides assistance in Word for new rule authors to quickly create rules.
Author | Rules Related to Source | OPA provides the ability to relate rules directly to source material such as legislation.
Author | Simplify Complex Decision Logic | Decision logic can be created in bite-size chunks that are encapsulated and abstracted.
Author | Strong / Fun OPA Community | Oracle Forums, Twitter, Facebook, LinkedIn, blogs, YouTube, etc.
Author | Temporal Reasoning | Changes over time of policy, circumstance, and user environment are easily incorporated into the logic.
Author | Translations in Excel (Strong Multi-Lingual Capability) | A simple Excel method of providing translations for client interviews is provided.  This includes support for custom languages.
Author | Word / Excel Modeling | Tools familiar to BAs (Word and Excel) are used for authoring.  Original policy is likely in Word and/or Excel.
Consumers | Accessibility | Accessibility is achieved via WAI-ARIA 1.0 standard support.
Consumers | Auto substitution of proper nouns | Names of who, what, when, and where are automatically substituted into future questions and screens once the information is known.
Consumers | Automatic defaulting of answers | OPA has multiple mechanisms for defaulting answers, from connecting to other data sources to deriving the answers from previous knowledge gained.
Consumers | Cross-Channel Consistency | Consistent experience, consistent determinations: the elimination of decision silos where workers in one environment create different determinations.
Consumers | Evidence Collection | Evidence (various files, photographs, documents, etc.) can be collected by OPA during interviews.
Consumers | Forms | Forms can be created and saved or printed during interviews.
Consumers | Only relevant questions are asked | Questions not needed are not asked, as determined relevant by OPA (can be overridden).
Consumers | Ability to handle "I don't know" | OPA can handle "I don't know" answers to questions and still determine outcomes if OPA later finds a path that makes the question no longer relevant.
Consumers | Personalized Advice | Personalized advice is provided to each client via intelligent interviews.  Client names and situations can all be easily incorporated in the advice.
Consumers | Personalized Explanations | Personalized explanations (each client gets their own explanation) for either a "decision" or an "indecision."
Consumers | Rule performance | 20 years of rule inferencing algorithms, plus only asking relevant questions, generally make OPA much faster than traditional code for complex determinations.
Consumers | Shortest # of questions to get the decision | OPA can be structured so the fewest questions are asked.
Consumers | Substitution of gender pronouns | Once the sex of an individual is known, gender pronouns may automatically be utilized.
Integrators | Batch capabilities | Policy can be implemented in a local batch processor or via a batch REST API.
Integrators | CAPTCHA controls | CAPTCHA functionality is included as part of OPA interviews.
Integrators | Collection of evidence (file uploads) | During interviews, the ability to upload files (PDF, images, audio, video, etc.) is included as part of OPA.
Integrators | Collection of signatures | Electronic signatures can be collected in OPA-generated forms (for instance, as evidence).
Integrators | Data object mapping to RightNow | Simple integration with RightNow CRM data objects is provided.
Integrators | Deployment workflow for rulesets | Ruleset lifecycles can be fully managed in the Oracle hub.
Integrators | Easy attribute mapping to a web service data source | Drop-down mappings exist for web service endpoints.
Integrators | Easy mobile deployment | Mobile device deployment is simple, with iOS and Android app support in the stores.
Integrators | Hot-updating of business rules | New or changed rulesets may be deployed anytime without impacting current users.
Integrators | Interviews available as a web service (SOAP) | All OPA services are available as SOAP web services.
Integrators | Public/Private Cloud rule migration | Rulesets are not coupled to either the public or a private cloud.  The same rulesets can be deployed in either or both locations.
Integrators | RightNow database mappings | Drop-down mappings exist for Service Cloud database integrations.
Integrators | Submit buttons can perform REST invocations | Interview actions can redirect (with data) to invoke RESTful services.
Integrators | Targeted Goals for Processes | Once policy is coded in OPA, the team has a “target” goal to develop processes against.  In government, processes usually support policy goals.  Many policy goals can be in OPA.
Integrators | Web Service Integration | Before, during, and after an interview, OPA can synchronize with any web-service-enabled endpoint.
Integrators | Web, Mobile, Portal, or Desktop interviews | Out-of-box support for multiple channels, including a mobile toolkit to integrate OPA into mobile apps.
Policy Analyst | Centralized Policy Management | Policy rules can be maintained in a central repository, reducing rule duplication in code.
Policy Analyst | Easy Ability to Share Policy | Rules are easily reused.  When policies impact more than one application, the policy logic does not need to be recreated.
Policy Analyst | Existing population impact analysis | OPA can aid authors in identifying the population that will be impacted by a rule change.  This is vital to preventing strong push-back after production rule implementation.
Policy Analyst | Increased Rule Visibility | Actual rule implementation is not hidden in code.  This helps protect individuals from the ramifications of incorrect rule implementation.
Policy Analyst | Natural Language Rules | Rules are in natural language, allowing review by non-OPA staff.
Policy Analyst | Policy Scenario Capabilities (What-If?) | Impact analysis is enabled via both In-Memory Analytics and Excel testing in OPA.  Since all rules can be fired as appropriate, this is generally more accurate than other methods.
Policy Analyst | Regression Testing | Regression testing is easy via Excel tests.  If modeled, new OPA outcomes can be verified against legacy system outcomes.
Policy Analyst | Retroactive Rule Change Support | A typical problem in IT systems is a retroactive rule change and having to correct the impacts.  OPA can be used to identify impacted clients on retroactive rule changes.
Policy Analyst | Rule Coverage Analysis | Coverage analysis techniques can be used to find rules that have minimum and maximum impacts.
Policy Analyst | Rule Discovery Support | Where legacy base data and decisions exist but rules are difficult to determine, OPA can aid in recreating or validating expected business logic.
Policy Analyst | Rule Mining | OPA is great as an assistant when working through policy documents to discover rules.
Policy Analyst | Rule Traceability | Changes introduced by new policies and legislation can be quickly linked to the policy rules in OPA to aid in change management.
Policy Analyst | Time per Screen Statistics | The Hub provides how much time users spend on average on each interview screen.  This can be used to optimize interviews.
Policy Analyst | Verify Policy Changes Converge | Policy changes can be evaluated for their completeness in OPA.  In some scenarios, new legislative policy can be analyzed in OPA while being authored.
QA Team | Interview accessibility checks | Accessibility checks for Web Content Accessibility Guidelines (WCAG) levels are part of OPA.
QA Team | Test cases are in Excel | Test cases in Excel allow for a quick and easy way to add test cases.
Workers | Ability to organize interviews into "Stages" and save via Checkpoints | Long interviews can be broken up, saved, and returned to at a later time.
Workers | Accurate Determinations | Determinations are less subjective and less prone to human error.
Workers | Chat Workspace | With Service Cloud, client chats and rule determinations have been integrated for better worker / client collaboration.
Workers | Explanations with every decision (fewer client complaints) | Every decision comes with an explanation that workers and clients can view.  Note this aids initial deployments, as workers can help troubleshoot any unexpected rule impacts.
Workers | Offline Interviews | Interviews can occur in the field offline, and the results can be uploaded at a later date.
Workers | Reduced training time | Workers do not need to be trained in policy nuances to create accurate determinations.
Workers | Worker Generalization | Workers do not need to be as specialized in support of detailed policy understanding.
Paul Fowler

Fixing an OPA example project...

Опубликовано: Paul Fowler 12.08.2017

You are about to read my first (and only) very minor complaint about OPA.  I apologize in advance to the OPA staff who have produced an otherwise exceptional (and I mean that) product.  The Oracle OPA consultants know their stuff.  The product is mature and adds actual business value.

 

I have been meaning to correct OPA example projects.  Some OPA examples show anti-patterns of OPA usage.  Then, I see these same anti-patterns from novice OPA developers who are making a lot of money off my clients.  Ouch!  I mean a few examples violate Jasmine's and others' best practices in gross fashion.  I already take issue with Java/.NET developers claiming to master OPA after a few classes and a few weeks of self-teaching, but it doesn't help if the OPA examples encourage poor OPA policy writing.

 

One project in particular bugged me tonight and I corrected a few problems with it in about 2-3 hours over a few beers.  Yeah, I am writing this buzzed.  My bad.

 

Actually, I would prefer to rewrite the example from scratch, but instead I opted to keep the interview almost exactly the same and only change the rule documents.

 

I did make a minor, minor adjustment to the interview.  I decided to have OPA only display relevant screens.  I couldn't let that one slide.  So sue me.

 

Which project "bugged me" tonight?  The EmployeeOrContractor example project makes several big errors and violates OPA best practices.  I attach an improved but not perfect modified version that does the same exact thing.

 

Fowler, what is wrong???  What is violated?

 

1) The project hides substantive rules in the interview screens (specifically in value lists). Don't believe me?

2) The project attributes do not reflect the interview questions on the screens, meaning that ONLY the web interview screens make any sense.  Forget calling this project via a web service, as you have to supply your own substantive scoring.

3) Screens continue to be invoked after a determination can be made.  That is a no-no in my book.

4) It won't scale, although I understand it is just an example.  The problem is that repeating values are not put into entities. If you have repeating values, put them into entities, especially if there is no source material that reads otherwise.  (My rule is to try to match the source material for the benefit of policy analysts.)

5) Shall I go on?  Or maybe I should just attach a partially-corrected example.  See the attached.

 

In the end, this project could be a good demonstration of a helper calculator project if properly written.

 

This example interview should use some patterns in my mind that I will try to post about in the future.  One pattern is an OPA Scoring Pattern.

 

In the OPA Scoring Pattern, booleans provide pass/fail, but other attributes may need to be added together to pass a threshold.  I see that pattern all the time.  That pattern is actually what made me look at the example tonight. I wanted to double-check my approach and then banged my head against my desk.
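A sketch of that scoring pattern in Word rules, with invented attribute names and threshold: hard pass/fail booleans gate the outcome, while softer factors are summed and compared to a threshold.

```text
the worker is likely an employee if
    the worker passes the mandatory tests and
    the worker's total score >= 12

the worker's total score = the worker's control score + the worker's integration score + the worker's financial dependence score

the worker passes the mandatory tests if
    the worker is paid directly by the business and
    the worker does not supply their own major equipment
```

Because the mandatory booleans come first, a failing answer can end the interview before any of the scored questions are asked.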

 

In the pattern, first you ask some basic pass/fail questions.  In this example, they ask the poorly written question "Is the person an apprentice, trainee, company director, labourer or trades assistant?"  I didn't change that question, because I wanted the screens to remain unchanged, but...

 

Once the pass/fail questions are asked, interviews get into more "fuzzy" questions.  In this example, the "fuzzy" questions start with "Degree of Control".  I left things as they are, but if I rewrote the example I would allow for more possible answers from the business.

 

BTW, I also added test cases.  Every Oracle-provided OPA example should have test cases, imho.

 

Maybe I had too many beers the last 3 hours???  Thoughts?

 

Disclaimer: I really, really do appreciate OPA and Oracle Staff.  I know them.  They constantly take time off to work with me and they are very clever and nice.  You only have to look at temporal reasoning and the natural language constructs to see how clever they are.  This will probably be my first, last, and only gripe. I am bothered by example projects that are not to the level of quality as the product itself (or the Oracle staff).

 

Can anyone tell I am feeling guilty about posting a negative that will be read by a group of people who are very nice and accommodating?  I should cut back on the beer.  Nah.

Paul Fowler

A challenge...

Опубликовано: Paul Fowler 11.08.2017

I recognize some of these blog posts are advanced.  Some of the problems being solved are very advanced so I can't avoid all the complexity.

 

I would challenge everyone who uses OPA to try to understand the posts well enough to find improvements on these posts.  Reply back with improvements in the comments.

 

If for no other reason, I submit that people who actually understand these posts have skills beyond beginner.  I have 10+ years of OPA now.  I am confident in my own skills.  Reviewing these posts is a good way to test yourself and possibly help contribute to the community.  Everyone benefits as the whole community matures.

 

In the meantime, I may take a break on posting for a little while until I get more feedback.

 

Thank you