WARNING: This post has been retired and is retained here only for historical purposes.

Instead, use the more recent post: Oracle Policy Automation - Shared Policy Quality Guidelines

Unfortunately, we have had some past issues with OPA policy quality. We have had rulesets that are almost unreadable and unmaintainable (resembling poorly written Java code), in spite of OPA being natural language. Now we try to set expectations ahead of time, especially with vendors that do not specialize in OPA. This has led to stricter guidance on quality.

It can only be assumed that other Oracle Policy Automation projects around the world are having this problem. In the spirit of providing one possible, modifiable template for measuring quality, here are our draft quality guidelines and checklist for shared policy. The attached draft Word document elaborates on this checklist.

A few disclaimers:

  • This checklist and attached guidelines are in draft form.
  • This checklist (or parts of it) isn't for everyone.
  • This checklist is not approved by, connected with, or endorsed by Oracle.
  • This checklist has not been finalized by NY and is currently opinionated.
  • The purpose of the checklist was to provide a very high bar for shared OPA policy in a shared library, as opposed to general OPA usage.
  • The checklist and measure is considered "good enough" and "better than nothing" right now, as opposed to perfect. (Perfect is the enemy of good enough.)
  • Attempts have been made to harmonize this checklist with Oracle guidance, but if differences are found, it is best to follow Oracle guidance. Checklists get outdated.

We are open to suggestions that improve OPA quality.  Please feel free to provide constructive comments below.

Checklist for Compliance

The following checklist is used to assess compliance with this guideline. The only requirement for compliance is that all the mandatory requirements be met. Scoring is used as a quality assessment to measure the maturity of an OPA implementation.

Scoring is as follows:

0 = Not in use

1 = Partially available and/or partially used by the project

2 = Available and in-use by the project

Quality Check | Analysis
OPA is being used for rule lifecycle management | T / F
Overarching policy outcomes are defined in OPA | T / F
Substantive OPA policy rules are reviewed by a lawyer and/or agency policy analyst | T / F
OPA is being used to assist in mining rules from source policy and/or legislation | T / F
OPA is being used for rule discovery and rule verification via analysis of existing data | 0 / 1 / 2
OPA is being used to document attributes needed by an application in determining outcomes | 0 / 1 / 2
OPA is being used for impact analysis of rules | 0 / 1 / 2
OPA production rules are primarily used to determine outcomes defined by agency policy and/or legislation | T / F
OPA production rules need visibility by the business | T / F
OPA production usage provides decision reports | T / F
OPA production rules provide "temporal reasoning" | 0 / 1 / 2
Substantive, procedural, and visibility rules are properly separated | T / F
Traceability is provided from all substantive rules to source material | T / F
Substantive rules are in natural language | T / F
Rules are written to be read by non-OPA analysts | 0 / 1 / 2
Production rule documents contain only operational rules | T / F
All OPA rulesets have a design document | T / F
OPA rules within a document are "on topic" | 0 / 1 / 2
OPA only receives data originating from the rule consumer | 0 / 1 / 2
OPA should determine outcomes for "I don't know" inferences | 0 / 1 / 2
Each ruleset is translated into a language other than English | 0 / 1 / 2
All Microsoft Word rule documents must have a TOC (Table of Contents) | T / F
Boolean attributes are never conclusions in Word tables | T / F
Rules should not go deeper than level 3 (see the example following this checklist) | 0 / 1 / 2
Declarations are put in the first Excel worksheet and rules in subsequent sheets | T / F
Excel is used when source material is in a table, to implement rate tables, or when there are multiple conclusions from the same conditions | 0 / 1 / 2
All attributes must be properly parsable and parsed by OPA | T / F
Production projects can be debugged via the OPA debugger | T / F
Projects redefine "the current date" | T / F
There is no sequencing among policy rules | T / F
All substantive policy conclusions have unit test cases | T / F
Projects plan OPA upgrades once per quarter | T / F
List items are turned into boolean attributes before using them as conditions | 0 / 1 / 2
An ability to regression test with production data has been implemented | 0 / 1 / 2
An OPA quality checklist is utilized | 0 / 1 / 2
Public names are created where possible | T / F
Public names follow a naming guideline | T / F
Entities' identifying attributes are provided | T / F
Entities and relationships are only created when the rules require them for clarity in dealing with repeating attributes | 0 / 1 / 2
Rule text should follow Oracle guidelines for entities, relationships, and attributes | 0 / 1 / 2
Design and rule documents should contain descriptions of relevant entities and relationships | 0 / 1 / 2
Data saved from OPA can be re-loaded into OPA | 0 / 1 / 2
Only the initial rules to determine an outcome should avoid effective dates via temporal logic | 0 / 1 / 2
Rate tables should be temporal in Excel | 0 / 1 / 2
Rules should not be deleted after they are used in production | 0 / 1 / 2
Interviews are created with an accessibility warning level of WCAG 2.0 AA or better | T / F
Interviews have goals that support relevance of collected attributes | T / F
All determinations (including those with interview screens) are available as web services | T / F
OPA "Relevancy" is used for all screens and attribute collection | 0 / 1 / 2
Policy determination rules are developed prior to developing interview screens | 0 / 1 / 2
All entities, personal attributes, headings, and labels have name substitution | 0 / 1 / 2
Attribute text should not be changed on screens | 0 / 1 / 2
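
To make the natural-language, readability, and nesting-depth items above concrete, here is a hypothetical sketch of an OPA Word rule. The attributes and values are invented purely for illustration and do not come from any real ruleset. The first version nests its conditions three levels deep; the second stays shallow by introducing an intermediate conclusion.

Deeply nested (harder to read):

    the person is eligible for the senior benefit if
        the person is a state resident and
            the person's age is at least 65 or
                the person's age is at least 60 and
                the person has a qualifying disability

Refactored with an intermediate conclusion (preferred):

    the person meets the age requirement if
        the person's age is at least 65 or
            the person's age is at least 60 and
            the person has a qualifying disability

    the person is eligible for the senior benefit if
        the person is a state resident and
        the person meets the age requirement

Both versions express the same logic, but the refactored form keeps each rule within the recommended depth and is easier for a non-OPA analyst (a lawyer or policy analyst) to read and verify against source material.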