Disclaimer: The following is my opinion and I am open to revision.

 

Why not always use AI by itself?

I get asked this question on occasion.  Some bright person will ask, "Why not have AI learn the legacy system, learn the rules, and take it from there?"

 

  1. Primarily - AI decisions are not easily auditable. (There are ways, but it isn’t a sweet spot.)
  2. AI provides a probabilistic capability that does not always conform to law or intent.
    • AI, for instance, may unknowingly violate discrimination laws.
      • AI doesn’t care that federal law may protect against discrimination based on age, race/color, national origin, creed, sex, disability, and predisposing genetic characteristics.
      • AI doesn’t care that state law may protect against discrimination based on sexual orientation, military status, pregnancy-related condition, predisposing genetic characteristics, familial status, marital status, domestic violence victim status, prior arrest record, previous conviction record, and gender identity.
    • If base data shows a bias, AI may continue and may even promote that bias going forward.
  3. AI isn’t programmed; it is “trained,” which makes exact modifications more challenging.  (There are ways, but it isn’t the sweet spot of AI.)

 

Why not always use OPA by itself?

  1. OPA may not adapt quickly to real-time changes in environment that may impact a determination.  (There are ways, but it isn’t a sweet spot.)
  2. OPA doesn’t have in-built capabilities for real-time policy impact analysis through large, ever-changing datasets.
  3. OPA’s what-if capability is based on having the rules already defined and making changes to existing rules.  Sometimes the rules are not known up-front.

 

AI and OPA together?

  1. Together they can be a good fit for events and big data problems combined with policy.
  2. Both are trying to make determinations.
    • OPA has traditionally had weaknesses with real-time event and probabilistic inputs to its inferencing - which AI is good at.
    • Other rules engines, such as Drools, have historically been a better fit for "event-driven" rules - AI can fill this gap for OPA.
    • OPA and chatbots don't easily integrate out of the box - AI and chatbots are made for each other.
    • OPA won't analyze photographs, voice, video, files, etc. - AI will analyze binary evidence itself.
    • OPA doesn't mine data for rules and correlation - AI can mine data and apps.
    • AI has issues with inferring results from policy and/or human intent - OPA is great at inference from policy and human intent.
    • AI by itself doesn't always put a "human in the loop" for assistance and is not easily understood by business users - OPA is easily understood by business users.
    • AI conversations are not always meaningful - OPA conversations are meaningful.
  3. There appears to be a symbiotic relationship, where each product’s weaknesses are offset by the other product’s strengths.

 

Neither product takes long to set up.  This isn’t 3+ months of infrastructure deployment if the cloud is utilized…  I assert that the hardest part is getting access to the big data and source policy, not integrating or using the tools.

 

Some currently viable OPA / AI Scenarios

Case 1 – Legacy Rule Discovery

Case 2 – Anomaly detection as part of an interview

Case 3 – Decisions Augmented by Real-Time Big Data Event Analysis

  •   Classifications and Predictions
  •   Feature Extraction
  •   Diagnosis and Troubleshooting

 

(I was going to add a 4th case on AI chatbot with OPA to provide human in the loop, but the architecture is evolving too quickly right now.  I may come back to that in the future.)

 

Case 1 – Legacy Rule Discovery

Assertions:

  • Optimal business determinations take input from at least two groups:
    • Policy / Legislative Analysts
    • Data Scientists
  • OPA / AI provides an analysis intersection for these two groups.
  • “Legacy” can mean migration from a historical system, OR a legacy can be a running system whose rules need optimization…
  • It is important to understand that data can create its own rules in a running system and these rules may need to be exploited by the business for gain.

Case 1 process:

 

Case 1 is predicated on having legacy data or a legacy system, plus some knowledge of the existing system’s inputs / outputs…

  • First pass uses OPA to implement rules as “expected”.
  • A process using AI follows to refine the results:
    • OPA what-if analysis is used to determine OPA result accuracy from existing data.
    • AI finds correlations and rules to the mismatched results from the existing data.
      • AI determines base data that impacted the outcomes
      • AI determines relationships of base data to outcomes
    • OPA rules are revised with new findings from AI as verified by the business.
  • The process is repeated until OPA rule accuracy reaches acceptable levels.
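
As a concrete (and deliberately simplified) sketch of the refinement step: compare the attribute-value distributions of mismatched cases against matched ones. Attributes whose distributions differ sharply between the two sets are candidates for a missing or incorrect rule. All case data and attribute names below are illustrative, not from any real system.

```python
from collections import Counter

# Illustrative case data: legacy outcomes vs. first-pass OPA outcomes.
cases = [
    {"age": 67, "region": "north", "legacy": True,  "opa": False},
    {"age": 68, "region": "south", "legacy": True,  "opa": False},
    {"age": 30, "region": "north", "legacy": True,  "opa": True},
    {"age": 25, "region": "south", "legacy": False, "opa": False},
]

# Split into cases the current OPA rules got wrong vs. right.
mismatched = [c for c in cases if c["legacy"] != c["opa"]]
matched    = [c for c in cases if c["legacy"] == c["opa"]]

def value_profile(subset, attr):
    """Count attribute values within a subset of cases."""
    return Counter(c[attr] for c in subset)

# Attributes whose value distribution differs sharply between the two
# sets point the business toward a candidate rule for the next OPA pass.
for attr in ("age", "region"):
    print(attr, value_profile(mismatched, attr), value_profile(matched, attr))
```

In a real engagement the comparison would be done by a proper ML tool (decision trees, feature-importance ranking) over millions of rows, but the loop is the same: find what drives the mismatches, verify with the business, revise the OPA rules, repeat.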

 

Note: If the legacy system is available, AI can be utilized to run the legacy system repeatedly and more accurately assist with base data impact and correlation.

 

Final note on Case 1.  In my experience, usually the issue needing both OPA / AI arises when a legacy system is being decommissioned by a different organization than the organization now responsible for the business application.  Replacement of 30+ year old legacy systems may also rise to this level of need.

 

Case 2 – Anomaly detection as part of an interview

Assertions:

  • Optimal business determinations will detect anomalies early.
  • AI provides a near real-time anomaly determination based on examining the data.
  • Dealing with anomalies is a policy question and therefore a good fit for OPA.

 

The OPA/AI intersection is in providing score data from analytics to OPA.

 

Case 2 is predicated on having OPA persist attributes to a big data store utilized by AI.

  1. OPA provides new attributes to an AI system that looks for anomalies.
  2. If AI finds anomalies, such as irregular payments or irregular household compositions, OPA is notified via the setting of a base attribute from the AI system.
  3. OPA can either notify the interviewee for verification, notify the interviewing system for follow-up, or simply record this result in an audit field for later analysis.  The rules on what to do with anomalies are OPA human-driven rules.
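
To make step 2 concrete, here is a minimal anomaly check using a modified z-score (median and MAD based, which is robust to the outlier itself). The payment figures and the attribute name are hypothetical; a production system would use a trained model over the full big data store rather than a single statistic.

```python
import statistics

# Hypothetical payment amounts persisted by OPA to the shared store.
payments = [500.0, 510.0, 495.0, 505.0, 2400.0]

def irregular(amounts, cutoff=3.5):
    """Flag values whose modified z-score (median/MAD based) exceeds cutoff."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [0.6745 * abs(a - med) / mad > cutoff for a in amounts]

flags = irregular(payments)
# Any True flag would be written back as an OPA base attribute
# (e.g. "the payment is irregular") for the human-driven rules to act on.
print(flags)
```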

 

Case 3 – Decisions Augmented by Real-Time Big Data Event Analysis

Assertions:

  • AI can classify / predict probable outcomes when OPA collects incomplete data, and feed the predictions back to OPA.  This includes predicting additional programs the client may be interested in.
  • AI can extract features to enhance interviews. For instance, AI can evaluate images and large text provided in an interview.
  • AI can diagnose data provided in an interview. AI can help troubleshoot. (Possibly with chatbots.)

 

This is a simpler process than it looks.

  1. AI is put into a real-time feedback loop with OPA.
  2. AI draws from a big-data store to make determinations about current OPA data.
  3. After OPA is done, the OPA data is added to the big data store.
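
A toy sketch of the loop: step 2 predicts a missing attribute from prior completed cases (the ones step 3 appended to the store), here with a simple nearest-neighbour lookup. All field names, the distance weighting, and the case data are illustrative assumptions.

```python
# Prior completed interviews, appended to the big-data store in step 3.
history = [
    {"household_size": 1, "income": 800.0,  "program": "A"},
    {"household_size": 4, "income": 750.0,  "program": "B"},
    {"household_size": 4, "income": 3000.0, "program": "C"},
]

def predict_program(partial):
    """Return the program of the most similar historical case (step 2)."""
    def distance(case):
        # Crude hand-weighted distance; a real system would use a trained model.
        return (abs(case["household_size"] - partial["household_size"])
                + abs(case["income"] - partial["income"]) / 100.0)
    return min(history, key=distance)["program"]

# Mid-interview, OPA has collected only these attributes so far; the
# prediction is fed back to OPA as an input attribute (step 1's loop).
print(predict_program({"household_size": 4, "income": 700.0}))
```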

 

What are the needed OPA / AI integrations for the above cases?

Currently, the best / strongest methods I have found are machine learning tools that read data from a database and write their results back to the database.

It is then OPA's responsibility to provide input to AI through a database, or to query the database for the results.

This type of integration isn't very hard, but it also isn't "business user friendly".  The integrations need to be more obvious during business and data analysis.
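
The shared-database round trip described above can be sketched in a few lines. This uses an in-memory SQLite table purely for illustration; the table name, columns, and the placeholder "model" are all assumptions, not an actual OPA connector API.

```python
import sqlite3

# Shared store mediating between OPA and the ML side.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE cases (case_id TEXT PRIMARY KEY, income REAL, ai_score REAL)"
)

# 1. OPA-side: persist the input attributes the model needs.
db.execute("INSERT INTO cases (case_id, income) VALUES (?, ?)", ("C-1", 850.0))

# 2. ML-side: read the pending row, compute a score, write it back.
(income,) = db.execute(
    "SELECT income FROM cases WHERE case_id = ?", ("C-1",)
).fetchone()
score = min(income / 1000.0, 1.0)  # placeholder for a real trained model
db.execute("UPDATE cases SET ai_score = ? WHERE case_id = ?", (score, "C-1"))

# 3. OPA-side: query the score back as an input attribute.
(result,) = db.execute(
    "SELECT ai_score FROM cases WHERE case_id = ?", ("C-1",)
).fetchone()
print(result)
```

The plumbing is trivial, which is the point: the hard part is agreeing on the shared schema between the policy modelers and the data scientists, not the integration code.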

 

One of the easiest integrations of AI and OPA is in big data and statistics / probabilities.

I put a related OPA probabilities blog post here:  Logic Puzzle - OPA Determination with Probability

The statistics in the probability blog post example were typed into Excel.  But, a much better source for the statistics would be an AI engine.  That would allow real-time statistics to be used with OPA policy to adjust business outcomes.
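
For instance, instead of hand-typed figures, the probability inputs could be recomputed from a live event stream each cycle. The outcome labels below are illustrative; the point is only that the statistic OPA consumes is derived from current data rather than a static spreadsheet.

```python
from collections import Counter

# Illustrative stream of recent determination outcomes from the AI side.
events = ["approved", "denied", "approved", "approved", "denied",
          "approved", "approved", "denied", "approved", "approved"]

counts = Counter(events)
total = sum(counts.values())

# Empirical probabilities, refreshed as new events arrive; each figure
# would be handed to OPA as an input attribute for the probability rules.
probabilities = {outcome: n / total for outcome, n in counts.items()}

print(probabilities["approved"])
```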

 

In the future, we will know that OPA / AI has been well integrated when I can zip up an OPA file that demonstrates both together.  At the moment, I can't zip up the AI material.  (Another reason AI is not ready for prime time without OPA.)

 

I hope for more research into the following:

  • Stronger Service Cloud / Siebel integration with AI.
  • Generic OPA connectors to databases
    • Assertion: AI tools are built to retrieve data from and provide data to databases.
    • Assertion: Shared data is currently the best mechanism for interacting with AI.
    • Examples (my current preferences):
      • A solution such as Mantis should be utilized or built into OPA by Oracle
      • OPA should have a solution to dynamically create a data store for attributes tagged by modeler.
  • AI integration during policy modeling by the analysts.

 

My final hope is that the above blog post creates some discussion and thinking in the community.  I assert that AI and OPA are an excellent fit to combine machine and human determinations.

 

Comment if you have additional thoughts.