OCI AI Vision: Human-in-the-Loop Feedback Interface for Custom Models
Problem
OCI AI Vision custom models allow users to train vision models on labeled datasets created via the OCI Data Labeling service. Once trained and deployed, these models perform inference on images, returning predictions such as object classifications or detections along with confidence scores.
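For context, a low-confidence prediction is exactly the kind of result a reviewer would want surfaced. The sketch below shows how results might be routed for human review; the dictionary field names (`label`, `confidence`) and the threshold are illustrative assumptions, not the exact OCI AI Vision response schema.

```python
# Illustrative only: field names and threshold are assumptions,
# not the actual OCI AI Vision inference response schema.
predictions = [
    {"label": "scratch", "confidence": 0.96},
    {"label": "dent", "confidence": 0.41},
]

REVIEW_THRESHOLD = 0.70  # hypothetical cut-off for human review

def needs_review(pred, threshold=REVIEW_THRESHOLD):
    """Flag a prediction whose confidence falls below the threshold."""
    return pred["confidence"] < threshold

# Only the 0.41-confidence "dent" prediction is queued for review.
to_review = [p for p in predictions if needs_review(p)]
```

In a feedback workflow, `to_review` would feed the proposed interface rather than a plain list.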
However, once models are trained and in production, there is currently no native mechanism for users to provide feedback on inference results. This limits the ability to:
- Validate model performance in real-world conditions
- Capture misclassifications or edge cases
- Continuously improve model accuracy in a governed, auditable manner
Proposed Solution
Introduce a lightweight Vision Feedback Interface that lets users review model inference outputs and provide structured feedback directly within OCI AI Vision.
For each inference result, authorized users can:
- Accept prediction: Confirm the model output is correct.
- Correct label: Amend the prediction (e.g. change the class label or redraw a bounding box).
- Mark as ambiguous / out-of-scope: Flag images that represent edge cases.
The interface could even resemble the Data Labeling service UI, but initialized from model outputs rather than raw datasets.
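To make the "governed, auditable" requirement concrete, each review action could be captured as a structured record. The sketch below is purely hypothetical (no such resource exists in OCI AI Vision today); all type and field names are assumptions chosen to mirror the three review actions proposed above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class FeedbackAction(Enum):
    # The three review actions proposed above.
    ACCEPT = "accept"        # prediction confirmed correct
    CORRECT = "correct"      # label or bounding box amended
    AMBIGUOUS = "ambiguous"  # edge case / out-of-scope image

@dataclass
class VisionFeedback:
    """Hypothetical feedback record; not an existing OCI resource."""
    image_id: str
    model_ocid: str
    predicted_label: str
    confidence: float
    action: FeedbackAction
    corrected_label: Optional[str] = None  # set when action == CORRECT
    reviewer: str = ""
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a reviewer corrects a low-confidence misclassification.
fb = VisionFeedback(
    image_id="img-001",
    model_ocid="ocid1.aivisionmodel.oc1..example",
    predicted_label="dent",
    confidence=0.41,
    action=FeedbackAction.CORRECT,
    corrected_label="scratch",
    reviewer="qa-reviewer",
)
```

Records like this could be exported back into a Data Labeling dataset for retraining, closing the feedback loop in an auditable way.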