Industry
Machine Learning Models
Client
Confidential
Role
Product Manager, UX Research
Unpacking the black box: making AI-based visa decisions clearer
Visa applications are complex, but AI-based decision making can streamline the process. However, applicants and case managers often struggle with one key issue: why was a case approved or denied? Without clear justification, applicants find it difficult to appeal, and case managers face challenges in verifying decisions. To address this, we designed a case management system that provides transparency into AI-based visa decisions, ensuring case managers can better understand recommendations and applicants receive clear feedback. This case study is based on a real-world explainability project but has been adapted to a different industry to protect confidentiality. While the context differs, the core challenge of making automated decisions understandable remains the same.
Goals
Improve visibility
Increase transparency and visibility into AI-based visa decisions.
Reduce decision-insight inquiries by 50%
Support case managers with better insights to verify or override AI recommendations, so they no longer need to raise support tickets to clarify the reasons behind a decision.
Enhance trust in the AI model
Build trust in the model by providing insight into the decisioning process.
Approach: understanding the problem through quantitative and qualitative research
In my research approach, I combined data analysis, user interviews, and process mapping to uncover key challenges in visa decision transparency. Internal data from applicant inquiries and support tickets provided quantitative insight into where users faced confusion. I then interviewed case managers and applicant-support teams to understand how they interacted with the system and where additional explanation was needed. Mapping existing workflows allowed us to pinpoint critical decision moments and identify gaps in how AI recommendations were presented. Together, these methods defined the core usability and communication challenges we needed to address.
One of the biggest obstacles was that previous attempts to improve transparency had failed, owing to the complexity of the AI models and an organizational reluctance to introduce change. To move forward, we worked directly with the data science team to:
Themes
Break down AI outputs into structured decision factors
Instead of a single opaque verdict, we categorized the model's reasoning into digestible, explainable components (e.g., "High-Risk Travel Pattern" or "Incomplete Documentation").
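To make this concrete, the factors could be modeled roughly as below. This is a minimal sketch in TypeScript; every name and field is hypothetical, since the production schema is confidential.

```ts
// Hypothetical shape for a structured decision factor.
// All names and fields are illustrative, not the production schema.
type FactorCategory =
  | "HighRiskTravelPattern"
  | "IncompleteDocumentation"
  | "FinancialInconsistency";

interface DecisionFactor {
  category: FactorCategory; // explainable label surfaced to case managers
  weight: number;           // relative influence on the recommendation, 0..1
  evidence: string[];       // application fields that triggered the factor
  description: string;      // plain-language explanation shown in the UI
}
```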
Introduce a two-layer explanation system
A high-level summary provided a quick overview, while a deep-dive section detailed the exact risk factors or supporting data that influenced the decision.
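A minimal sketch of what such a two-layer payload could look like, reusing the hypothetical `DecisionFactor` from the previous sketch:

```ts
// Hypothetical two-layer explanation payload: a short summary for the
// case overview, plus the detailed factors behind a deep-dive view.
interface DecisionExplanation {
  recommendation: "approve" | "deny" | "refer";
  summary: string;           // layer 1: high-level overview for quick scanning
  factors: DecisionFactor[]; // layer 2: deep dive, as sketched above
  modelVersion: string;      // which model version produced the recommendation
}
```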
Ensure human oversight remained a key factor
The AI provided recommendations, but case managers retained the ability to review, override, or request further details, and to feed their assessments back into the process.
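As a sketch of how that oversight could be recorded, assuming hypothetical names throughout, an override might be captured together with the AI recommendation so the two can be audited side by side:

```ts
// Hypothetical record of a human review action, stored alongside the
// AI recommendation for auditing and later model feedback.
interface CaseReview {
  caseId: string;
  aiRecommendation: "approve" | "deny" | "refer";
  officerAction: "accept" | "override" | "request_details";
  justification?: string; // required when the recommendation is overridden
  reviewedAt: Date;
}

function recordReview(review: CaseReview): void {
  if (review.officerAction === "override" && !review.justification) {
    throw new Error("Overrides must include a written justification.");
  }
  // Persisting the review for audit and feedback is omitted here.
}
```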