Industry

Machine Learning Models

Client

Confidential

Role

Product Manager, UX Research

Unpacking the black box: making AI-based visa decisions clearer

Visa applications are complex, but AI-based decision making can streamline the process. However, applicants and case officers often struggle with one key question: why was a case approved or denied? Without clear justification, applicants find it difficult to appeal, and case managers face challenges in verifying decisions. To address this, we designed a case management system that provides transparency into AI-based visa decisions and recommendations, ensuring officers can better understand recommendations and applicants receive clear feedback. This case study is based on a real-world explainability project but has been adapted to a different industry to protect confidentiality. While the context differs, the core challenge of making automated decisions understandable remains the same.

Goals

Improve visibility

Increase transparency and visibility into AI-based visa decisions.

Reduce decision-insight inquiries by 50%

Support case managers with better insights to verify or override AI recommendations, so they no longer need to raise support tickets to clarify decision reasons.

Enhance trust in the AI model

Enhance trust in AI by providing insight into the decision-making process.

Approach: understanding the problem through quantitative and qualitative research

In my research approach, I combined data analysis, user interviews, and process mapping to uncover key challenges in visa decision transparency. Internal data from applicant inquiries and support tickets provided quantitative insight into where users faced confusion. I then conducted interviews with case managers and applicant support teams to understand how they interacted with the system and where additional explanation was needed. Mapping existing workflows allowed us to pinpoint critical decision moments and identify gaps in how AI recommendations were presented. These methods helped us define the core usability and communication challenges that needed to be addressed.

Solving the data challenge: making AI decisions readable

One of the biggest obstacles was that previous attempts at improving transparency had failed due to the complexity of AI models and reluctance to introduce changes. To move forward, we worked directly with the data science team to:

Themes

Break down AI outputs into structured decision factors

Instead of a single opaque verdict, we categorized reasoning into digestible, explainable components (e.g., "High-Risk Travel Pattern" or "Incomplete Documentation").

Introduce a two-layer explanation system

A high-level summary provided a quick overview, while a deep-dive section detailed the exact risk factors or supporting data that influenced the decision.

Ensure human oversight remained a key factor

The AI provided recommendations, but officers retained the tools to review, override, provide feedback, or request further details. A minimal data-model sketch of these concepts follows below.
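To make the structured factors and the two-layer explanation concrete, here is a minimal data-model sketch. It is illustrative only: the names (DecisionFactor, CaseDecision) and fields are assumptions for this write-up, not the schema used in the actual project.

```ts
// Hypothetical data model for explainable visa decisions (illustrative names).

// One structured, human-readable factor behind a recommendation.
interface DecisionFactor {
  category: string;         // e.g. "High-Risk Travel Pattern", "Incomplete Documentation"
  impact: number;           // relative contribution to the outcome, 0..1
  summary: string;          // one-line, plain-language explanation (layer 1)
  supportingData: string[]; // references to the evidence behind it (layer 2)
}

// The full recommendation shown to a case officer.
interface CaseDecision {
  caseId: string;
  recommendation: "approve" | "decline";
  confidence: number;        // model confidence, 0..1
  factors: DecisionFactor[]; // dynamic: the number and mix of categories vary per case
  reviewedBy?: string;       // set once a human officer verifies or overrides
  overridden?: boolean;
}
```

Keeping the factors as a flat, dynamic list reflects the constraint described later in the design work: the reason categories vary per case and cannot be fixed in advance.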

Design and iteration: putting insights in front of our users

With the information architecture and data foundation in place, we turned to the user experience. The existing system presented visa outcomes as Approved or Declined with little context and mainly raw data. We needed a way to display key decision factors without overwhelming users.

Introducing a visual decision summary

We replaced the simple approval and rejection label with a decision breakdown chart highlighting the top factors influencing the outcome. The challenge was choosing a visualization that worked well with the reason categories: they were dynamic and could vary per case, and their impact could be widely distributed, which meant presenting more than the five reason categories originally anticipated.
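One way to handle the dynamic number of categories is to size the chart by weight rather than by a fixed count. The sketch below, reusing the hypothetical DecisionFactor type from earlier, keeps every factor above a minimum impact rather than a hard top-5; the threshold value is an assumption, not a figure from the project.

```ts
// Illustrative chart-data preparation: instead of a fixed top-5, keep every
// factor that carries meaningful weight, so widely distributed impact is not hidden.
function toChartSeries(factors: DecisionFactor[], minImpact = 0.05) {
  return factors
    .filter((f) => f.impact >= minImpact)    // keep every meaningful category
    .sort((a, b) => b.impact - a.impact)     // most influential first
    .map((f) => ({ label: f.category, value: f.impact }));
}
```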

Decision reason narrative

A new "Learn more" functionality provided officers with AI-generated reasoning in plain language, allowing them to review supporting data at a glance.

Balancing AI and human decision-making

To ensure case managers could still provide feedback and perform overrides effectively, we made the calls to action highly visible. In addition, an activity log was added to give further context on what had happened to each case.
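For illustration, an activity log for this kind of workflow could be modelled as shown below. The entry shape, action names, and the example values are assumptions made for this case study, not taken from the real system.

```ts
// Hypothetical shape of an activity-log entry (illustrative names and values).
interface CaseActivity {
  caseId: string;
  timestamp: string; // ISO 8601
  actor: "model" | "case_officer";
  action: "recommended" | "reviewed" | "overridden" | "feedback_submitted";
  note?: string;     // e.g. the officer's override reason
}

// Example: an officer overriding an automated decline, with the reason recorded.
const exampleEntry: CaseActivity = {
  caseId: "CASE-1042",
  timestamp: "2024-03-01T10:15:00Z",
  actor: "case_officer",
  action: "overridden",
  note: "Missing document supplied after initial assessment.",
};
```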

Finalising the case dashboard and case detail screen

Once the core concepts were validated, we moved to high-fidelity design, focusing on two critical areas: the case dashboard and the case detail screen. Before the full rollout, beta testing was conducted with a select group of case officers to ensure the system met usability expectations. Feedback from this phase led to refinements, such as improved decision summaries, clearer visual hierarchies, and adjustments to how overrides were displayed.

Outcome and impact: a system users can trust

Similar implementations in other industries have shown that providing structured decision insights leads to faster verification, reduced inquiries, and increased trust in AI-powered decision making. By integrating explainability into the system, case managers can work more efficiently, and applicants gain better visibility into their decisions. Early testing suggested that these improvements helped reduce confusion, streamline case reviews, and improve confidence in automated recommendations. This project reinforced the importance of bridging AI and human oversight to create fair, transparent, and actionable decision-making systems.