Transparency: Explaining Decisions Clearly

AI systems must be transparent in how they function and in the decisions they support. Care workers and service users, given appropriate training, must be able to understand how AI is being used and why particular decisions or recommendations are being made. By building in explainability, clear lines of responsibility, and disclosures about limitations and biases, an AI system can avoid becoming a “black box” and instead foster greater trust and accountability among users and care workers. Use of AI must be honest and transparent throughout.

The FAIR model’s second stage—Analysis of rights—asks us to consider how decisions are made and communicated, which aligns closely with the principle of transparency. In social care, both workers and those receiving care have a right to understand how AI is being used, what decisions it is influencing, and why.

Case Example:

A local authority introduced an AI system to help assess the level of home care hours an individual might need based on their health data. Initially, care workers and service users felt anxious about how decisions were being made, as they had little insight into which factors were being considered. Through a transparent process of sharing information, including open consultations and accessible documentation, the AI’s decision-making process was explained clearly. This gave everyone involved confidence that the system was working fairly and could be challenged if necessary. By honouring transparency, we respect people’s right to be involved in decisions about their own care.