Deployment Phase Activity Booklet (text only)
Published 5 June 2025
These materials were produced by The Alan Turing Institute through an extended engagement with personnel at the UK Ministry of Justice.
Ethics Process
Introduction to the SAFE-D Framework
- Read the Introduction Booklet and familiarise yourself with the Project Lifecycle Model.
SAFE-D Principles Booklet
- Read the SAFE-D Principles Booklet and familiarise yourself with the principles and their core attributes.
Design Phase Activity Booklet
- SAFE-D Identification Workshop Exercise
- Litmus Test
- Stakeholder Engagement Worksheet
- Additional activities where relevant
Development Phase Activity Booklet
- SAFE-D Reflection Workshop Exercise
- Development Phase Questionnaire
- Stakeholder Engagement Worksheet
- Additional activities where relevant
Deployment Phase Activity Booklet
- SAFE-D Assurance Workshop Exercise
- Deployment Phase Questionnaire
- Stakeholder Engagement Worksheet
- Additional activities where relevant
System Deployment Phase
System Deployment:
Focuses on the safe implementation and use of the system, including ongoing monitoring and updates.
Lower-level lifecycle stages and their ethical significance
System Implementation:
This process involves putting a model into action and integrating it into an operational setting, allowing users to interact with it.
Ethical significance:
The effectiveness of the model depends on how well the system is implemented.
There are two key types of implementation:
- Technical Implementation: This involves creating the hardware and software needed to support the model (like servers and interfaces). It’s crucial to ensure the system is secure, efficient, and accessible.
- Social or Organisational Implementation: This focuses on how the technical system fits within wider social and organisational practices, ensuring that users are informed and the system aligns with existing practices.
User Training:
This refers to any support or skill-building provided to individuals or groups who need to operate a data-driven technology, especially in safety-critical situations.
Ethical significance:
- User training is usually not done by the same team that created the system. While developers might provide documentation, it’s often not enough; more formal training sessions may be needed, especially for complex systems.
- Poor training can lead to issues like algorithm aversion, where users distrust a reliable system, or automation bias, where they over-trust an unreliable one.
System Use & Monitoring:
Typically, metrics and evaluation methods are used to track the system’s performance, ensuring it maintains or improves upon its initial performance.
Ethical significance:
- Depending on its design, an AI system can allow for continuous feedback and learning when deployed in a physical or virtual environment (like robotic systems using reinforcement learning or digital twins linked to real counterparts).
- Because machine learning models and AI systems can behave dynamically and unpredictably, ongoing monitoring and feedback are crucial to detect issues like model drift that could harm individuals or groups.
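The ongoing monitoring described above can be made concrete with a short sketch. The following Python snippet is illustrative only: the class name, window size, and accuracy threshold are invented for the example, not prescribed by the SAFE-D framework. It keeps a rolling window of recent prediction outcomes and flags when performance falls below a pre-agreed level.

```python
from collections import deque

# Illustrative sketch: track a rolling window of prediction outcomes and
# flag when accuracy drops below a pre-agreed threshold. Window size and
# threshold are assumptions for this example.
class PerformanceMonitor:
    def __init__(self, window_size=100, accuracy_threshold=0.9):
        self.outcomes = deque(maxlen=window_size)  # 1 = correct, 0 = incorrect
        self.threshold = accuracy_threshold

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.threshold

monitor = PerformanceMonitor(window_size=10, accuracy_threshold=0.8)
for pred, actual in [(1, 1), (0, 1), (1, 0), (0, 0), (1, 1)]:
    monitor.record(pred, actual)
print(monitor.rolling_accuracy())  # 0.6: three of five predictions correct
print(monitor.needs_review())      # True: below the 0.8 threshold
```

In practice the metric, window, and threshold should be agreed with stakeholders before deployment, so that a flagged drop triggers a defined review process rather than an ad hoc response.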
Model Updating & Decommissioning:
If monitoring a model or system reveals vulnerabilities or poor performance, it may be necessary to update the model through retraining or to remove the system if it no longer works effectively.
Ethical significance:
- An algorithmic model that changes over time might need updates or removal from use. While improvements to the system’s infrastructure (like speed or security) are important, the key focus should be on the model itself (such as its parameters and features).
- A significant issue to consider is model drift, which can happen when the data used to train the model changes (like fluctuating house prices) or when the meaning of features shifts due to changes in societal practices or norms.
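One simple and widely used way to quantify the data drift described above is the Population Stability Index (PSI), which compares the bucketed distribution of a feature at training time with its distribution in live data. The sketch below is illustrative only; the bucket fractions and the 0.2 "significant drift" rule of thumb are common conventions, not SAFE-D requirements.

```python
import math

# Illustrative sketch of the Population Stability Index (PSI): a higher
# score means the live distribution has drifted further from the baseline.
def psi(expected_fractions, actual_fractions, floor=1e-6):
    """Compare two bucketed distributions; each input is a list of
    per-bucket fractions that sums to 1."""
    total = 0.0
    for e, a in zip(expected_fractions, actual_fractions):
        e = max(e, floor)  # floor avoids log(0) for empty buckets
        a = max(a, floor)
        total += (a - e) * math.log(a / e)
    return total

# Feature distribution at training time vs. in live data (fractions per bucket)
baseline = [0.25, 0.25, 0.25, 0.25]
live     = [0.10, 0.20, 0.30, 0.40]
score = psi(baseline, live)
print(round(score, 3))  # 0.228
print(score > 0.2)      # True: rule-of-thumb flag for significant drift
```

A check like this can run on a schedule against incoming data, with a flagged score prompting the retraining or decommissioning decisions discussed above.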
Deployment Phase SAFE-D Reflection
Workshop Exercise: SAFE-D Deployment Checkpoint
Activity Overview
Now that you’re in the System Deployment Phase, it’s important to reflect on how the SAFE-D Principles could be impacted as you prepare to deploy your system. Use the following steps and prompts to guide your discussion and documentation.
Steps:
1) Review previous work
- Look back at the answers and action strategies you developed during previous activities to refresh your memory
2) Use the Miro board
- Track any changes, progress, and new questions that have arisen in the Deployment Phase
- You can create a new section on the board, or add post-its in a different colour to your original answers
3) Evaluate SAFE-D goals & objectives
- Revisit the objectives you set during the SAFE-D Specification exercise. Assess whether these strategies are working effectively or if they need adjustments
Prompt questions for discussion:
- What changes have occurred since the Development Phase that impact the SAFE-D Principles?
- How has the project progressed in relation to the ethical principles? Are there any areas where you feel you have made significant progress?
- Have any new questions or concerns emerged regarding the SAFE-D Principles as the project develops?
- Are the strategies you implemented in the Design Phase proving effective? What challenges have you encountered?
- Have you received any feedback from stakeholders that may influence your assessment of the SAFE-D Principles?
Documentation
Make sure to document your reflections and discussions in a clear format. This will not only provide a record of your progress but also facilitate ongoing communication among team members and stakeholders.
Importance of Reflection
Reflecting on the SAFE-D Principles at this stage is crucial for maintaining ethical integrity throughout the project. By actively assessing your project’s alignment with these principles, you can identify potential risks and ensure that ethical considerations are prioritised as you move forward.
Deployment Phase Questionnaire
The Deployment Phase Questionnaire is the next formal checkpoint in the ethics process.
Steps:
Use previous insights
- Leverage the insights and discussions from your previous SAFE-D workshops and assessments to inform your responses in the assessment
Assign team members
- Consider nominating different team members to collaborate on various sections of the assessment. This allows for diverse input and thorough exploration of each principle
Complete the questionnaire
- The Deployment Phase Questionnaire consists of a series of questions organised around the SAFE-D principles. Each principle has 3-4 questions that require a simple Yes/No answer, along with justification or evidence
Justification and evidence
- Ensure that for each answer, you provide sufficient justification or evidence. This could include references to workshop insights, project documentation, or stakeholder feedback
Track changes and decisions
- Record any changes and decisions made during the assessment. Remember that its purpose is not to hinder innovation but to help identify and mitigate risks early in the project lifecycle
Document the assessment
- After completing the assessment, compile and document the responses. This will serve as a useful reference for future stages of the project and facilitate transparency with stakeholders
Sustainability
3.1a: Is the team confident in its ability to monitor the model’s performance in case of any external changes? [Robustness]
3.1b: Are there monitoring mechanisms in place for all target objectives, including ethical objectives? [Safety]
3.1c: Is the system designed in a way that is at a suitable level of security, performance and accessibility for the intended usage? [Security, accuracy & performance]
Accountability
3.2a: Have any gaps in the human feedback mechanism for errors been recorded (e.g. can high-cost errors missed by humans be caught)? [Traceability]
3.2b: Is there an accountable party in charge of monitoring for any changes in key performance indicators? [Answerability]
3.2c: Is there an escalation and intervention process in place if key indicators fall outside of a pre-defined threshold? [Auditability]
3.2d: Is there appropriate support or upskilling available where required for users of the system? [Accessibility]
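The escalation process asked about in 3.2c can be sketched as a simple check of each key performance indicator against a pre-defined band. The indicator names and thresholds below are invented for illustration; real values would come from the team's own agreed objectives.

```python
# Illustrative sketch for 3.2c: compare each key performance indicator
# against a pre-defined band and list those needing intervention.
# Indicator names and bands are assumptions for this example.
THRESHOLDS = {
    "accuracy":        (0.85, 1.00),  # (minimum acceptable, maximum)
    "response_time_s": (0.00, 2.00),
    "appeal_rate":     (0.00, 0.05),
}

def indicators_to_escalate(latest_values):
    """Return the indicators whose latest value falls outside its band."""
    flagged = []
    for name, value in latest_values.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            flagged.append(name)
    return flagged

print(indicators_to_escalate(
    {"accuracy": 0.81, "response_time_s": 1.2, "appeal_rate": 0.07}
))  # ['accuracy', 'appeal_rate']
```

The important point for accountability is less the code than the agreement behind it: each band should have a named accountable party and a documented intervention route before the system goes live.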
Fairness
3.3a: If there is a feedback loop, is it free from reinforcing any existing biases? Is there any user interaction with the output? [Bias mitigation]
3.3b: Is the model being monitored for any shifts in demographic characteristics? [Equality]
3.3c: Is performance variation between demographic groups being tracked? [Non-discrimination]
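The performance-variation tracking asked about in 3.3c can be sketched as a per-group accuracy comparison. The group labels, records, and tolerance below are invented for the example; the appropriate grouping and tolerance should be agreed with stakeholders.

```python
from collections import defaultdict

# Illustrative sketch for 3.3c: compute accuracy per demographic group
# and flag when the gap between groups exceeds an agreed tolerance.
def accuracy_by_group(records):
    """records: iterable of (group, prediction, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, actual in records:
        total[group] += 1
        if prediction == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]
rates = accuracy_by_group(records)
print(rates)  # {'group_a': 0.75, 'group_b': 0.5}
max_gap = max(rates.values()) - min(rates.values())
print(max_gap > 0.1)  # True here: gap exceeds the assumed 0.1 tolerance
```

Run regularly on live outcomes, a report like this gives the team evidence for the Yes/No answers and justifications the questionnaire asks for.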
Explainability
3.4a: After deployment, can explanations be produced as required, at both the global and local level? [Interpretability]
3.4b: Are the explanations stable over model re-training cycles? [Responsible model selection]
3.4c: Are explanations reviewed before being shared with the appropriate stakeholders? [Implementation & user training]
Data Responsibility
3.5a: Are appropriate measures in place to protect personal and sensitive data in live environments? [Responsible data management, legal & organisational compliance]
3.5b: Are data quality issues in any incoming datasets being monitored for potential impact on the model? [Adequacy of quantity & quality]
3.5c: Are there regular refreshes to legal and compliance assessments? [Legal & organisational compliance]
3.5d: Is there a provision for taking the model offline or de-commissioning it, and a fall-back plan if the model is failing to meet its requirements? [Responsible data management]