Conducting AI Impact Assessments Before Deployment: The Agency Methodology
A healthcare AI agency deployed a patient triage model at a regional hospital network. The model prioritized patients based on predicted acuity, routing them to appropriate care levels. It worked well in the emergency department, until someone noticed that elderly patients living alone were consistently undertriaged. The model had learned that patients with frequent prior visits were lower acuity (since frequent visitors often had chronic but manageable conditions), but for isolated elderly patients, frequent visits actually indicated deteriorating health with no home support system. Two patients experienced delayed care for serious conditions before the pattern was caught. A proper impact assessment before deployment would have identified this population-specific risk by examining how the model's logic interacted with different patient demographics and living situations.
AI impact assessments are the structured process of evaluating how an AI system will affect the people, communities, and environments it touches. They go beyond technical evaluation (does the model work?) to examine societal implications (what happens when this model is part of the real world?). For agencies, impact assessments are both a governance best practice and an increasingly common regulatory requirement. And they are one of the most effective tools you have for preventing the kind of post-deployment disasters that damage clients, communities, and your agency's reputation.
What an AI Impact Assessment Actually Is
An AI impact assessment is a systematic evaluation of the potential effects of an AI system on individuals, groups, and society. It examines both intended effects (the benefits the system is designed to provide) and unintended effects (the harms it might cause as side effects or through misuse).
Impact assessments have a long history in other domains. Environmental impact assessments have been required for major construction projects for decades. Privacy impact assessments are standard practice for data processing activities under GDPR and other privacy frameworks. AI impact assessments apply the same logic to artificial intelligence systems.
What an impact assessment is not:
- It is not a technical evaluation of model performance (that's model validation)
- It is not a security assessment (though security is one consideration)
- It is not a fairness audit (though fairness is one consideration)
- It is not a risk register (though it informs risk management)
An impact assessment is broader than any of these individual evaluations. It considers the full range of effects that the AI system could have in its deployment context, including effects that technical metrics don't capture.
Why Agencies Should Lead Impact Assessments
Some agencies assume that impact assessments are the client's responsibility. After all, the client is the one deploying the system and operating in the regulated environment. But agencies are uniquely positioned to lead or co-lead impact assessments for several reasons.
You understand the technology. Impact assessments require technical knowledge about how the AI system works, what its failure modes are, and how it might behave in edge cases. Your agency has that knowledge; your client may not.
You have cross-project perspective. Your agency has seen how AI systems behave across multiple deployments and industries. You can anticipate risks that the client, who may be deploying AI for the first time, wouldn't recognize.
It's a value-added service. Offering impact assessments positions your agency as a governance-aware partner rather than a code shop. Enterprise clients value this capability and will pay for it.
It protects you. If you deliver an AI system that causes harm and you didn't conduct or recommend an impact assessment, you bear some responsibility for the oversight. If you conducted a thorough assessment and communicated the risks, you've demonstrated due diligence.
The Impact Assessment Framework
Our framework organizes the assessment into seven domains. For each domain, you identify potential impacts, assess their likelihood and severity, and propose mitigation measures.
Domain 1: Individual Rights and Autonomy
How does the AI system affect the rights and autonomy of individual people?
Key questions to answer:
- Does the system make or influence decisions about individuals? If so, what kind of decisions?
- Can individuals understand why the system made a particular decision about them?
- Can individuals challenge or appeal the system's decisions?
- Does the system respect individuals' right to privacy?
- Does the system affect individuals' ability to make free and informed choices?
- Does the system treat individuals with dignity and respect?
Common impacts in this domain:
- Automated decisions that individuals can't understand or challenge
- Loss of privacy through data collection, profiling, or surveillance
- Manipulation of behavior through persuasive or addictive design
- Erosion of agency when individuals are forced to interact with automated systems without alternatives
For agencies, this domain requires:
- Documenting the types of decisions the system makes and their consequences for individuals
- Evaluating the system's explainability and recommending appropriate explanation mechanisms
- Designing appeal and override processes for consequential decisions
- Assessing privacy implications beyond what a standard privacy impact assessment covers
Domain 2: Fairness and Non-Discrimination
How does the AI system affect different groups of people, and does it create or amplify inequalities?
Key questions to answer:
- Which groups of people are affected by the system, and how?
- Does the system perform differently for different demographic groups?
- Could the system create or reinforce stereotypes?
- Does the system provide equitable access and outcomes across populations?
- Are there groups that are excluded from the system's benefits or disproportionately exposed to its risks?
Common impacts in this domain:
- Disparate accuracy or error rates across demographic groups
- Exclusion of populations that don't fit the system's assumptions (e.g., non-English speakers, people with disabilities)
- Reinforcement of historical inequalities through training on biased data
- Creation of feedback loops that amplify initial disparities over time
For agencies, this domain requires:
- Comprehensive fairness testing across relevant dimensions (see our fairness metrics guide)
- Analysis of potential feedback loops and amplification effects
- Assessment of accessibility for diverse user populations
- Identification of populations that may be excluded or underserved
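To make the fairness-testing requirement concrete, here is a minimal sketch of performance disaggregation: compute accuracy separately per group and flag any group that lags well behind the best-performing one. The field names (`prediction`, `actual`, `age_band`) and the 0.05 tolerance are illustrative assumptions, not part of the framework; real assessments should disaggregate across every dimension relevant to the deployment context.

```python
from collections import defaultdict

def disaggregated_accuracy(records, group_key):
    """Compute accuracy separately for each value of group_key.

    records: list of dicts with 'prediction', 'actual', and the
    grouping attribute (e.g. 'age_band'). Field names are
    illustrative, not prescribed by the assessment framework.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        g = r[group_key]
        total[g] += 1
        if r["prediction"] == r["actual"]:
            correct[g] += 1
    return {g: correct[g] / total[g] for g in total}

def accuracy_gaps(per_group, tolerance=0.05):
    """Flag groups whose accuracy trails the best group by more than tolerance."""
    best = max(per_group.values())
    return {g: best - acc for g, acc in per_group.items()
            if best - acc > tolerance}

# Toy data: the model is perfect for one age band, coin-flip for another.
records = [
    {"prediction": 1, "actual": 1, "age_band": "18-64"},
    {"prediction": 0, "actual": 0, "age_band": "18-64"},
    {"prediction": 1, "actual": 0, "age_band": "65+"},
    {"prediction": 1, "actual": 1, "age_band": "65+"},
]
per_group = disaggregated_accuracy(records, "age_band")
gaps = accuracy_gaps(per_group)
```

A gap flagged here is an input to the assessment, not a verdict: it tells you which populations need closer qualitative scrutiny in Domain 2.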
Domain 3: Safety and Security
Could the AI system cause physical harm, and is it secure against malicious use?
Key questions to answer:
- Could the system's outputs lead to physical harm to any person?
- What happens when the system fails? Are failures safe or dangerous?
- Is the system resilient to adversarial manipulation?
- Could the system be weaponized or misused for harmful purposes?
- What are the cybersecurity implications of the system?
Common impacts in this domain:
- Physical safety risks in systems that control machines, vehicles, or medical equipment
- Psychological harm from systems that generate content, make recommendations, or moderate communication
- Security vulnerabilities that could be exploited for data theft, fraud, or sabotage
- Dual-use risks where a benign system could be repurposed for harmful applications
Domain 4: Transparency and Accountability
Can stakeholders understand the system, and is there clear accountability for its behavior?
Key questions to answer:
- Do affected individuals know they are interacting with an AI system?
- Can the system's decision-making process be explained to relevant stakeholders?
- Is there clear accountability for the system's outputs and their consequences?
- Are there adequate oversight mechanisms in place?
- Is the system's behavior auditable?
Common impacts in this domain:
- Lack of awareness that AI is involved in decision-making
- Inability to explain decisions to affected individuals, regulators, or courts
- Diffuse accountability where no single party takes responsibility for outcomes
- Insufficient audit trails that prevent meaningful oversight
Domain 5: Labor and Economic Impact
How does the AI system affect workers and economic structures?
Key questions to answer:
- Will the system displace workers or change job roles?
- How will affected workers be supported through the transition?
- Does the system create new economic opportunities or concentrate existing ones?
- Does the system affect working conditions, supervision, or performance evaluation?
- Are the economic benefits of the system distributed fairly?
Common impacts in this domain:
- Job displacement without adequate retraining or transition support
- Increased surveillance and micromanagement of workers
- Concentration of economic benefits among those who already have advantages
- Degradation of job quality through deskilling or excessive automation
Domain 6: Social and Democratic Impact
How does the AI system affect social relationships, communities, and democratic processes?
Key questions to answer:
- Does the system affect how people interact with each other?
- Could the system be used to manipulate public opinion or democratic processes?
- Does the system affect community cohesion or social trust?
- Does the system concentrate power or enable abuse of power?
Common impacts in this domain:
- Filter bubbles and echo chambers that reduce exposure to diverse perspectives
- Manipulation of information environments through personalized content
- Erosion of trust in institutions when AI decisions are perceived as opaque or unfair
- Concentration of power in entities that control AI systems and the data they depend on
Domain 7: Environmental Impact
What are the environmental costs and benefits of the AI system?
Key questions to answer:
- What are the energy requirements for training and operating the system?
- What are the carbon emissions associated with the system's infrastructure?
- Does the system contribute to or mitigate environmental problems?
- Are there more environmentally sustainable alternatives that achieve similar outcomes?
Common impacts in this domain:
- Energy consumption from training large models and running inference at scale
- Electronic waste from hardware used to support AI systems
- Environmental benefits when AI is used to optimize resource usage, reduce waste, or monitor environmental conditions
Conducting the Assessment: A Practical Process
Phase 1: Scoping (1-2 days)
Define the boundaries of the assessment.
- Describe the AI system and its intended use in detail
- Identify the deployment context, including the organization, industry, and geography
- List all stakeholders who are affected by or involved in the system
- Determine which impact domains are relevant (not all seven will apply to every project)
- Set the timeline and assign responsibilities for the assessment
Phase 2: Research and Analysis (3-5 days)
Gather information and analyze potential impacts.
- Interview stakeholders from multiple perspectives (developers, deployers, affected individuals, domain experts)
- Review similar deployments and their outcomes
- Analyze the system's technical characteristics (training data, model behavior, failure modes) for implications in each impact domain
- Identify potential impacts, both positive and negative
- Assess the likelihood and severity of each negative impact
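The likelihood-and-severity step above can be operationalized with a simple ordinal risk matrix. This is one plausible sketch: the 1-5 scales, the multiplicative score, and the significance threshold of 12 are illustrative defaults to calibrate against your own risk appetite, not values the methodology prescribes.

```python
# Ordinal scales for scoring impacts identified in Phase 2.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3,
              "likely": 4, "almost_certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3,
            "major": 4, "severe": 5}

def risk_score(likelihood, severity):
    """Score an impact as likelihood x severity on the ordinal scales."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def prioritize(impacts, threshold=12):
    """Return impacts at or above the threshold, highest score first.

    impacts: list of (description, likelihood, severity) tuples.
    The threshold of 12 is an illustrative cut-off for which impacts
    proceed to mitigation planning in Phase 3.
    """
    scored = [(risk_score(l, s), desc) for desc, l, s in impacts]
    return sorted((x for x in scored if x[0] >= threshold), reverse=True)

impacts = [
    ("Undertriage of isolated elderly patients", "possible", "severe"),
    ("Occasional duplicate notifications", "likely", "negligible"),
]
significant = prioritize(impacts)
```

The output feeds directly into Phase 3: every impact that clears the threshold needs at least one proposed mitigation.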
Phase 3: Mitigation Planning (2-3 days)
Develop strategies to address identified risks.
- For each significant negative impact, propose one or more mitigation measures
- Evaluate the feasibility and effectiveness of each measure
- Identify residual risks that cannot be fully mitigated
- Develop monitoring plans to detect impacts that emerge after deployment
- Establish response procedures for impacts that materialize
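The monitoring plan can be as simple as comparing live per-group metrics against the baselines recorded at assessment time and escalating when degradation exceeds a tolerance. A minimal sketch, assuming accuracy-style metrics and an illustrative 0.03 tolerance:

```python
def check_drift(baseline, live, tolerance=0.03):
    """Compare live per-group metrics against assessment-time baselines.

    Returns the groups whose live metric has degraded by more than
    `tolerance`: the cases the response procedures should escalate.
    The metric, group labels, and tolerance are illustrative assumptions.
    """
    alerts = {}
    for group, base in baseline.items():
        drop = base - live.get(group, 0.0)
        if drop > tolerance:
            alerts[group] = round(drop, 3)
    return alerts

# Baselines from the assessment vs. metrics observed after deployment.
baseline = {"18-64": 0.91, "65+": 0.89}
live = {"18-64": 0.90, "65+": 0.82}
alerts = check_drift(baseline, live)
```

The triage example in the introduction is exactly the failure this kind of check is meant to catch early: a population-specific degradation that aggregate metrics hide.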
Phase 4: Stakeholder Engagement (1-2 days)
Share the assessment with affected stakeholders and incorporate their input.
- Present findings to the client's leadership, legal, and compliance teams
- Where appropriate, engage representatives of affected communities
- Incorporate feedback into the assessment
- Document stakeholder input and any disagreements
Phase 5: Documentation and Recommendation (1-2 days)
Compile the assessment into a formal report.
- Executive summary with key findings and recommendations
- Detailed analysis for each impact domain
- Mitigation plan with owners, timelines, and success criteria
- Monitoring plan for post-deployment impact tracking
- Recommendation on whether to proceed, proceed with conditions, or halt
Scaling Impact Assessments Across Your Agency
As you conduct more impact assessments, you'll develop efficiencies that reduce the time and effort required.
Build impact assessment templates for your most common project types. A template for a customer churn model will have different default considerations than a template for a content moderation system.
Create a risk catalog that documents the impacts you've identified across projects. Over time, this catalog becomes a reference that helps your team identify relevant impacts more quickly.
Develop standard mitigation playbooks for common impacts. If you frequently encounter disparate accuracy risks, document the standard mitigation approaches your agency uses and their effectiveness.
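A risk catalog does not need heavyweight tooling; a small structured record per recurring impact is enough to make prior experience searchable during scoping. This schema is a hypothetical sketch; the field names and example entry are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One recurring impact, linked to where it was seen and what worked."""
    impact: str
    domain: str                       # one of the seven assessment domains
    projects_seen: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)  # (measure, effectiveness notes)

# Hypothetical entry; project name and notes are placeholders.
catalog = [
    CatalogEntry(
        impact="Disparate accuracy across age groups",
        domain="Fairness and Non-Discrimination",
        projects_seen=["patient-triage-project"],
        mitigations=[("Reweight training data by age band",
                      "Reduced the gap in validation; see project retro")],
    ),
]

def lookup(catalog, domain):
    """During Phase 1 scoping, pull prior impacts seen in a given domain."""
    return [e for e in catalog if e.domain == domain]
```

Each time an assessment surfaces a new impact or a mitigation's results come in, the entry is updated, which is what turns the catalog into the playbook described above.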
Train your team so that every project lead can conduct a basic impact assessment. Reserve deep assessments, led by your most experienced team members, for high-risk projects.
Automate where possible. Some aspects of impact assessment, such as fairness testing and performance disaggregation, can be automated. Build these into your standard pipeline so they're available as inputs to every assessment.
Your Next Steps
This week: Select a current or recent project and conduct a rapid impact assessment using the seven-domain framework. Allow yourself 4-6 hours. The goal is to practice the methodology and identify gaps in your current approach.
This month: Create an impact assessment template for your most common project type. Include the seven domains, pre-populated with common impacts and standard mitigation measures.
This quarter: Implement impact assessments as a standard part of your project lifecycle. Require assessments for all projects that meet your defined risk threshold. Track the outcomes of assessed projects to evaluate the assessment's effectiveness.
AI impact assessments are not bureaucratic overhead. They are a practical tool for identifying risks that technical evaluations miss, for protecting your clients and their stakeholders, and for demonstrating the kind of governance maturity that enterprise buyers increasingly demand. Build the capability now, and you will deliver better AI systems: systems that work not just technically, but for the people they affect.