ISACA’s recent article, "Proven Strategies to Uncover AI Risks and Strengthen Audits", offers a timely and practical framework to help auditors evaluate AI systems more effectively. As organisations increasingly embed artificial intelligence into decision making, risk profiles evolve, creating challenges for IT risk functions, internal audit, and control assurance teams that must ensure reliability, explainability, and regulatory compliance. The article distils these complexities into a structured approach that aligns with core audit principles while introducing AI‑specific audit attributes.
AI systems are no longer niche tools; they are making consequential decisions in finance, operations, and customer engagement. This proliferation amplifies risks related to data integrity, algorithmic bias, explainability, security vulnerabilities, and governance, all of which require robust audit considerations. To address these, ISACA proposes a comprehensive audit framework built around seven key control attributes designed to guide risk and assurance professionals.

1. Data Quality Attribute
The foundation of any trustworthy AI system is high‑quality data. ISACA emphasises assessing the completeness, accuracy, and representativeness of training and operational datasets. Auditors should conduct data profiling, detect potential biases, and validate sources to ensure that downstream decisions are grounded in reliable inputs. Poor data quality can undermine control effectiveness and lead to erroneous AI outputs, with broad implications for internal controls and governance processes.
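To make this concrete, the following is a minimal sketch of the kind of data profiling an auditor might request, assuming Python with pandas; the file name and column names are hypothetical placeholders, not details from the ISACA article.

```python
# Minimal data-quality profile: completeness, duplicates, class balance.
# "training_data.csv" and "credit_decision" are hypothetical placeholders.
import pandas as pd

def profile_dataset(df: pd.DataFrame, label_col: str) -> dict:
    """Return simple data-quality indicators an auditor might review."""
    return {
        "row_count": len(df),
        # Completeness: share of missing values per column.
        "missing_rate_by_column": df.isna().mean().round(3).to_dict(),
        # Accuracy red flag: exact duplicate records.
        "duplicate_rows": int(df.duplicated().sum()),
        # Representativeness hint: distribution of the outcome label.
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }

df = pd.read_csv("training_data.csv")
for key, value in profile_dataset(df, label_col="credit_decision").items():
    print(key, value)
```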
2. Model Validation Attribute
Even the most advanced AI model is only as good as its validation. This attribute focuses on reviewing model development processes, testing against benchmarks, and ensuring that performance is consistent across expected use cases. Independent validation and performance benchmarking help auditors determine whether models behave as intended, an essential step when AI influences risk‑sensitive areas such as financial reporting or compliance monitoring.
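As a minimal sketch of what independent validation can look like in practice, the snippet below benchmarks a classifier on a held-out set, assuming a scikit-learn-style workflow; the synthetic data and the minimum-AUC acceptance criterion are illustrative assumptions, not figures from the article.

```python
# Independent validation against a held-out benchmark set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in data; a real review would use the organisation's own sets.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Benchmark on data the model never saw during development.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {acc:.3f}, ROC AUC: {auc:.3f}")

MIN_AUC = 0.80  # hypothetical acceptance criterion agreed at model approval
if auc < MIN_AUC:
    print("FINDING: model performs below its approved benchmark")
```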
3. Drift Monitoring Attribute
AI models can degrade or "drift" over time as real‑world data patterns shift. Effective audit programs need to ensure that mechanisms exist to monitor and respond to performance drift. This includes evaluating retraining schedules, performance logs, and alerting systems that identify when a model's behaviour deviates from expected norms.
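One widely used drift measure, not prescribed by ISACA but common in practice, is the Population Stability Index (PSI). The sketch below compares a training-time baseline with recent production inputs for a single feature; the 0.2 alert threshold is a conventional rule of thumb, not a figure from the article.

```python
# Population Stability Index (PSI) between baseline and recent data.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI for one feature; higher values indicate stronger drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # feature distribution at training time
current = rng.normal(0.4, 1.2, 5000)   # shifted production distribution
score = psi(baseline, current)
print(f"PSI = {score:.3f}")
if score > 0.2:  # conventional rule-of-thumb alert threshold
    print("ALERT: material drift detected; trigger review or retraining")
```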
4. Explainability Attribute
Explainability is essential for both auditability and stakeholder confidence. AI outputs must be interpretable so auditors can link decisions back to logic and rules. Tools and techniques such as SHAP and LIME can help demystify decision models, enabling auditors to communicate findings clearly and test whether decision logic aligns with governance expectations.
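Because the article names SHAP explicitly, here is a minimal sketch of how an auditor might generate SHAP explanations for a simple classifier; it assumes the shap and scikit-learn packages are installed and uses synthetic data.

```python
# SHAP explanations for a simple classifier (synthetic data).
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# shap.Explainer selects an appropriate algorithm for the model type.
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:100])

# Global view: which features most influence the model's decisions.
shap.plots.beeswarm(shap_values)
```

An auditor would then compare the dominant features against documented decision logic, for example to confirm the model is not leaning on proxies for attributes it should not use.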
5. Security Resilience Attribute
AI systems face threats from adversarial attacks, unauthorised access, and data manipulation. The audit framework recommends evaluating encryption and access controls, and conducting penetration testing tailored to AI environments. Continuous monitoring and active threat modelling are necessary to understand how systems resist or recover from attacks.
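As one narrow illustration of adversarial testing, the sketch below applies an FGSM-style perturbation to a linear classifier and compares clean versus perturbed accuracy; the perturbation budget is hypothetical, and real AI penetration testing covers far more than this single probe.

```python
# FGSM-style robustness probe for a linear classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, the loss gradient w.r.t. the input follows the
# weight vector, so the worst-case perturbation is eps * sign(w),
# pushed in the direction that increases each sample's loss.
w = model.coef_[0]
eps = 0.3  # hypothetical perturbation budget
direction = np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
X_adv = X + eps * direction

print(f"accuracy on clean inputs:     {model.score(X, y):.3f}")
print(f"accuracy under perturbation:  {model.score(X_adv, y):.3f}")
```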
6. Access and Change Controls Attribute
Effective governance over model and data updates protects against unintended consequences. Auditors should examine access logs, change management procedures, and role‑based policies to ensure only authorised modifications occur. Tight change controls reduce risks that stem from unmanaged updates or misconfigurations.
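A toy sketch of a typical audit test in this area, reconciling a model change log against approved change tickets; the log entries and ticket identifiers below are invented for illustration.

```python
# Reconcile model change events against approved change tickets.
# All identifiers and events below are hypothetical.
APPROVED_TICKETS = {"CHG-1001", "CHG-1002"}  # from the change-management system

change_log = [
    {"event": "model_redeploy", "user": "mlops_svc", "ticket": "CHG-1001"},
    {"event": "retraining_run", "user": "dsci_user", "ticket": "CHG-1002"},
    {"event": "threshold_update", "user": "dsci_user", "ticket": None},
]

# Any change without an approved ticket is an audit exception.
for entry in change_log:
    if entry["ticket"] not in APPROVED_TICKETS:
        print(f"EXCEPTION: {entry['event']} by {entry['user']} "
              f"lacks an approved change ticket")
```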
7. Performance Metrics Attribute
Defined performance benchmarks help auditors evaluate whether an AI model meets reliability and accuracy expectations. Independent testing and deviation analysis ensure that models remain aligned with objectives throughout their lifecycle.
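A minimal sketch of deviation analysis against documented benchmarks; the metric names, benchmark values, and tolerance are hypothetical.

```python
# Deviation analysis: compare current metrics with approved benchmarks.
BENCHMARKS = {"accuracy": 0.92, "precision": 0.88, "recall": 0.85}
TOLERANCE = 0.03  # maximum acceptable shortfall against each benchmark

current = {"accuracy": 0.90, "precision": 0.84, "recall": 0.86}  # latest run

for metric, target in BENCHMARKS.items():
    shortfall = target - current[metric]
    status = "OK" if shortfall <= TOLERANCE else "BREACH"
    print(f"{metric}: benchmark={target:.2f} current={current[metric]:.2f} "
          f"shortfall={shortfall:+.2f} -> {status}")
```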
Key Takeaway
Collectively, these attributes form a holistic audit lens through which IT audit, risk management, and governance teams can assess AI systems, not just for compliance, but for risk‑informed assurance. As the regulatory landscape evolves (e.g., EU AI Act and other standards), such frameworks will be critical to demonstrating due diligence in governance and control testing.
AI will continue to accelerate organisational transformation, but without a structured audit approach, risks can outpace governance. Applying ISACA's framework enhances the robustness of IT general controls (ITGC), improves confidence in AI‑driven processes, and supports audit functions in providing independent, value‑added assurance. For IT audit leaders and technology risk professionals, now is the moment to embed AI‑risk controls into audit programs, align them with emerging standards, and help their organisations navigate the future with trust and resilience.