We help organizations implement responsible AI practices through governance frameworks, independent auditing, and committee advisory. Our work is grounded in hands-on experience overseeing the development of the first 'Fairly Trained' certified LLM.
Our Certified AI Auditor credential — earned in the first cohort in 2022, before ChatGPT launched — combined with deep technical expertise in building AI systems enables us to evaluate AI risk from both governance and engineering perspectives.
Credential-Backed Expertise
Our AI governance practice draws on certifications in AI auditing, accounting, and privacy, including one of the first Certified AI Auditor credentials awarded. Paired with a substantial research record in AI systems and computational law, these credentials let us evaluate AI risk from both governance and engineering perspectives rather than simply checking compliance boxes.
What We Offer
AI Governance Framework Development
Design and implement governance structures for responsible AI adoption.
Independent AI Auditing
Third-party evaluation of AI systems for bias, risk, and compliance.
AI Ethics Committee Advisory
Guidance on forming and operating effective AI oversight committees.
Responsible AI Policy Development
Organizational policies for AI development, procurement, and deployment.
AI Risk Assessment
Systematic evaluation of AI-related risks across the organization.
Fairly Trained Certification Support
Guidance on copyright-clean training data practices and certification.
Governance in Practice
Our governance work is informed by hands-on experience building and overseeing AI systems, not just advising on them. Examples from our team's published work and operational leadership:
AI Lifecycle & Board Oversight
Published guidance on the three-stage AI lifecycle (data collection, training, deployment) and the board's governance role at each stage — from data provenance assessment to post-deployment monitoring.
EU AI Code of Practice Analysis
Board-level analysis of the EU AI Office's General-Purpose AI Code of Practice, covering systemic risk oversight, documentation requirements, incident response, and compliance resource planning.
AI Risk Management for Boards
Practical frameworks for board-level AI risk management: establishing risk appetite, assessment methodologies, treatment strategies, and continuous monitoring and reporting structures.
Data Provenance at Scale
Direct experience implementing responsible data practices at scale — 132M+ documents with verified copyright status, demonstrating how governance can be built into AI data pipelines from the ground up.
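As a purely illustrative sketch — not our production tooling, and with hypothetical field names — building governance into a data pipeline "from the ground up" can mean a provenance gate that admits a document to the training corpus only when its copyright status has been verified:

```python
# Hypothetical sketch: admit documents to a training corpus only when
# their provenance record shows a verified, acceptable copyright status.
from dataclasses import dataclass

# Licenses assumed acceptable for this illustration only.
ALLOWED_LICENSES = {"public-domain", "cc0", "cc-by", "licensed-in"}

@dataclass
class Document:
    doc_id: str
    source: str      # where the document was obtained
    license: str     # declared license
    verified: bool   # has the copyright status been independently checked?

def provenance_gate(docs):
    """Yield only documents with a verified, allowed copyright status,
    so unvetted data never enters the training pipeline."""
    for doc in docs:
        if doc.verified and doc.license in ALLOWED_LICENSES:
            yield doc

corpus = [
    Document("a1", "gov-archive", "public-domain", verified=True),
    Document("b2", "web-crawl", "unknown", verified=False),
    Document("c3", "partner-feed", "licensed-in", verified=True),
]
clean = list(provenance_gate(corpus))
print([d.doc_id for d in clean])  # → ['a1', 'c3']
```

The design point is that the check runs at ingestion, before training, so provenance is enforced structurally rather than audited after the fact.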
Frequently Asked Questions
- What is AI auditing and why does it matter?
- AI auditing is the systematic evaluation of AI systems for bias, risk, fairness, and compliance. As organizations deploy AI in high-stakes decisions — hiring, lending, healthcare, legal — independent auditing provides assurance that these systems operate as intended and meet regulatory requirements. Our Certified AI Auditor credential, earned in the first cohort before ChatGPT launched, demonstrates deep expertise in this emerging discipline.
- What AI governance frameworks do you work with?
- We work with a range of frameworks including NIST AI Risk Management Framework, ISO/IEC 42001, the EU AI Act requirements, and custom organizational governance structures. Our approach is framework-agnostic — we help organizations select and implement the frameworks most appropriate for their regulatory environment and risk profile.
- What does 'Fairly Trained' certification mean?
- Fairly Trained is a certification for AI models trained on data that respects copyright and intellectual property rights. Our team oversaw the development of the first LLM to receive this certification, using 132M+ copyright-clean documents. We can help organizations understand and implement similar responsible data practices.
- How do AI governance engagements typically begin?
- We start with a complimentary consultation to understand your organization's AI usage, governance maturity, and specific concerns. From there, we typically recommend either a focused AI risk assessment or a broader governance framework development engagement, depending on your needs.
Related Insights
An analysis of the EU AI Office's draft General-Purpose AI Code of Practice, highlighting systemic risk oversight, documentation requirements, incident response, and board-level compliance considerations.
AI Lifecycle and the Board's Role
A primer for board directors on the AI lifecycle — data collection, training, and deployment — and the strategic considerations boards must understand for effective AI oversight.
Risk Management for AI: A Board Director's Guide
A comprehensive guide for board directors on leading AI risk management through six key elements: establishing context, risk assessment, risk treatment, recording, communication, and continuous monitoring.
AI Oversight: 5 Key Sources of Board Requirements
A framework identifying the five key sources of AI governance requirements for boards — legal mandates, risk frameworks, insurance, internal policies, and customer preferences.
How Data Provenance Drives Machine Learning Risk and Value
An exploration of data provenance — the origin and history of data — and why it is a board-level concern for AI risk management, legal compliance, and responsible governance.
Pioneering Responsible Data Science: A Framework for Ethical Innovation
Introducing an open-source Responsible Data Science Policy Framework designed to help organizations address ethical AI governance through a modular, adaptable approach.