From Tacit Knowledge to Governed Delivery

Education Technology Provider

Industry

Education

Service

Product Strategic Innovation

Summary

  • Takeaway

    Faster Delivery

    Teams ship AI features with higher velocity because success is clearly defined from day one, improving speed-to-value.

  • Takeaway

    Reduced Risk

    Standardized patterns for prompt design and bias checks replace siloed, person-dependent knowledge and prevent late-stage rework, reducing both delivery risk and total cost.

  • Takeaway

    Audit Readiness

    A centralized AI inventory gives leadership portfolio-level visibility and supports close alignment with the EU AI Act and NIST frameworks, reducing risk.


How one EdTech platform scaled responsible AI across enterprise product teams.

 

Delivery teams that are geeked about governance? Who knew?

It's possible when governance helps them ship faster, when it shifts from being a checkpoint to a capability. Teams ship AI features faster because they know what good looks like: not despite governance, but because of it.

Business Objective 

Turning AI principles into product-ready governance 

A leading digital learning platform was rapidly expanding its use of AI to improve learning outcomes and learner experiences. AI-powered features were emerging across multiple product lines, but teams lacked a consistent, scalable way to build responsibly and efficiently in line with emerging governance expectations. 

The platform provider partnered with 8th Light to design and implement a governance model that could keep pace with rapid innovation while supporting teams across product, engineering, data, and operations. Our goal was to turn abstract AI principles into a day-to-day delivery infrastructure that product teams could actually use.

 

The Challenge 

A widening translation gap between principles and Tuesday-afternoon decisions 

AI innovation was accelerating across the platform, but teams did not share a common definition of what "good" looked like for this organization. They had high-level frameworks and principles to reference—such as NIST, the EU AI Act, and internal AI policies—but lacked a translation layer for day-to-day work. 

Product and engineering teams were: 

  • Reinventing processes and checklists for each AI feature.
  • Working from different sources of information about risks and expectations.
  • Relying on informal, person-dependent knowledge about what "responsible AI" meant when writing prompts, choosing data, or integrating models. 

 

This gap between organizational intent and team-level action had become the real constraint on safe, scalable AI. 

 

Leaders wanted to: 

  • Provide clear expectations for teams building AI-enabled features.
  • Improve consistency and reduce risk across a growing set of AI use cases.
  • Establish a foundation for audits, compliance efforts, and future automation.
  • Close the gap between high-level frameworks and day-to-day product delivery, without slowing the roadmap.

Given the risk involved with AI right now, we can't afford not to be a bit more prescriptive and a bit more formal in how we're approaching this.

Member of the Executive Leadership Team

Our Approach 

Embedding AI governance into product workflows 

8th Light partnered with stakeholders across product, engineering, data, security, and operations to understand current practices and identify the friction slowing responsible AI adoption. Together, we designed the Embedded AI Governance & Risk Management (EAGRM) Framework.

This custom AI governance framework was delivered through a structured four-phase program over roughly eight weeks. 

 

Phase 1 — Discovery and Strategic Alignment

We synthesized the current state using existing artifacts (such as SOC 2 materials) and stakeholder interviews, producing a current-state assessment and stakeholder map to clarify objectives, constraints, and decision-makers. 

Phase 2 — Governance Framework Design 

We defined an AI risk tiering model, lifecycle checkpoints, and clear decision rights across product, engineering, security, legal, and data. This phase delivered a governance framework, review workflow, and RACI matrix that teams can follow for new and existing AI use cases.  

Phase 3 — Compliance, Privacy, and Risk 

We tailored a compliance gap analysis to align practices with AI Act and NIST expectations, built an AI risk register, and refined incident planning for AI-related issues. Key deliverables included a compliance gap report, an AI risk register, and a privacy and sustainability checklist.  

Phase 4 — Reusable Toolkits and Explainability 

We created audit tools and explainability guidelines based on existing QA and bias controls, including a bias and fairness audit template, explainability toolkit, and transparency metrics for learning experiences.

 

The EAGRM Framework 

At the core of this program is EAGRM, which operationalizes existing governance frameworks by translating principles into team-level practices that fit into normal product workflows. 

EAGRM provides four practical components: 

  • Risk Areas & Roles: Clear definitions of AI risk domains and ownership, giving product, engineering, and risk teams a shared understanding of responsibilities across the AI lifecycle.
  • Business Context Mapping: A shared view of where and how AI is used in the learning journey, including user groups, data sensitivity, and impact on learner outcomes, which drives proportional governance.
  • Reusable Guidelines: Actionable, team-friendly guidance for prompts, data handling, model integration, testing, and monitoring, turning abstract principles into concrete patterns for PMs and engineers.
  • AI Inventory: A lightweight, centralized record of each AI use case, including metadata, risk classification, and governance activities, giving leaders portfolio-level visibility. 
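To make the inventory idea concrete, a record might look like the following minimal sketch. This is purely illustrative: the field names, tiers, and types are assumptions, not the client's actual EAGRM schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AI inventory record; the real EAGRM
# schema is not published, so every field name here is an assumption.
@dataclass
class AIUseCase:
    name: str                 # e.g. "adaptive quiz hints"
    owner: str                # accountable product or engineering owner
    risk_tier: str            # e.g. "low" | "medium" | "high"
    data_sensitivity: str     # e.g. "learner PII" or "aggregate only"
    governance_activities: list[str] = field(default_factory=list)

def portfolio_by_tier(inventory: list[AIUseCase]) -> dict[str, int]:
    """Portfolio-level visibility: count use cases per risk tier."""
    counts: dict[str, int] = {}
    for case in inventory:
        counts[case.risk_tier] = counts.get(case.risk_tier, 0) + 1
    return counts
```

Even a record this small gives leaders the portfolio-level view described above: which use cases exist, who owns them, and how risk is distributed.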

 

Making it real for teams 

To ensure EAGRM was more than just slide decks, 8th Light delivered a unified system that brings best practices, guidelines, and documentation together in a single, easy-to-use experience. Instead of hunting across wikis and spreadsheets, teams can: 

  • Discover relevant guidelines and patterns for their specific AI use case.
  • Record key decisions, risks, and controls in the AI inventory.
  • See which reviews and checks are required based on risk tier and business context. 


Teams now use this system as their starting point when scoping and shipping new AI features, rather than inventing their own approach from scratch.
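The "required reviews based on risk tier and business context" behavior can be sketched as a simple lookup. The tier names and checks below are assumptions for illustration, not the organization's actual checkpoint list.

```python
# Hypothetical mapping from risk tier to required reviews; the
# actual EAGRM tiers and checkpoints are assumptions here.
REQUIRED_REVIEWS = {
    "low": ["self-assessment"],
    "medium": ["self-assessment", "privacy review"],
    "high": ["self-assessment", "privacy review",
             "bias & fairness audit", "security review"],
}

def reviews_for(risk_tier: str, handles_learner_pii: bool) -> list[str]:
    """Return the checks a team must complete before launch.

    Unknown tiers fall back to the most conservative (high) set.
    """
    checks = list(REQUIRED_REVIEWS.get(risk_tier, REQUIRED_REVIEWS["high"]))
    # Business context can add checks on top of the tier baseline.
    if handles_learner_pii and "privacy review" not in checks:
        checks.append("privacy review")
    return checks
```

The design point is proportionality: low-risk features move through a light baseline, while sensitive data or higher tiers automatically pull in heavier reviews.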

 

Impact 

Product teams 

Teams now deliver AI features with clarity, consistency, and confidence, backed by clear expectations and reusable guidelines. The framework helps product managers who have never built an AI feature understand how to define, scope, and launch responsibly. This brings compliance and risk thinking into the early stages of implementation, avoiding late-stage surprises and rework. 

"You're formalizing the things that we should be doing when building AI features… Having it written out and formalized so that if a product manager who's never made an AI feature is assigned this thing, it helps give them a framework."  - Product Lead


Engineering and data teams 

Teams build AI features with consistent governance baked into the delivery lifecycle, rather than ad hoc checks and hidden knowledge. They apply standard patterns for data handling, prompt design, evaluation, bias checks, and monitoring, thereby reducing ambiguity and duplication of effort. Teams leverage the AI inventory and risk register to understand dependencies, required controls, and audit evidence. 

"I like the approach because it is helpful for folks who are starting down the implementation path to think about how we should gate its functioning to be in compliance."  - Principal Data Engineer


Leadership and governance functions 

Leaders reduce risk by standardizing AI governance across product lines and business units. They gain visibility into AI governance activities and readiness through the AI inventory, risk register, and review workflows. The organization is now prepared for future compliance and audit requirements with artifacts and processes aligned to frameworks such as NIST and the EU AI Act. The governance foundation supports both rapid experimentation and enterprise accountability, keeping AI innovation aligned with the organization's mission and values. 

"It's useful to have it all in one place." - Head of Compliance

What does your AI governance landscape look like?

If you're facing similar challenges, or see an opportunity to set AI standards of your own, let's talk. Guardrails are there to help teams move smarter, not just faster, and putting them in place early sets your team up for success.

Spark a conversation >