Due Diligence for AI Vendors: Lessons from the LAUSD Investigation
third-party-riskai-governancevendor-management

Due Diligence for AI Vendors: Lessons from the LAUSD Investigation

DDaniel Mercer
2026-04-11
18 min read
Advertisement

A practical AI vendor due diligence checklist inspired by the LAUSD investigation: provenance, conflicts, data handling, and contract safeguards.

Due Diligence for AI Vendors: Lessons from the LAUSD Investigation

The Los Angeles Unified School District superintendent investigation is a cautionary tale for every security, procurement, and governance team buying AI. When a public institution becomes entangled with a defunct AI company and investigators start asking about relationships, influence, and decision-making, the lesson is not merely “watch for headlines.” The real lesson is that AI vendor risk is broader than standard software due diligence: it spans governance, model provenance, data handling, conflicts of interest, contracting, and post-award monitoring. For teams building a defensible governance layer for AI tools, this case is a reminder that procurement security is now a first-line control, not a back-office formality.

In practical terms, AI vendor due diligence must answer five questions before a contract is signed: Who really built the model? What data trained it and what data will it ingest? Who benefits from the transaction and are there undisclosed conflicts? What rights, warranties, and audit remedies are in the agreement? And how will the buyer continuously verify the vendor remains compliant after go-live? If your current process is still based on a generic SaaS questionnaire, you are probably missing the most material AI risks. A stronger program borrows from cloud controls, third-party assessment discipline, and secure rollout patterns like those used in cloud-based medical records and other high-trust environments.

Why the LAUSD Case Matters for AI Procurement

Public scrutiny exposes hidden vendor risk

The immediate lesson from the LAUSD situation is that AI vendors can create governance risk even before a breach occurs. A school district, a superintendent, a board, and federal investigators form a reminder that influence and accountability matter as much as code quality. In enterprise settings, the same dynamic appears when an executive champion pushes a tool without cross-functional review, or when a vendor’s relationship network is not disclosed during procurement. That is why vendor risk teams must look beyond security controls and examine decision pathways, influence, and procurement integrity.

When AI is involved, reputational damage often arrives before technical damage. A vendor may be technically competent yet still create unacceptable risk if its ownership structure is opaque, its claims are unverifiable, or its founders have undisclosed ties to decision-makers. In that sense, AI due diligence resembles the logic behind partnering with legal experts for coverage accuracy: if you cannot verify the source and incentives, you cannot trust the output. The same is true for suppliers of AI that will touch regulated data, student records, financial records, or internal code.

Why generic vendor questionnaires fail

Traditional vendor assessments usually focus on SOC 2 status, encryption, incident response, and basic privacy commitments. Those controls still matter, but AI introduces a second layer of uncertainty: model behavior, training lineage, prompt handling, output use rights, and downstream bias or hallucination risk. A vendor can pass a standard security review and still fail a meaningful AI risk review because the model was trained on unlicensed data, the service logs prompts for training by default, or the company cannot explain how the model was sourced. For that reason, teams should treat AI procurement like a hybrid of cloud security review and software supply chain assessment.

The right mindset is closer to how analysts read signals in volatile systems. Just as the article on canary indicators in market flows emphasizes early warning signals, AI vendor diligence should seek early indicators of misalignment: evasive answers, missing provenance documents, weak subcontractor transparency, and inconsistent contract terms. These are not minor procurement annoyances; they are precursors to operational, legal, and reputational failure.

The AI Vendor Due Diligence Framework

1. Verify company identity, ownership, and control

Start with the basics, but go deeper than a business license check. Confirm the legal entity signing the contract, the ultimate beneficial owners, the board composition, and any parent, subsidiary, or special-purpose entities involved in delivery. Ask whether any founders, investors, advisers, or resellers have relationships with your business sponsors, procurement staff, or executives. Conflict discovery is essential because AI deals often involve unusually close collaboration between vendor and buyer during pilots, making informal influence easy to miss.

For security teams, this is comparable to establishing a high-trust service bay before work begins, not after problems appear. In other words, create a controlled intake path like the one described in building a high-trust service bay: every entrant is identified, every tool is logged, and every action is attributable. If the vendor cannot provide a clear ownership chart, disclose material litigation, or identify all affiliates handling your data, that is a procurement stop sign.

2. Demand model provenance and source transparency

Model provenance means you know where the model came from, who trained it, what datasets were used, what licenses apply, and which components are open source, proprietary, or third-party. Ask whether the vendor trained its own model, fine-tuned an open model, wrapped a hosted model from a hyperscaler, or orchestrates multiple models through an API. Each path has different IP, safety, and dependency risks. You should also ask for documentation on foundation model versioning, training dates, evaluation benchmarks, and change logs for all major releases.

This is especially critical because outputs from AI vendors can change silently. A vendor might swap underlying models, update system prompts, or change moderation policies without notifying customers. That is why model provenance should be written into the contract as a notification obligation, not merely a sales promise. Teams that care about content integrity can learn from AI moderation without false positives: controls only work when you know what is being moderated, what changed, and what the acceptable error rate is.

3. Map data flow and handling end to end

AI vendor risk is frequently a data governance problem in disguise. Your due diligence should trace every class of data entering the system: user prompts, file uploads, metadata, logs, telemetry, support transcripts, embeddings, and feedback labels. Determine whether the vendor uses customer data to train its models, whether opt-out is possible, how long data is retained, where data is stored, and whether subcontractors or hosting providers have access. If the vendor cannot produce a data flow diagram, that is a major red flag.

A strong assessment should also validate data classification boundaries. Can the tool ingest personal data, sensitive data, source code, regulated data, or confidential business material? Is redaction applied automatically, and if so, can you test it? Do logs store raw prompts, or only tokenized metadata? This level of precision matters because AI systems can unintentionally retain or expose far more than teams realize. A good analog is the discipline behind data standards in forecasting: the quality and structure of inputs govern the quality and safety of outputs.

What Security Teams Must Ask Before Signing

IP, licensing, and output ownership questions

AI vendors often make broad claims about ownership while leaving important exceptions buried in terms of service. Buyers must ask who owns the inputs, who owns the outputs, what rights the vendor claims over feedback, and whether customer data is used to improve the product or train shared models. If the vendor relies on third-party models or datasets, request proof of license compatibility and indemnity coverage for IP infringement claims. You should also ask whether generated content can be used internally without downstream ownership disputes or attribution burdens.

These questions are not theoretical. Companies adopting AI for marketing, education, or operations often discover too late that model outputs are non-exclusive, derivative, or governed by usage restrictions. A more rigorous approach resembles how publishers evaluate disclosure and monetization in ethical AI advice packaging: what looks like a simple product may hide complicated rights, dependencies, and incentive structures. Legal, privacy, and security teams need a common checklist before the pilot starts.

Security controls, testing, and assurance artifacts

Do not accept generic assurances that the AI system is “secure.” Request specific artifacts: a recent SOC 2 or ISO 27001 report, penetration testing summary, vulnerability management policy, SDLC details, key management architecture, incident response procedures, and subprocessor list. For AI-specific assurance, ask for red-teaming results, prompt-injection testing, jailbreak resilience testing, and model abuse monitoring. If the vendor supports customer-managed keys, data residency selection, or tenant-level isolation, verify those controls are actually implemented in your tenant, not just marketed in a brochure.

Where possible, ask for reproducible test results, not just marketing claims. Security teams should treat the vendor as they would a critical cloud dependency, with documented control validation and periodic reassessment. In environments where false positives and reputation damage are costly, lessons from digital reputation management are useful: detection without context creates chaos, and AI controls without evidence create false confidence. Your goal is to confirm the control works under realistic adversarial conditions.

Conflict-of-interest discovery and procurement integrity

One of the most important takeaways from the LAUSD story is that conflict checks are not optional in high-stakes AI deals. Ask every decision-maker, evaluator, sponsor, and signer to disclose financial ties, advisory roles, board seats, referral arrangements, family relationships, prior employment, or side agreements with the vendor. Review meeting notes, email introductions, and pilot approvals for signs of undisclosed advocacy. If a vendor is being championed by a single executive without a broader cross-functional review, halt the process until governance is restored.

This is particularly important because AI products are often sold through pilot programs that feel informal. A small pilot can quietly become enterprise scope without standard procurement controls, just as fast-moving campaigns can scale faster than oversight in launch strategy playbooks. Require a documented approval chain, competitive evaluation, and conflict attestation before moving from pilot to production.

Contractual Safeguards That Belong in Every AI Vendor Agreement

Non-negotiable clauses security teams should demand

The contract is where due diligence becomes enforceable. At minimum, include data processing terms, breach notification timelines, subprocessor approval or notice rights, audit rights, incident cooperation obligations, service-level commitments, exportability and deletion commitments, and a warranty that the vendor will not train shared models on your data without explicit opt-in. Also require a clause obligating the vendor to notify you before any material change to model architecture, hosting region, ownership, or subcontractor chain.

Where the vendor offers generative AI, add output-use limitations and indemnity provisions specific to intellectual property claims, privacy violations, and harmful content. If the vendor refuses audit rights, offers only vague “reasonable security” language, or excludes consequential support during incidents, treat that as a risk signal rather than a negotiation detail. The best contract language is operationally testable, similar to how cloud downtime lessons translate outages into concrete resilience requirements. A vendor agreement should give you a right to verify, not just a promise to trust.

Termination, deletion, and portability terms

AI tools can create sticky data dependencies: embeddings, evaluation logs, fine-tuning artifacts, and conversation histories may be hard to export or fully delete. Your contract should specify what gets returned at termination, in what format, within what time frame, and with what deletion attestation. Require that the vendor delete prompts, outputs, embeddings, backups, and derived datasets within a defined retention window unless retention is legally required. If the vendor uses customer feedback to improve shared models, require explicit opt-in and a documented mechanism to withdraw consent.

These terms protect you from vendor lock-in and post-termination data exposure. They also make vendor transitions more manageable, which matters when tools are embedded in workflows across engineering, procurement, and support. Teams accustomed to building migrations can borrow from cloud migration blueprints: exit planning is not an afterthought; it is part of the architecture.

Operationalizing Third-Party Assessment for AI

Build an AI-specific assessment questionnaire

Your third-party assessment should include questions that standard TPRM forms never ask. Examples include: What models are used, and are they proprietary or open source? What was the training data provenance? Are customer prompts retained, logged, or used for retraining? What human review exists for unsafe outputs? What country and cloud region host the data? What subprocessors can access it? What controls prevent prompt injection, data exfiltration, and unauthorized tool execution?

Make the questionnaire evidence-based. For each answer, require artifacts such as architecture diagrams, policy excerpts, test reports, or contract exhibits. If the vendor cannot produce evidence, score the control as absent, not “planned.” This approach mirrors disciplined category management in other domains: when evaluating multiple options, you need a structured comparison, much like choosing between tools or services in a complex deal-day decision.

Create a risk tiering model for AI suppliers

Not every AI vendor deserves the same level of scrutiny, but every vendor needs a minimum bar. Tier vendors based on data sensitivity, autonomy, regulatory exposure, integration depth, and model criticality. A low-risk content assistant may only require basic privacy and security review, while an AI system that drafts customer communications, analyzes regulated records, or triggers automated actions should trigger full legal, security, privacy, and business continuity review. The more the tool can act on its own, the more it must be treated like a privileged system.

For teams already managing a large vendor ecosystem, this tiering model can reduce overload. It helps avoid drowning in paperwork while preserving rigor where it matters most. Similar principles appear in AI agents playbooks: automation is valuable, but only when bounded by the right controls. Risk tiering gives procurement a scalable way to focus on the suppliers that matter most.

Monitor continuously, not just at onboarding

Vendor due diligence is a living process because AI vendors change faster than traditional software suppliers. New foundation models, new hosting arrangements, new subprocessors, and new data policies can all alter your risk profile after signature. Establish quarterly or semiannual reviews for high-risk AI vendors, and monitor external signals such as breach disclosures, legal filings, model updates, and ownership changes. If the vendor publishes changelogs, subscribe and map them to your internal control owners.

It is also wise to track technical performance drift. If output quality changes, hallucination frequency increases, or latency patterns suggest a platform migration, the vendor may have altered something material. Think of this as the AI equivalent of watching for outages or service instability in high-dependency cloud environments: the absence of a breach does not mean the system is stable. Continuous validation is the only safe posture.

Comparison Table: Standard SaaS Review vs. AI Vendor Due Diligence

Assessment AreaStandard SaaSAI VendorWhat Security Teams Should Demand
Ownership and controlLegal entity and basic financial checkLegal entity, beneficial owners, founder ties, advisers, resellersConflict disclosure and beneficial ownership review
Data handlingPrivacy policy, encryption, retention statementPrompt retention, training use, embeddings, logs, subprocessorsData flow map, retention limits, opt-out or opt-in for training
Product behaviorFeature list and uptime SLAModel version, evaluation data, drift, hallucination riskProvenance docs, change notices, testing evidence
IP riskBasic indemnityTraining data licenses, output rights, third-party model dependenciesAI-specific indemnity and ownership warranties
AssuranceSOC 2 or ISO 27001SOC 2 plus red teaming, jailbreak testing, misuse monitoringAI-specific test artifacts and periodic reassessment

A Practical AI Vendor Due Diligence Checklist

Governance and procurement checks

Before approval, verify that the buying process includes procurement, security, legal, privacy, finance, and business owners. Require a written business justification, a documented risk tier, and a conflict-of-interest attestation from all approvers. Confirm competitive evaluation or sole-source justification exists, and that the pilot cannot expand without re-approval. If the vendor relationship was introduced by an executive, insist on independent review and record the disclosure.

Technical and data checks

Validate architecture diagrams, model sources, hosting regions, subprocessor lists, logging behavior, and data retention settings. Test the product with synthetic data to observe whether it retains prompts, exposes sensitive information, or behaves inconsistently when asked about boundaries. Ask for red-team results, model update controls, incident response playbooks, and role-based access controls. If the vendor cannot explain its data lifecycle in plain language, it probably cannot secure it in practice.

Include clear warranties about non-use of customer data for training without consent, ownership of outputs, IP non-infringement, deletion on termination, breach notification, and audit rights. Confirm the vendor will notify you of any model change that materially affects performance, safety, or data handling. Add strong subcontractor notification provisions and a right to object to new subprocessors where risk is material. For high-risk deployments, require contractual commitments for support during investigations and evidence preservation.

Pro Tip: Treat every AI vendor as if it could become a critical dependency overnight. If a product is successful, adoption will expand faster than your original risk review. Build the controls now, because retrofitting them after operational dependence is expensive and politically difficult.

How Security Teams Can Make the Process Scalable

Standardize evidence packs

To keep AI due diligence from becoming a bottleneck, create a standard evidence pack vendors must submit. Include a security overview, architecture diagram, model provenance summary, data flow map, subprocessors, incident response contacts, and contract redlines. Standardization reduces reviewer fatigue and lets security teams compare vendors consistently. It also sends a signal that your organization takes AI procurement security seriously.

Use risk-based exceptions, not ad hoc approvals

Some vendors will fail a control but still be acceptable if compensating controls exist. When that happens, use a documented exception process with an expiration date, compensating measures, and named owner. Do not allow informal “business says it’s fine” exceptions to bypass review. This discipline helps prevent the kind of drift that occurs when teams move quickly without governance, a pattern familiar to anyone who has watched fast-growing programs outrun their controls in growth-stage launches.

Align procurement with ongoing monitoring

The best AI vendor programs connect procurement with security monitoring and renewal decisions. Feed contract dates, control attestations, incident alerts, and model-change notices into a single vendor risk register. Require periodic recertification for high-risk suppliers and an annual reevaluation of business necessity. When the vendor is no longer needed, terminate cleanly and verify deletion. That is how you turn one-time diligence into a defensible governance process.

Conclusion: What the LAUSD Lesson Means for Every AI Buyer

The LAUSD investigation illustrates a hard truth: AI vendor risk is not just about technology performance, and it is not solved by a security questionnaire alone. It is about governance integrity, conflict transparency, data stewardship, and contractual leverage. If a public school district can face scrutiny over ties to a defunct AI company, then any enterprise buying AI must assume its own vendors will eventually be examined with the same intensity. The smartest teams will not wait for that moment; they will build due diligence that stands up to it.

For organizations that want a mature program, the path forward is clear. Start with a governance layer, require provenance and data transparency, investigate conflicts, demand enforceable contract terms, and keep monitoring after launch. Pair that with the operational rigor used in high-audit environments and the change discipline found in cloud resilience. That combination turns AI procurement from a leap of faith into a controlled, repeatable risk management process.

FAQ: AI Vendor Due Diligence After the LAUSD Lesson

What is the most important thing to check first in AI vendor due diligence?

Start with ownership, governance, and conflict-of-interest checks. If you cannot verify who controls the vendor, who benefits from the deal, and whether any internal approver has undisclosed ties, the rest of the review may be compromised. Then move into model provenance and data handling.

How is AI vendor risk different from ordinary SaaS vendor risk?

AI vendor risk includes model behavior, training data provenance, prompt retention, output ownership, and silent model changes. Traditional SaaS reviews usually focus on security and uptime, but AI adds IP ambiguity, data reuse concerns, and unpredictable system behavior. That makes evidence and contractual precision much more important.

Should a vendor be allowed to train on customer data by default?

No. The safest default is no training on customer data unless the buyer explicitly opts in. If the vendor wants to use prompts, documents, or feedback to improve shared models, that should be a separate negotiated term with clear retention limits and revocation rights.

What contract clauses are non-negotiable for high-risk AI tools?

You should demand data processing terms, breach notification timelines, deletion obligations, audit rights, subcontractor visibility, model change notices, ownership and IP warranties, and an explicit statement about whether customer data is used for training. For high-risk use cases, also require incident cooperation and evidence preservation.

How often should AI vendors be reassessed after onboarding?

High-risk vendors should be reassessed at least quarterly or semiannually, and immediately if there is a model change, ownership change, breach, regulatory event, or unusual behavior shift. AI vendors evolve quickly, so onboarding review is only the starting point.

What if the vendor refuses to provide model provenance?

That should be treated as a significant risk issue. If the vendor cannot explain what model it uses, where it came from, what data trained it, and what changes have been made, you may be unable to assess legal, privacy, or security exposure. In many cases, refusal to disclose provenance is reason enough to halt procurement.

Advertisement

Related Topics

#third-party-risk#ai-governance#vendor-management
D

Daniel Mercer

Senior Cybersecurity Editor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.

Advertisement
2026-04-16T17:05:44.880Z