Governance

Writing a Company AI Policy: What Middle Market Businesses Need to Cover

Every business using AI tools needs a written policy. Most do not have one, and the legal and operational risks of that gap are real.

Best for: Teams starting with AI · Operators & finance leads · IT & compliance teams
Use this perspective to choose the right AI lane before jumping into a deeper implementation conversation.

Key takeaways

  • A practical AI policy covers five things: data classification, approved tools list, review requirements, confidentiality rules, and a prohibition list.
  • Keep it to one page: a policy nobody reads provides zero protection and zero guidance.
  • The legal risk of no policy is concrete: data breach liability, NDA violations, and IP ownership disputes all become harder to manage without written governance.

In this article

  1. Why a written AI policy is not optional
  2. Section 1: Data classification (what can and cannot go into AI tools)
  3. Section 2: Approved tools list
  4. Section 3: Review requirements for external-facing content
  5. Section 4: Confidentiality requirements
  6. Section 5: Prohibition list
  7. How to keep the policy to one page
  8. Common mistakes in AI policy writing
  9. Policy structure: the 5 sections every AI policy needs
  10. Risk tiering by use case: a reference table for employees
  11. Employee acknowledgment and rollout process

Why a written AI policy is not optional

Every employee using an AI tool is making a data-sharing decision. Without a written policy, those decisions are made individually, inconsistently, and without awareness of the legal and contractual implications. One employee inputs a client's financial projections into a free ChatGPT account. Another uses Otter.ai to transcribe a board meeting. A third drafts a legal response using Claude with confidential deal terms in the prompt. Each of these actions may violate an NDA, a data processing agreement, or an IP ownership clause, without anyone intending to.

The written policy does not prevent all risk. It creates a clear standard of care that protects the company and gives employees an unambiguous reference point. Without it, you cannot enforce a violation, you cannot train against it, and you cannot demonstrate to a buyer or regulator that you exercised reasonable governance.

Research finding
IBM Cost of a Data Breach Report

Companies with formal AI and data governance policies experience 35% lower average breach costs than companies without, primarily because documented policies establish incident response procedures and limit liability exposure.

The legal risk of no AI policy is not hypothetical. If an employee inputs confidential client data into a non-enterprise AI tool that uses inputs for model training, and that data is later surfaced to another user, the company faces potential NDA breach, data breach notification obligations, and liability to the client, all of which are harder to defend without a written governance framework that the employee violated.

Section 1: Data classification (what can and cannot go into AI tools)

Data classification is the foundation of the policy. Without it, every other section is unenforceable because employees do not know what "sensitive data" means in the context of their day-to-day work.

AI Tool Data Classification Framework

Data Class | Can Enter AI Tools | Cannot Enter AI Tools | Notes
Public data | Yes, no restriction | n/a | Published reports, public filings, website content
Internal drafts and communications | Yes, with approved tools | Non-approved tools | Must use an approved enterprise-tier tool
Meeting notes and summaries | Yes, with approved tools | Non-approved tools | No client names or deal terms in non-approved tools
Client names and contact information | Only in approved tools with a DPA | Any tool without an enterprise DPA | Even a first name plus company can count as PII in some jurisdictions
Financial data under NDA | Approved tool only, with explicit approval | All other tools | Includes deal projections, LOI terms, cap table data
Customer PII (SSN, payment, health) | No, not without explicit legal review | All AI tools | Treat as off-limits by default
Trade secrets and IP | No | All AI tools | Formulas, proprietary processes, unreleased product roadmaps
Legal documents in active matters | No | All AI tools | Contracts under negotiation, litigation documents, compliance filings


The classification framework should be communicated as a decision tree, not a table. Employees make faster, more consistent decisions when they can answer: is this data public? Is it client-specific? Is it financially sensitive? The policy should make those questions explicit.
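Expressed as code, the decision tree looks like the following. This is a minimal sketch, assuming the questions are asked in order from most to least restrictive so the strictest rule wins; the function name and rule strings are illustrative, not part of any real policy tooling.

```python
# Illustrative sketch of the classification decision tree. Checks run from
# most to least restrictive so the strictest applicable rule wins.

def classify_for_ai_use(is_trade_secret_or_active_legal: bool,
                        is_regulated_pii: bool,
                        is_financial_under_nda: bool,
                        has_client_identifiers: bool,
                        is_public: bool) -> str:
    if is_trade_secret_or_active_legal:
        return "Prohibited: never enters any AI tool"
    if is_regulated_pii:
        return "Prohibited: no AI tools without explicit legal review"
    if is_financial_under_nda:
        return "Approval required: approved tool plus designated approver sign-off"
    if has_client_identifiers:
        return "Restricted: approved enterprise tools with a DPA only"
    if is_public:
        return "Allowed: no restriction"
    return "Allowed: approved enterprise-tier tools only (internal drafts, notes)"

# Example: internal meeting notes that name a client, with no NDA financials
print(classify_for_ai_use(False, False, False, True, False))
# -> Restricted: approved enterprise tools with a DPA only
```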

Research finding
FTC Business Guidance on AI

Regulators treat data governance as a process question, not just an outcome question. A company that had a written data classification policy that an employee violated is in a materially different position from a company with no policy at all.

Section 2: Approved tools list

The approved tools list is the most operationally important section of the policy. It tells employees exactly which tools they can use, for what purposes, and under what conditions.

Every tool on the list should have three things: the tool name and tier (enterprise vs. personal), the approved use cases, and the data class permitted. Naming tools specifically is not optional — "enterprise AI tools with data agreements" is not actionable. "Claude for Business (Anthropic), approved for internal draft writing, variance narrative, meeting summaries; not approved for client PII or financial data under NDA" is actionable.

Specific tools to address by name in a typical middle market policy:

  • ChatGPT (OpenAI): distinguish between ChatGPT Plus (personal, no enterprise DPA) and ChatGPT Enterprise (enterprise DPA, data not used for training)
  • Claude (Anthropic): Claude.ai (personal tier) vs. Claude for Business or API access (enterprise tier)
  • Microsoft Copilot: typically covered under an enterprise Microsoft 365 agreement; highest data protection
  • Grammarly: Grammarly Business has an enterprise DPA; Grammarly Free does not
  • Otter.ai: Otter.ai Business has an enterprise DPA; Otter.ai Free does not
  • Fireflies and Granola: evaluate DPA status before approving for meeting content
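One way to make the approved tools list auditable is to keep it as structured data rather than prose. The sketch below is illustrative only: the entries, field names, and helper function are assumptions, and each tool's actual DPA status must be confirmed before anything like this is adopted.

```python
# Illustrative register shape only. Entries, field names, and DPA flags
# are examples; confirm each tool's actual agreement before relying on this.

APPROVED_TOOLS = [
    {
        "tool": "Claude for Business",
        "tier": "enterprise",
        "has_dpa": True,
        "approved_uses": {"internal drafts", "variance narrative", "meeting summaries"},
    },
    {
        "tool": "ChatGPT Plus",
        "tier": "personal",
        "has_dpa": False,
        "approved_uses": {"public data prompts only"},
    },
]

def is_approved(tool: str, use_case: str) -> bool:
    """Return True only if the tool is on the register for this use case."""
    for entry in APPROVED_TOOLS:
        if entry["tool"] == tool:
            return use_case in entry["approved_uses"]
    return False  # not listed: route through the formal request process

print(is_approved("Claude for Business", "meeting summaries"))  # True
print(is_approved("ChatGPT Plus", "meeting summaries"))         # False
```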

  • $20–$30/user/month: typical cost of ChatGPT Enterprise or Claude for Business enterprise tier
  • $0: cost of many consumer AI tiers that lack enterprise data agreements, and the compliance gap they create
  • 2–5 minutes: time to check whether a tool has an enterprise DPA before approving it for company use


Section 3: Review requirements for external-facing content

All AI-generated content going external must be reviewed by a human before it is sent. This is not optional or discretionary: it is a categorical requirement that applies regardless of who generated the content or how confident they are in the output.

The policy should define "external" clearly: any content going to a client, prospect, vendor, partner, regulator, or legal counterparty. Internal documents (variance commentary, meeting notes, draft memos) do not require the same review standard, though the employee is still responsible for accuracy.

AI-generated external communications that bypass human review create direct legal and commercial risk. A factually incorrect proposal sent to a prospect, a compliance certification drafted by an LLM with a hallucinated regulatory reference, or a contract email with incorrect deal terms: all are more likely with unreviewed AI output than with a manual writing process. The review requirement is not about distrust of AI; it is about maintaining human accountability for external representations.

Step 1: Draft with AI. Use an approved tool for the first draft of any external communication.
Step 2: Review for factual accuracy. The employee checks all specific claims, numbers, and representations.
Step 3: Review for tone and context. Is this appropriate for this specific recipient and relationship?
Step 4: Check for data exposure. Does the draft contain client data or confidential terms that should not be in this communication?
Step 5: Send. The employee takes ownership of the final communication as if they wrote it.
Step 6: Document if flagged. If the AI output had a significant error caught in review, log it for policy refinement (a minimal logging sketch follows below).
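Step 6 implies some minimal logging infrastructure. Here is a hedged sketch using only the Python standard library; the file path and columns are assumptions about what a useful log entry captures, not a prescribed format.

```python
# Minimal Step 6 logging sketch, standard library only. The file path and
# columns are assumptions about what a useful log entry captures.

import csv
from datetime import date

LOG_PATH = "ai_review_log.csv"  # hypothetical location

def log_flagged_output(tool: str, content_type: str, error_summary: str) -> None:
    """Append one row for each significant error caught during human review."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), tool, content_type, error_summary]
        )

log_flagged_output(
    tool="Claude for Business",
    content_type="client proposal",
    error_summary="hallucinated pricing figure caught in the accuracy review",
)
```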

Section 4: Confidentiality requirements

Employees cannot input client data into AI tools without written approval from a designated approver. This section of the policy should name the approver (typically the COO, CFO, or General Counsel) and define the approval process.

The approval process does not need to be complex. A simple email from the employee describing the use case and the data type, with written approval from the designated approver, is sufficient. The goal is to create a deliberate decision point, not a bureaucratic obstacle.

The confidentiality section should also address contractor and vendor use of AI tools. If a bookkeeper, marketing agency, or legal firm uses AI tools to process company data, the same standards apply. Require vendors to disclose AI tool usage in their services agreements, and require that any AI tools they use for company data meet the same enterprise-tier standards.

Research finding
IAPP Privacy Tech Vendor Report

78% of mid-market companies have no formal process for assessing AI tool usage by third-party vendors who process their data. This creates a compliance blind spot that is increasingly scrutinized in M&A diligence and regulatory audits.

Section 5: Prohibition list

The prohibition list is a short, explicit list of things AI tools cannot be used for, regardless of tool, data classification, or approver authorization.

Standard prohibition list for a middle market company:

  • No AI-generated legal advice or legal interpretations without attorney review
  • No AI for compliance certifications (SOC 2 controls, ISO declarations, regulatory certifications)
  • No AI-generated code deployed to production without engineering review and testing
  • No AI for making or documenting HR decisions (hiring, termination, performance reviews) without human decision-making documentation
  • No AI for financial certifications or audit representations

The prohibition list exists because some use cases carry liability that no review process fully mitigates. Legal advice, compliance certifications, and production code all have professional liability, regulatory liability, and safety implications that make AI-only generation unacceptable regardless of how good the output looks.

How to keep the policy to one page

The most common AI policy failure is length. Policies written by legal counsel tend toward exhaustive coverage, every edge case, every liability qualifier, every definition. The result is a document that employees do not read, cannot remember, and therefore do not follow.

A one-page policy forces prioritization. The five sections above (data classification, approved tools, review requirements, confidentiality, prohibitions) cover 90% of the practical risk for a middle market company. The other 10% can go in a separate legal addendum that employees acknowledge but do not need to memorize.

Structure the one-page policy with five numbered sections, each with 2–3 bullet points. Use plain language: "You cannot input client financial data into ChatGPT Plus" is better than "Employees are prohibited from transmitting Category III data assets to non-enterprise-tier generative AI tools absent written authorization from a designated data governance officer."

Write the AI policy at a 7th-grade reading level. Use the same language you would use to explain it to a new employee on their first day. If you need to use a legal term, define it immediately. The standard of care you are establishing is for all employees, not just those with legal or compliance backgrounds.

Common mistakes in AI policy writing

Too long: nobody reads it. A 15-page AI policy provides the appearance of governance without the substance. Keep it to one page with a legal addendum for edge cases.

Too vague — "use responsibly" is not enforceable. A policy that says employees should "exercise good judgment" and "use AI tools responsibly" provides no guidance and no enforcement mechanism. Specificity is the point.

No enforcement mechanism. A policy without consequences is a suggestion. Define the consequence for each type of violation: inadvertent first-time violations (coaching), intentional or repeat violations (escalated HR process), client data exposure (immediate incident response protocol). The enforcement mechanism does not need to be draconian, but it needs to exist.

No annual review cadence. The AI tool landscape changes faster than any other technology category. A policy written in 2024 may be materially outdated by 2026: new tools, new DPAs, new regulatory guidance. Schedule the annual review and assign an owner.

No acknowledgment process. Distribute the policy, get written acknowledgment from every employee (email confirmation is sufficient), and retain the acknowledgments. In the event of a policy violation, the company's position is materially stronger if it can demonstrate the employee was explicitly aware of the policy.

  • 1 page: target length for a practical, readable AI acceptable use policy
  • 12 months: maximum interval between policy reviews in the current AI tool landscape
  • 100%: target employee acknowledgment rate; retain confirmations on file

Policy structure: the 5 sections every AI policy needs

A middle market AI acceptable use policy does not need to cover every edge case. It needs to cover the five areas where the actual risk lives. Five sections, each with 2–3 bullet points, is the right structure.

Section 1: Scope

Define which tools and employees are covered. Name the specific AI tools in scope (ChatGPT, Claude, Copilot, Otter.ai, Grammarly). State that all employees, contractors, and vendors processing company data are subject to the policy.

Section 2: Permitted Uses

Three tiers: unrestricted (internal brainstorming, email drafts, meeting summaries using approved tools with no sensitive data); review-required (customer-facing content, financial analysis, vendor proposals — human review before use); approval-required (legal document generation, pricing changes, external reports with company data — designated approver sign-off required).

Section 3: Prohibited Uses

Personal health data of employees or customers; impersonation of any individual; automated decisions without human review; confidential client data in non-enterprise AI tools; compliance certifications or legal representations generated by AI without attorney review; production code deployment without engineering review.

Section 4: Data Handling Rules

Define what data can and cannot be entered into AI tools by data class. Customer PII, financial projections under NDA, trade secrets, and active legal matter documents require approved enterprise-tier tools with data processing agreements. Public data and internal drafts have no restriction beyond the approved tools list.

Section 5: Accountability and Enforcement

Name the policy owner (typically COO, CFO, or General Counsel). Define how violations are reported (direct to manager, or a designated reporting channel). State the consequences: inadvertent first violation — coaching conversation; intentional or repeat violation — escalated HR process; client data exposure — immediate incident response protocol.

Keep the policy to one page. Five sections, plain language, specific tool names, clear consequences. A policy nobody reads provides zero protection and zero guidance. The goal is a document employees can reference in 60 seconds and understand completely.

Risk tiering by use case: a reference table for employees

Employees make AI use decisions dozens of times per day. The policy needs to give them a fast reference, not a flowchart they will never use. A tiered use case table — bookmark it, print it, post it — is the most operationally useful policy element.

Use Case | Risk Tier | Approval Required? | Notes
Internal brainstorming and ideation | Low | No | No sensitive data; approved tool; output not used externally
Email drafts (internal) | Low | No | No client names, deal terms, or confidential data in prompt
Meeting summaries (internal only) | Low | No | Use enterprise-tier tool only; no external meeting participants
Customer-facing communications | Medium | Human review before sending | Employee reviews all facts, numbers, and representations before delivery
Financial analysis for internal use | Medium | Human review before use | Check calculations independently; do not share AI-generated numbers without verification
Vendor proposals and RFQ responses | Medium | Human review before sending | Accuracy review required; legal terms must be checked by appropriate reviewer
Legal document generation | High | Manager approval required | Attorney review mandatory before any legal document is finalized or executed
HR decisions (hiring, termination, reviews) | High | Manager approval + HR review | Human decision-maker must document the basis for the decision independently of AI output
Pricing changes with external effect | High | Manager approval required | Finance or leadership approval before pricing changes are communicated externally
External reports with company data | High | Manager approval required | Designated approver reviews data classification before distribution
Processing regulated data in unapproved tools | Prohibited | Not permitted | PII, health data, financial data under NDA — never in non-enterprise tools
Generating content impersonating a named individual | Prohibited | Not permitted | Creating content that appears to come from a specific person without their consent
Fully automated customer decisions (no human review) | Prohibited | Not permitted | All customer-facing decisions require a human in the approval loop


Post this table in the employee handbook, the intranet, and wherever the approved tools list lives. Employees who cannot find the policy do not follow it.

Employee acknowledgment and rollout process

A policy that employees have not read is not a policy — it is a document. The rollout process determines whether the policy creates real governance or just the appearance of it.

Step 1: Draft with input from legal, IT, and HR

Legal reviews prohibited uses and data handling rules. IT confirms the approved tools list and enterprise DPA status of each tool. HR confirms the acknowledgment and enforcement process.

Step 2: Conduct a 30-minute training session

Cover the five policy sections with use case examples. Walk through the risk tier table. Take questions. Record the session for employees who cannot attend.

Step 3: Require signed acknowledgment

Electronic acknowledgment is sufficient — email confirmation, DocuSign, or an HRIS attestation. Retain acknowledgments on file. Target 100% completion within 30 days of policy distribution.

Step 4: Post in employee handbook and intranet

The policy must be findable in under 60 seconds. Embed the risk tier table as a separate reference document employees can bookmark.

Step 5: Audit compliance quarterly

If AI tool usage logs are available through enterprise software (Microsoft Copilot, Claude for Business), spot-check for unapproved tool usage. Review any reported violations and close the feedback loop with the reporting employee. A minimal spot-check sketch follows below.
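Where the enterprise tools can export usage data, the quarterly spot-check in Step 5 can be a few lines of script. The sketch below assumes a CSV export with a tool_name column; actual export formats vary by vendor and monitoring setup, so treat this as a shape, not a drop-in tool.

```python
# Sketch of the quarterly spot-check: count usage of tools that are not on
# the approved list. Assumes a CSV export with a "tool_name" column; real
# export formats vary by vendor and monitoring setup.

import csv

APPROVED = {
    "ChatGPT Enterprise", "Claude for Business", "Microsoft Copilot",
    "Grammarly Business", "Otter.ai Business",
}

def unapproved_usage(export_path: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = row["tool_name"]  # assumed column name
            if tool not in APPROVED:
                counts[tool] = counts.get(tool, 0) + 1
    return counts

# Example: flag anything used outside the approved list this quarter
for tool, n in unapproved_usage("q3_ai_usage_export.csv").items():
    print(f"{tool}: {n} uses outside the approved list")
```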

Why documentation matters for M&A: buyers in technology diligence will ask for your AI governance policy. A documented, signed-acknowledgment policy demonstrates governance maturity — specifically, that the company has established standards of care for AI use, trained employees on those standards, and retained evidence of compliance. This is the difference between a company with "AI in use" and a company with "AI governed," and buyers price that distinction in their assessment of operational risk and post-close scalability.

Frequently asked questions

What if an employee uses a non-approved tool and nothing goes wrong?

The outcome does not determine the violation. The policy violation occurred at the point of use, regardless of result. Address it through normal performance management: the first instance is a coaching conversation; repeat instances are escalated. The purpose of the policy is to prevent the instances where something does go wrong, not to punish instances where luck mitigated the risk.

How do we handle tools that are not on the approved list but are not prohibited?

Establish a formal request process: employee submits the tool name, use case, and data type for review. Designated approver reviews DPA, assesses data risk, and responds within 5 business days. Approved tools are added to the list. Denied tools are documented with reason.
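If it helps to make the 5-business-day commitment concrete, the response deadline can be computed directly. A minimal sketch using the standard library; the request fields and tool name are hypothetical, and holidays are ignored.

```python
# Hedged sketch of the new-tool request record with a 5-business-day
# response deadline. Fields and tool name are hypothetical; holidays ignored.

from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` weekdays from `start`, skipping Saturdays and Sundays."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days -= 1
    return current

request = {
    "tool": "ExampleTranscribe",              # hypothetical tool
    "use_case": "sales call summaries",
    "data_type": "client names, no financials",
    "submitted": date(2025, 3, 3),            # a Monday
}
request["respond_by"] = add_business_days(request["submitted"], 5)
print(request["respond_by"])  # 2025-03-10
```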

How long should the AI policy be?

One page. Two pages maximum. A policy that cannot be read in under 5 minutes will not be read. Use headers, bullet points, and plain language. Reserve the legal detail for a separate addendum if needed for compliance purposes.

How often should the policy be reviewed?

Annually at minimum. The AI tool landscape changes rapidly: approved tools get acquired, DPAs change, new tools enter the market. Schedule an annual review in Q1 each year. Require employees to re-acknowledge the policy annually.

What is the IP ownership risk of no policy?

In most jurisdictions, AI-generated content has ambiguous IP ownership. If an employee creates a work product using AI tools that the company later wants to patent or protect as a trade secret, the absence of a policy governing AI-assisted creation creates a gap in the chain of ownership documentation.

Research sources

  • IBM Cost of a Data Breach Report
  • IAPP Privacy Tech Vendor Report
  • FTC Business Guidance on AI

Disclaimer: Financial figures and case studies in this article are illustrative, based on representative middle market assumptions, and are not guarantees of outcome. Statistical references are drawn from cited third-party research; individual transaction and operational results vary based on business characteristics, market conditions, and deal structure. This content is for informational purposes only and does not constitute legal, financial, or investment advice. Consult qualified advisors for guidance specific to your situation.


