AI Policy Template for Teams
AI tools are showing up in every workplace, whether leadership has approved them or not. Without a clear policy, each team member makes their own ad hoc decisions about what data to share with AI, which tools to use, and how much to trust the output. This template provides a framework you can adapt for your organization. Copy the section headings and fill in the details that match your business.
Section 1: Purpose and Scope
State why this policy exists and who it applies to.
- Purpose: This policy establishes guidelines for the acceptable use of artificial intelligence tools by [Organization Name] employees, contractors, and vendors. It ensures that AI is used productively while protecting sensitive data, maintaining compliance, and upholding quality standards.
- Scope: This policy applies to all personnel who use AI tools for work-related tasks, including but not limited to: AI chatbots and assistants, AI-powered writing tools, AI code generation tools, AI image generators, and AI features embedded in existing software (such as Microsoft Copilot or Google Gemini).
- Effective date: [Date]
- Policy owner: [Name/Title responsible for maintaining and updating this policy]
- Review schedule: This policy will be reviewed and updated at least every six months due to the rapidly evolving nature of AI technology.
Section 2: Approved AI Tools
Specify which tools are authorized and which are not. This prevents shadow AI use and ensures data goes only to vetted platforms.
- Approved tools: List the specific AI tools your organization has reviewed and approved. For each tool, note the approved plan tier (free, pro, enterprise) and any restrictions on use. Example:
  - Microsoft Copilot (via our Microsoft 365 E5 license) - approved for all business use
  - ChatGPT Team - approved for content drafting and research only
  - Claude Pro - approved for document analysis and writing
- Prohibited tools: Any AI tool not on the approved list requires written approval from [IT Manager/CISO] before use. Free tiers of consumer AI products are not approved for business use due to data training concerns.
- Evaluation process: To request approval for a new AI tool, submit a request to [contact] that includes: the tool name, intended use case, data types that will be processed, the vendor's privacy policy and data handling practices, and the cost.
Section 3: Data Classification Rules
This is the most critical section, especially for healthcare organizations. Define exactly what data can and cannot be entered into AI tools.
- Never enter into any AI tool:
- Protected Health Information (PHI) - patient names, dates of birth, medical record numbers, diagnoses, treatment information, insurance information, or any of the 18 HIPAA identifiers
- Social Security numbers
- Financial account numbers (credit cards, bank accounts)
- Passwords, API keys, or access credentials
- Attorney-client privileged communications
- Trade secrets or proprietary formulas
- May enter with caution (using approved enterprise tools only):
- De-identified data that has been stripped of all 18 HIPAA identifiers
- Internal policies and procedures (non-confidential)
- General business communications
- Publicly available information
- Freely usable with any approved tool:
- General knowledge questions
- Writing assistance with non-sensitive content
- Brainstorming and ideation
- Public-facing content drafting
- Learning and professional development
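The "never enter" rules above can be partially backed up with a lightweight pre-submission scan. Below is a minimal sketch in Python; the pattern names and regexes are illustrative assumptions, not an exhaustive detector (most PHI, such as names and diagnoses, follows no fixed format), so this supplements rather than replaces human review:

```python
import re

# Illustrative patterns only (assumptions, not a complete PHI detector).
# They catch obvious fixed formats; free-text PHI will slip through.
SENSITIVE_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible API key": re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_for_sensitive_data(text: str) -> list[str]:
    """Return warning labels for text about to be pasted into an AI tool."""
    warnings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            warnings.append(label)
    return warnings

if __name__ == "__main__":
    draft = "Bill card 4111 1111 1111 1111 before Friday."
    for w in scan_for_sensitive_data(draft):
        print(f"WARNING: {w} detected - do not paste into an AI tool")
```

A check like this could run in a clipboard utility or a browser extension before text reaches an approved tool; a clean scan still does not make text safe to submit.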
Section 4: Quality and Review Requirements
AI output requires human review before it is used. Define the review expectations.
- All AI-generated content must be reviewed by a qualified person before use. AI can produce plausible-sounding but incorrect information. The person using the output is responsible for its accuracy.
- Fact-check claims. If AI generates statistics, legal references, or medical information, verify them against authoritative sources before including them in any business document.
- Review for bias. AI can reflect biases present in its training data. Review AI output for inappropriate assumptions, stereotyping, or discriminatory language.
- Do not use AI for final clinical decisions. AI may assist with research or drafting, but clinical decisions must be made by licensed professionals using their professional judgment.
- Legal and compliance review. Any AI-generated content that will be used in contracts, compliance documentation, or regulatory filings must be reviewed by the appropriate legal or compliance professional.
Section 5: Attribution and Transparency
Define when and how AI use should be disclosed.
- Internal documents: Note when AI was used to draft or substantially edit internal documents. A simple footnote ("Drafted with AI assistance") is sufficient.
- External communications: [Organization Name] will determine on a case-by-case basis whether AI use needs to be disclosed in external communications. When in doubt, disclose.
- Client-facing work: If AI is used to generate deliverables for clients, discuss with your manager whether disclosure is appropriate or required.
- Do not represent AI output as original human work in contexts where that distinction matters (academic submissions, expert opinions, sworn statements).
Section 6: Monitoring and Compliance
- Usage logging: The organization reserves the right to monitor AI tool usage through enterprise admin consoles to ensure compliance with this policy.
- Incident reporting: If you accidentally enter sensitive data into an AI tool, report it immediately to [IT Manager/Privacy Officer] at [contact information]. Prompt reporting allows us to assess the risk and take appropriate action.
- Policy violations: Violations of this policy will be handled through the standard disciplinary process outlined in the employee handbook. Violations involving PHI may also trigger HIPAA breach assessment procedures.
- Training requirement: All employees must complete AI acceptable use training before using AI tools for work. Training will be refreshed annually or when significant policy changes occur.
Section 7: Acknowledgment
Include a signature block for employees to confirm they have read, understood, and agree to follow this policy. Keep signed copies on file.
Getting Started
You do not need to implement every section at once. Start with the data classification rules (Section 3) and approved tools list (Section 2). These two sections address the highest risks. Build out the remaining sections as your organization's AI use matures.
Need help developing your AI policy?
We help teams create practical AI policies and provide the training to back them up.
Book a Session