Purpose & Scope
This policy governs the use of artificial intelligence tools and systems by all employees, contractors, and vendors operating on behalf of the organization. It establishes minimum standards for responsible AI use, defines approval requirements, and protects employees, customers, and the organization from foreseeable AI-related harms.
This policy applies to all AI tools, whether procured by the organization, accessed through personal accounts for work purposes, or embedded in existing software platforms.
AI tools augment human judgment. No AI system may make final decisions about employment, compensation, credit, legal matters, or individual assessment without a documented human review step.
Approved Tools
Employees may use only AI tools that appear on the organization's approved tools registry. Tools are classified by use tier. Using unapproved tools for work tasks is a policy violation.
| Tool | Category | Approved Use Cases | Status |
|---|---|---|---|
| Microsoft Copilot (M365) | Productivity / Writing | Drafting, summarizing internal docs, meeting notes, email assistance | Approved |
| GitHub Copilot | Code Assistance | Code suggestions, documentation, test generation for non-sensitive repos | Approved |
| Claude (Anthropic) | Research / Analysis | Research synthesis, policy drafting, non-sensitive analysis tasks | Approved |
| ChatGPT (OpenAI) | General LLM | Non-sensitive drafting and research; no customer data, no PII | Restricted |
| Grammarly | Writing Assistant | Grammar and tone assistance for external communications; no confidential content | Restricted |
| Unapproved LLMs | Any | Not permitted for any work purpose until intake and approval process is completed | Prohibited |
The AI Governance Lead maintains the approved tools registry. Employees may request new tool reviews through the procurement approval flow (Section 5).
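For teams that want to automate compliance checks, the registry lookup can be sketched as follows. The registry contents mirror the table above, but the dictionary, key names, and `check_tool_status` function are illustrative assumptions, not an actual organizational system.

```python
# Hypothetical sketch of an approved-tools registry check.
# Statuses mirror the policy table; the structure itself is an assumption.
APPROVED_TOOLS_REGISTRY = {
    "microsoft-copilot-m365": "approved",
    "github-copilot": "approved",
    "claude": "approved",
    "chatgpt": "restricted",
    "grammarly": "restricted",
}

def check_tool_status(tool_name: str) -> str:
    """Return the registry status for a tool; unknown tools are prohibited."""
    return APPROVED_TOOLS_REGISTRY.get(tool_name.lower(), "prohibited")

print(check_tool_status("ChatGPT"))     # restricted
print(check_tool_status("random-llm"))  # prohibited
```

Defaulting unknown tools to "prohibited" matches the registry rule: anything not on the list is not permitted until it completes intake and approval.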
Sensitive Data Rules
The following data categories must never be entered into any AI tool, approved or otherwise, unless the tool has been specifically reviewed and approved for that data type with appropriate contractual protections in place.
| Data Category | Examples | Rule |
|---|---|---|
| Personally Identifiable Information (PII) | Names, SSNs, DOB, addresses, employee IDs | Prohibited in all AI tools unless tool is specifically approved for PII processing with DPA in place |
| Customer Data | Account details, transaction records, contact information | Prohibited. Customer data may not be used as AI input without explicit contractual authorization |
| Financial Data | Revenue figures, forecasts, M&A details, unreported earnings | Prohibited in external AI tools. Internal approved tools only with Finance approval |
| Legal Matter Details | Litigation, regulatory inquiries, privileged communications | Prohibited. Legal holds and attorney-client privilege considerations apply. No AI processing without Legal review |
| Employee Performance Data | Reviews, compensation, disciplinary records, medical | Prohibited. HR owns this data classification. No AI processing of employee records without CHRO approval |
| Proprietary Source Code | Unreleased product code, security configurations | Engineering may use approved code assistants only for non-sensitive repos. Core IP requires CTO approval |
If you are unsure whether data is appropriate to enter into an AI tool, do not enter it. Submit a question to the AI Governance Lead before proceeding. Err on the side of caution.
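As an illustration of the "when in doubt, don't" rule, a lightweight pre-submission screen could flag obvious sensitive markers before text reaches an AI tool. This is a minimal sketch under stated assumptions: the two regex patterns catch only easy cases (SSN-shaped numbers and email addresses), and a real control would rely on a proper data loss prevention service, not this code.

```python
import re

# Illustrative patterns only; a production control would use a DLP service.
SENSITIVE_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

flag_sensitive("SSN is 123-45-6789, contact jane@example.com")
# -> ["possible SSN", "email address"]
```

A screen like this can only block clear misses; it cannot confirm that data is safe to submit, which is why the policy routes uncertain cases to the AI Governance Lead.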
Human Review Standards
AI-generated outputs must not be used as final work product in the following contexts without documented human review. "Human review" means a qualified person has read, assessed, and taken responsibility for the output before it is acted upon or distributed.
- Hiring and employment decisions: AI-assisted screening, scoring, or assessment tools may not produce final hiring recommendations without recruiter and manager review
- Customer-facing communications: AI-drafted external communications must be reviewed by the responsible team member before sending
- Legal and compliance documents: AI-drafted contracts, policies, or compliance filings require Legal review before use
- Financial reporting: AI-generated data summaries or analyses used in reporting require Finance review and sign-off
- Medical or safety guidance: Any AI output that could affect physical safety or health requires review by a qualified professional
- Performance evaluations: AI-assisted performance scoring may not appear in an employee's record without manager review and written acknowledgment
- Public statements or press materials: AI-generated content intended for public distribution requires Communications and executive review
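The documented-review requirement above can be modeled as a simple gate: an AI output is releasable only when a qualified person has approved it on the record. The `ReviewRecord` schema and `release_output` helper below are hypothetical illustrations, not a prescribed system.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record of a documented human review; field names are assumptions.
@dataclass
class ReviewRecord:
    output_id: str
    reviewer: str
    approved: bool
    reviewed_at: datetime

def release_output(output_id: str, review: "ReviewRecord | None") -> bool:
    """An AI output may be released only with an approving review on file."""
    return (review is not None
            and review.approved
            and review.output_id == output_id)

rec = ReviewRecord("draft-001", "j.smith", True, datetime(2025, 1, 15))
release_output("draft-001", rec)   # True: approved review on file
release_output("draft-002", rec)   # False: no review for this output
```

The point of the gate is the default: absent a matching, approving review record, the output does not go out.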
Incident Response
An AI incident is any event in which an AI tool produces output that causes or risks causing harm to a person, the organization, or a third party. This includes: harmful content, discriminatory output, data exposure, security compromise, or significant factual error acted upon in a business decision.
| Step | Action | Owner | Timeline |
|---|---|---|---|
| 1 | Stop use of the AI tool involved. Preserve the input and output that caused the incident. | Employee who identified incident | Immediately |
| 2 | Report to direct manager and submit incident report via the AI incident form (linked in the intranet). | Employee + Manager | Within 2 hours |
| 3 | AI Governance Lead and CISO assess severity. Classify as Low / Medium / High / Critical. | AI Governance Lead, CISO | Within 4 hours |
| 4 | For Medium and above: suspend tool organization-wide pending review. Notify Legal if data exposure or regulatory risk is present. | CISO, Legal | Within 8 hours |
| 5 | Conduct root cause analysis. Document findings and remediation steps. Update tool registry and policy as needed. | AI Governance Lead | Within 5 business days |
| 6 | Brief executive leadership on Critical incidents. Board notification for incidents with legal, regulatory, or reputational exposure. | CISO, General Counsel | Within 24 hours of Critical classification |
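The escalation steps in the table can be summarized as a severity-to-obligations mapping. The severity labels and the "Medium and above" / "Critical" thresholds come from this policy; the dictionary and function are a hypothetical sketch, not an incident-management system.

```python
# Obligations per severity, mirroring steps 4 and 6 of the table above.
RESPONSE_OBLIGATIONS = {
    "low":      {"suspend_tool": False, "brief_executives": False},
    "medium":   {"suspend_tool": True,  "brief_executives": False},
    "high":     {"suspend_tool": True,  "brief_executives": False},
    "critical": {"suspend_tool": True,  "brief_executives": True},
}

def escalation_actions(severity: str) -> dict:
    """Return the escalation obligations for a classified severity."""
    key = severity.lower()
    if key not in RESPONSE_OBLIGATIONS:
        raise ValueError(f"Unknown severity: {severity}")
    return RESPONSE_OBLIGATIONS[key]

escalation_actions("Medium")   # suspend_tool True, brief_executives False
```

Rejecting unknown severities forces the classification step (step 3) to happen before any automated escalation logic runs.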
Procurement Approval Flow
All AI tools must complete this approval process before organizational use. No exceptions. Business units may not use AI tools acquired through personal or departmental spend to circumvent this process.
Standard review timeline is 10 business days. High-risk tools may require up to 30 days. Expedited reviews require VP-level sponsorship and are not guaranteed.
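For teams tracking review deadlines, the 10-business-day standard can be computed as below. This is a hypothetical helper that skips weekends only; organizational holidays are ignored for simplicity and would need to be added in practice.

```python
from datetime import date, timedelta

def review_deadline(submitted: date, business_days: int) -> date:
    """Return the date business_days working days after submission.

    Assumption: Mon-Fri are working days; holidays are not handled.
    """
    current = submitted
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# Standard review submitted Monday 2024-01-01 is due Monday 2024-01-15.
review_deadline(date(2024, 1, 1), 10)
```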